url (stringlengths 62-66) | repository_url (stringclasses 1 value) | labels_url (stringlengths 76-80) | comments_url (stringlengths 71-75) | events_url (stringlengths 69-73) | html_url (stringlengths 50-56) | id (int64 377M-2.15B) | node_id (stringlengths 18-32) | number (int64 1-29.2k) | title (stringlengths 1-487) | user (dict) | labels (list) | state (stringclasses 2 values) | locked (bool, 2 classes) | assignee (dict) | assignees (list) | comments (list) | created_at (int64 1.54k-1.71k) | updated_at (int64 1.54k-1.71k) | closed_at (int64 1.54k-1.71k ⌀) | author_association (stringclasses 4 values) | active_lock_reason (stringclasses 2 values) | body (stringlengths 0-234k ⌀) | reactions (dict) | timeline_url (stringlengths 71-75) | state_reason (stringclasses 3 values) | draft (bool, 2 classes) | pull_request (dict)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/23775
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23775/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23775/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23775/events
|
https://github.com/huggingface/transformers/pull/23775
| 1,726,555,174 |
PR_kwDOCUB6oc5RYk0B
| 23,775 |
expose safe_serialization argument in the pipeline API
|
{
"login": "yessenzhar",
"id": 8552242,
"node_id": "MDQ6VXNlcjg1NTIyNDI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8552242?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yessenzhar",
"html_url": "https://github.com/yessenzhar",
"followers_url": "https://api.github.com/users/yessenzhar/followers",
"following_url": "https://api.github.com/users/yessenzhar/following{/other_user}",
"gists_url": "https://api.github.com/users/yessenzhar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yessenzhar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yessenzhar/subscriptions",
"organizations_url": "https://api.github.com/users/yessenzhar/orgs",
"repos_url": "https://api.github.com/users/yessenzhar/repos",
"events_url": "https://api.github.com/users/yessenzhar/events{/privacy}",
"received_events_url": "https://api.github.com/users/yessenzhar/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,685 | 1,685 | 1,685 |
CONTRIBUTOR
| null |
# What does this PR do?
Expose the `safe_serialization` argument of `PreTrainedModel` and `TFPreTrainedModel` in the `save_pretrained` method of the pipeline API.
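A minimal usage sketch of the exposed argument (assuming it is forwarded to the underlying model exactly as described; the checkpoint name and output path are just examples, and `safetensors` must be installed):
```python
from transformers import pipeline

# Example checkpoint only; any pipeline-compatible model would do.
pipe = pipeline("text-generation", model="sshleifer/tiny-gpt2")
# Forwarded to the model's save_pretrained, writing safetensors weights
# instead of a pickle-based pytorch_model.bin.
pipe.save_pretrained("./saved_pipeline", safe_serialization=True)
```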
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@Narsil
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23775/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23775/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23775",
"html_url": "https://github.com/huggingface/transformers/pull/23775",
"diff_url": "https://github.com/huggingface/transformers/pull/23775.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23775.patch",
"merged_at": 1685978399000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23774
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23774/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23774/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23774/events
|
https://github.com/huggingface/transformers/pull/23774
| 1,726,483,600 |
PR_kwDOCUB6oc5RYVJ-
| 23,774 |
Fix RWKV backward on GPU
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,685 | 1,685 | 1,685 |
COLLABORATOR
| null |
# What does this PR do?
Fixes the backward pass for RWKV on GPU. The backward function was not adapted to the revamp of the forward pass, my bad.
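A minimal smoke test one might run to confirm the fix (the checkpoint name is an assumption, not part of this PR; requires a CUDA device):
```python
import torch
from transformers import AutoTokenizer, RwkvForCausalLM

# Assumed small RWKV checkpoint; any RWKV model on the Hub would do.
model = RwkvForCausalLM.from_pretrained("RWKV/rwkv-4-169m-pile").to("cuda")
tokenizer = AutoTokenizer.from_pretrained("RWKV/rwkv-4-169m-pile")

input_ids = tokenizer("Hello there", return_tensors="pt").input_ids.to("cuda")
loss = model(input_ids, labels=input_ids).loss
loss.backward()  # the GPU backward pass this PR repairs
print(any(p.grad is not None for p in model.parameters() if p.requires_grad))
```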
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23774/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23774/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23774",
"html_url": "https://github.com/huggingface/transformers/pull/23774",
"diff_url": "https://github.com/huggingface/transformers/pull/23774.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23774.patch",
"merged_at": 1685104397000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23773
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23773/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23773/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23773/events
|
https://github.com/huggingface/transformers/issues/23773
| 1,726,395,624 |
I_kwDOCUB6oc5m5rTo
| 23,773 |
Implement DINO V2
|
{
"login": "Lime-Cakes",
"id": 91322985,
"node_id": "MDQ6VXNlcjkxMzIyOTg1",
"avatar_url": "https://avatars.githubusercontent.com/u/91322985?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Lime-Cakes",
"html_url": "https://github.com/Lime-Cakes",
"followers_url": "https://api.github.com/users/Lime-Cakes/followers",
"following_url": "https://api.github.com/users/Lime-Cakes/following{/other_user}",
"gists_url": "https://api.github.com/users/Lime-Cakes/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Lime-Cakes/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Lime-Cakes/subscriptions",
"organizations_url": "https://api.github.com/users/Lime-Cakes/orgs",
"repos_url": "https://api.github.com/users/Lime-Cakes/repos",
"events_url": "https://api.github.com/users/Lime-Cakes/events{/privacy}",
"received_events_url": "https://api.github.com/users/Lime-Cakes/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] |
closed
| false | null |
[] |
[
"Same as issue as mentioned at #23739 ",
"> Same as issue as mentioned at #23739\r\n\r\nOops, I didn't notice. You want to port the weight/code over? Will likely have to add a small layer or two to transformer library and write a weight convert script just like https://github.com/huggingface/transformers/blob/main/src/transformers/models/vit/convert_dino_to_pytorch.py",
"> > Same as issue as mentioned at #23739\n> \n> \n> \n> Oops, I didn't notice. You want to port the weight/code over? Will likely have to add a small layer or two to transformer library and write a weight convert script just like https://github.com/huggingface/transformers/blob/main/src/transformers/models/vit/convert_dino_to_pytorch.py\n\nIf you don't mind I'd like to take this one. Also, thanks for the tips I'll take a look at the reference you mentioned ",
"> > > Same as issue as mentioned at #23739\r\n> > \r\n> > \r\n> > Oops, I didn't notice. You want to port the weight/code over? Will likely have to add a small layer or two to transformer library and write a weight convert script just like https://github.com/huggingface/transformers/blob/main/src/transformers/models/vit/convert_dino_to_pytorch.py\r\n> \r\n> If you don't mind I'd like to take this one. Also, thanks for the tips I'll take a look at the reference you mentioned\r\n\r\nSure, please do. I look forward to it! If there's any new layer implemented, will you also add the corresponding flax implementation for those new layer?",
"Added [here](https://github.com/huggingface/transformers/pull/24016)! Although on top of it, today Meta released some additional dinov2 models for semantic image segmentation and depth estimation: https://twitter.com/MetaAI/status/1697233915331879329, and it looks like currently, transformers only supports [feature extraction](https://github.com/huggingface/transformers/blob/9c5acca0028b550e1328ba7e2f16418fe0a0c634/src/transformers/models/dinov2/modeling_dinov2.py#L587) and [image classification](https://github.com/huggingface/transformers/blob/9c5acca0028b550e1328ba7e2f16418fe0a0c634/src/transformers/models/dinov2/modeling_dinov2.py#L676).\r\n\r\ncc @NielsRogge @fxmarty\r\n\r\n(should we close this issue and open a new one?)\r\n",
"Seems like a new issue should be open. This one should be close since DINOv2 PR is merged. I just forgot."
] | 1,685 | 1,693 | 1,693 |
NONE
| null |
### Model description
Code and model is available here: https://github.com/facebookresearch/dinov2
Full paper here: https://arxiv.org/abs/2304.07193
The implementation seems fairly simple. Most layers are already implemented within the transformers library (it's just a ViT). There are some changes compared to DINO (which is implemented already), such as SwiGLU and LayerScale. According to #20403, SwiGLU is already implemented, though the original code uses xformers's SwiGLU.
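For reference, minimal PyTorch sketches of the two blocks mentioned above; these are illustrative only, not the actual DINOv2 or transformers implementations, and the init value is an assumption:
```python
import torch
import torch.nn as nn

class SwiGLU(nn.Module):
    # Feed-forward block using the SwiGLU activation: SiLU(x W1) * (x W2), then a projection.
    def __init__(self, dim, hidden_dim):
        super().__init__()
        self.w1 = nn.Linear(dim, hidden_dim)
        self.w2 = nn.Linear(dim, hidden_dim)
        self.w3 = nn.Linear(hidden_dim, dim)

    def forward(self, x):
        return self.w3(nn.functional.silu(self.w1(x)) * self.w2(x))

class LayerScale(nn.Module):
    # Learnable per-channel scaling applied to a residual branch.
    def __init__(self, dim, init_value=1e-5):
        super().__init__()
        self.gamma = nn.Parameter(init_value * torch.ones(dim))

    def forward(self, x):
        return x * self.gamma
```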
DINO V2 also has a different license, as listed here: https://github.com/facebookresearch/dinov2/blob/main/LICENSE
It is NonCommercial.
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
_No response_
If there's no issue with license, I can make a PR for the model.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23773/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23773/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23768
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23768/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23768/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23768/events
|
https://github.com/huggingface/transformers/issues/23768
| 1,726,277,729 |
I_kwDOCUB6oc5m5Ohh
| 23,768 |
use_fast=False when loading OPT's tokenizer?
|
{
"login": "jiangwangyi",
"id": 39762734,
"node_id": "MDQ6VXNlcjM5NzYyNzM0",
"avatar_url": "https://avatars.githubusercontent.com/u/39762734?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jiangwangyi",
"html_url": "https://github.com/jiangwangyi",
"followers_url": "https://api.github.com/users/jiangwangyi/followers",
"following_url": "https://api.github.com/users/jiangwangyi/following{/other_user}",
"gists_url": "https://api.github.com/users/jiangwangyi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jiangwangyi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jiangwangyi/subscriptions",
"organizations_url": "https://api.github.com/users/jiangwangyi/orgs",
"repos_url": "https://api.github.com/users/jiangwangyi/repos",
"events_url": "https://api.github.com/users/jiangwangyi/events{/privacy}",
"received_events_url": "https://api.github.com/users/jiangwangyi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false |
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
] |
[
"cc @ArthurZucker ",
"Hey! Thanks for reporting. Pretty sure the doc is wrong, but `use_fast=True` use to not be supported for OPT, which could explain this. "
] | 1,685 | 1,685 | 1,685 |
CONTRIBUTOR
| null |
### System Info
platform==Ubuntu 18.04.01
python==3.10
transformers==4.29.1
### Who can help?
@sgugger @stevhliu @MK
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
The OPT model [documentation](https://huggingface.co/docs/transformers/model_doc/opt) states (in Tips) that it is required to pass `use_fast=False` when loading an OPT tokenizer, as the OPT tokenizer adds `</s>` to the beginning of every prompt.
I ran a quick test:
```python
>>> import transformers
>>> tokenizer = transformers.AutoTokenizer.from_pretrained("facebook/opt-1.3b", use_fast=False)
>>> tokenizer_fast = transformers.AutoTokenizer.from_pretrained("facebook/opt-1.3b", use_fast=True)
>>> text = "I like you."
>>> tokenizer(text)
>>> {'input_ids': [2, 100, 101, 47, 4], 'attention_mask': [1, 1, 1, 1, 1]}
>>> tokenizer_fast(text)
>>> {'input_ids': [2, 100, 101, 47, 4], 'attention_mask': [1, 1, 1, 1, 1]}
```
`</s>` is correctly added and no difference is observed.
### Expected behavior
Are the tips wrong, or is `use_fast=False` actually required in some other cases?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23768/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23768/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23767
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23767/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23767/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23767/events
|
https://github.com/huggingface/transformers/pull/23767
| 1,726,243,656 |
PR_kwDOCUB6oc5RXguF
| 23,767 |
Bump tornado from 6.0.4 to 6.3.2 in /examples/research_projects/visual_bert
|
{
"login": "dependabot[bot]",
"id": 49699333,
"node_id": "MDM6Qm90NDk2OTkzMzM=",
"avatar_url": "https://avatars.githubusercontent.com/in/29110?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dependabot%5Bbot%5D",
"html_url": "https://github.com/apps/dependabot",
"followers_url": "https://api.github.com/users/dependabot%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/dependabot%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/dependabot%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dependabot%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dependabot%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/dependabot%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/dependabot%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/dependabot%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/dependabot%5Bbot%5D/received_events",
"type": "Bot",
"site_admin": false
}
|
[
{
"id": 1905493434,
"node_id": "MDU6TGFiZWwxOTA1NDkzNDM0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/dependencies",
"name": "dependencies",
"color": "0366d6",
"default": false,
"description": "Pull requests that update a dependency file"
}
] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,685 | 1,685 | 1,685 |
CONTRIBUTOR
| null |
Bumps [tornado](https://github.com/tornadoweb/tornado) from 6.0.4 to 6.3.2.
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a href="https://github.com/tornadoweb/tornado/blob/master/docs/releases.rst">tornado's changelog</a>.</em></p>
<blockquote>
<h1>Release notes</h1>
<p>.. toctree::
:maxdepth: 2</p>
<p>releases/v6.3.2
releases/v6.3.1
releases/v6.3.0
releases/v6.2.0
releases/v6.1.0
releases/v6.0.4
releases/v6.0.3
releases/v6.0.2
releases/v6.0.1
releases/v6.0.0
releases/v5.1.1
releases/v5.1.0
releases/v5.0.2
releases/v5.0.1
releases/v5.0.0
releases/v4.5.3
releases/v4.5.2
releases/v4.5.1
releases/v4.5.0
releases/v4.4.3
releases/v4.4.2
releases/v4.4.1
releases/v4.4.0
releases/v4.3.0
releases/v4.2.1
releases/v4.2.0
releases/v4.1.0
releases/v4.0.2
releases/v4.0.1
releases/v4.0.0
releases/v3.2.2
releases/v3.2.1
releases/v3.2.0
releases/v3.1.1
releases/v3.1.0
releases/v3.0.2
releases/v3.0.1
releases/v3.0.0
releases/v2.4.1
releases/v2.4.0
releases/v2.3.0
releases/v2.2.1
releases/v2.2.0
releases/v2.1.1</p>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/tornadoweb/tornado/commit/34f5c1cf2696afec5532ca9e870ba32cbc7fee27"><code>34f5c1c</code></a> Version 6.3.2</li>
<li><a href="https://github.com/tornadoweb/tornado/commit/32ad07c54e607839273b4e1819c347f5c8976b2f"><code>32ad07c</code></a> web: Fix an open redirect in StaticFileHandler</li>
<li><a href="https://github.com/tornadoweb/tornado/commit/e0fa53ee96db720dc7800d0248c39a4ffb8911e9"><code>e0fa53e</code></a> Merge pull request <a href="https://redirect.github.com/tornadoweb/tornado/issues/3257">#3257</a> from bdarnell/build-workflow-wstest-warning</li>
<li><a href="https://github.com/tornadoweb/tornado/commit/f5a1d5c7e235ad8860a4c2c5f259a43692bcbaab"><code>f5a1d5c</code></a> ci: Only run pypi actions from the main repo</li>
<li><a href="https://github.com/tornadoweb/tornado/commit/1849ef6c48415ef8f5fecbd47d9f68225588507c"><code>1849ef6</code></a> test: Close a websocket client that causes occasional test failures</li>
<li><a href="https://github.com/tornadoweb/tornado/commit/fcb09eba4bd45c2ebfb6356a38acdb3b4450c0d8"><code>fcb09eb</code></a> Merge pull request <a href="https://redirect.github.com/tornadoweb/tornado/issues/3256">#3256</a> from bdarnell/build-workflow-qemu</li>
<li><a href="https://github.com/tornadoweb/tornado/commit/c3d50f41a29cda5f76031c60cf7902b175b79479"><code>c3d50f4</code></a> ci: Update setup-qemu-action version</li>
<li><a href="https://github.com/tornadoweb/tornado/commit/419838b9bcc51445241630def0478f1fbaa61b4b"><code>419838b</code></a> Merge pull request <a href="https://redirect.github.com/tornadoweb/tornado/issues/3255">#3255</a> from bdarnell/bump-version-6.3.1</li>
<li><a href="https://github.com/tornadoweb/tornado/commit/cd5b9fcf4ac16c3f5480b3d8ae81b4103c0e7549"><code>cd5b9fc</code></a> Bump version to 6.3.1</li>
<li><a href="https://github.com/tornadoweb/tornado/commit/245334401570a40ba01813d9adb14976c50d77dd"><code>2453344</code></a> Merge pull request <a href="https://redirect.github.com/tornadoweb/tornado/issues/3254">#3254</a> from bdarnell/fix-set-cookie-case</li>
<li>Additional commits viewable in <a href="https://github.com/tornadoweb/tornado/compare/v6.0.4...v6.3.2">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts).
</details>
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23767/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23767/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23767",
"html_url": "https://github.com/huggingface/transformers/pull/23767",
"diff_url": "https://github.com/huggingface/transformers/pull/23767.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23767.patch",
"merged_at": 1685045773000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23766
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23766/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23766/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23766/events
|
https://github.com/huggingface/transformers/pull/23766
| 1,726,240,402 |
PR_kwDOCUB6oc5RXf_r
| 23,766 |
Bump tornado from 6.0.4 to 6.3.2 in /examples/research_projects/lxmert
|
{
"login": "dependabot[bot]",
"id": 49699333,
"node_id": "MDM6Qm90NDk2OTkzMzM=",
"avatar_url": "https://avatars.githubusercontent.com/in/29110?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dependabot%5Bbot%5D",
"html_url": "https://github.com/apps/dependabot",
"followers_url": "https://api.github.com/users/dependabot%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/dependabot%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/dependabot%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dependabot%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dependabot%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/dependabot%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/dependabot%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/dependabot%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/dependabot%5Bbot%5D/received_events",
"type": "Bot",
"site_admin": false
}
|
[
{
"id": 1905493434,
"node_id": "MDU6TGFiZWwxOTA1NDkzNDM0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/dependencies",
"name": "dependencies",
"color": "0366d6",
"default": false,
"description": "Pull requests that update a dependency file"
}
] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,685 | 1,685 | 1,685 |
CONTRIBUTOR
| null |
Bumps [tornado](https://github.com/tornadoweb/tornado) from 6.0.4 to 6.3.2.
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a href="https://github.com/tornadoweb/tornado/blob/master/docs/releases.rst">tornado's changelog</a>.</em></p>
<blockquote>
<h1>Release notes</h1>
<p>.. toctree::
:maxdepth: 2</p>
<p>releases/v6.3.2
releases/v6.3.1
releases/v6.3.0
releases/v6.2.0
releases/v6.1.0
releases/v6.0.4
releases/v6.0.3
releases/v6.0.2
releases/v6.0.1
releases/v6.0.0
releases/v5.1.1
releases/v5.1.0
releases/v5.0.2
releases/v5.0.1
releases/v5.0.0
releases/v4.5.3
releases/v4.5.2
releases/v4.5.1
releases/v4.5.0
releases/v4.4.3
releases/v4.4.2
releases/v4.4.1
releases/v4.4.0
releases/v4.3.0
releases/v4.2.1
releases/v4.2.0
releases/v4.1.0
releases/v4.0.2
releases/v4.0.1
releases/v4.0.0
releases/v3.2.2
releases/v3.2.1
releases/v3.2.0
releases/v3.1.1
releases/v3.1.0
releases/v3.0.2
releases/v3.0.1
releases/v3.0.0
releases/v2.4.1
releases/v2.4.0
releases/v2.3.0
releases/v2.2.1
releases/v2.2.0
releases/v2.1.1</p>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/tornadoweb/tornado/commit/34f5c1cf2696afec5532ca9e870ba32cbc7fee27"><code>34f5c1c</code></a> Version 6.3.2</li>
<li><a href="https://github.com/tornadoweb/tornado/commit/32ad07c54e607839273b4e1819c347f5c8976b2f"><code>32ad07c</code></a> web: Fix an open redirect in StaticFileHandler</li>
<li><a href="https://github.com/tornadoweb/tornado/commit/e0fa53ee96db720dc7800d0248c39a4ffb8911e9"><code>e0fa53e</code></a> Merge pull request <a href="https://redirect.github.com/tornadoweb/tornado/issues/3257">#3257</a> from bdarnell/build-workflow-wstest-warning</li>
<li><a href="https://github.com/tornadoweb/tornado/commit/f5a1d5c7e235ad8860a4c2c5f259a43692bcbaab"><code>f5a1d5c</code></a> ci: Only run pypi actions from the main repo</li>
<li><a href="https://github.com/tornadoweb/tornado/commit/1849ef6c48415ef8f5fecbd47d9f68225588507c"><code>1849ef6</code></a> test: Close a websocket client that causes occasional test failures</li>
<li><a href="https://github.com/tornadoweb/tornado/commit/fcb09eba4bd45c2ebfb6356a38acdb3b4450c0d8"><code>fcb09eb</code></a> Merge pull request <a href="https://redirect.github.com/tornadoweb/tornado/issues/3256">#3256</a> from bdarnell/build-workflow-qemu</li>
<li><a href="https://github.com/tornadoweb/tornado/commit/c3d50f41a29cda5f76031c60cf7902b175b79479"><code>c3d50f4</code></a> ci: Update setup-qemu-action version</li>
<li><a href="https://github.com/tornadoweb/tornado/commit/419838b9bcc51445241630def0478f1fbaa61b4b"><code>419838b</code></a> Merge pull request <a href="https://redirect.github.com/tornadoweb/tornado/issues/3255">#3255</a> from bdarnell/bump-version-6.3.1</li>
<li><a href="https://github.com/tornadoweb/tornado/commit/cd5b9fcf4ac16c3f5480b3d8ae81b4103c0e7549"><code>cd5b9fc</code></a> Bump version to 6.3.1</li>
<li><a href="https://github.com/tornadoweb/tornado/commit/245334401570a40ba01813d9adb14976c50d77dd"><code>2453344</code></a> Merge pull request <a href="https://redirect.github.com/tornadoweb/tornado/issues/3254">#3254</a> from bdarnell/fix-set-cookie-case</li>
<li>Additional commits viewable in <a href="https://github.com/tornadoweb/tornado/compare/v6.0.4...v6.3.2">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts).
</details>
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23766/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23766/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23766",
"html_url": "https://github.com/huggingface/transformers/pull/23766",
"diff_url": "https://github.com/huggingface/transformers/pull/23766.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23766.patch",
"merged_at": 1685045761000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23765
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23765/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23765/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23765/events
|
https://github.com/huggingface/transformers/issues/23765
| 1,726,224,596 |
I_kwDOCUB6oc5m5BjU
| 23,765 |
Multiple different models returning only `<unk>` tokens in text generation
|
{
"login": "serenalotreck",
"id": 41377532,
"node_id": "MDQ6VXNlcjQxMzc3NTMy",
"avatar_url": "https://avatars.githubusercontent.com/u/41377532?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/serenalotreck",
"html_url": "https://github.com/serenalotreck",
"followers_url": "https://api.github.com/users/serenalotreck/followers",
"following_url": "https://api.github.com/users/serenalotreck/following{/other_user}",
"gists_url": "https://api.github.com/users/serenalotreck/gists{/gist_id}",
"starred_url": "https://api.github.com/users/serenalotreck/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/serenalotreck/subscriptions",
"organizations_url": "https://api.github.com/users/serenalotreck/orgs",
"repos_url": "https://api.github.com/users/serenalotreck/repos",
"events_url": "https://api.github.com/users/serenalotreck/events{/privacy}",
"received_events_url": "https://api.github.com/users/serenalotreck/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Same issue here. From that kernel revision it seems like @serenalotreck is running on CentOS 7, possibly within an HPC setup. That's the case with me as well. I've tried several methods of loading / merging but not getting anything from the output apart from `<unk>` tokens for quantized models. For reference, all concerned libraries are at their latest versions, including a few attempts with git versions.\r\n\r\nI've requested our sysadmins to update the NVIDIA drivers, but other than that, I'm not sure what to do next.",
"@akhilvaid that's correct about my kernel revision -- I'm so glad to finally have some answer about what's going on, even if it's a frustrating one!\r\n\r\nDo you think rolling back versions could possibly help? I don't have a good sense of how recently the features I'm using were added, so I'm not sure if I'd be able to do that from a, still being able to run the code I have, standpoint, but maybe this was a bug that was recently introduced if all the newest versions don't work?",
"Also want to tag @gante since this is related to `generate`",
"Hey! Thanks for reporting this. \r\nI am not entirely sure about what might be going on here, I would suggest to try running a smaller model without `load_in_8bits` and check if the issue persists. If not, then it might be related to `generate` otherwise, it can be a problem with the instabilities",
"@akhilvaid just wanted to update you that I got the suggestion from someone on the HPC staff to try running CentOS 9 in a Singularity container, so I'm spending some time trying that today in hopes that it works!\r\n\r\n@ArthurZucker How small of a model is small? 😄 ",
"@serenalotreck Unless I'm missing something, containers only really have additional / new libraries installed. That said, I tried the same thing using one of the NGC docker images repurposed into a singularity container with v515 drivers - but the error is persisting.\r\n\r\n@ArthurZucker I can successfully use/generate responses with a 13B parameter Vicuna in 16bit on an A100 80G. 33B or greater don't fit into a single GPU's memory - and quantization leads to the same issues as earlier. GPU memory and compute utilization jumps - but only `<unk>` tokens are generated.",
"@ArthurZucker @akhilvaid I found the same thing -- if I could fit the non-quantized model into memory then it was fine, it's definitely related to the quantization process. However, I only have access to GPUs with 32768MB (~33GB) memory, so I'm even more limited in what I can do without being able to quantize the models.\r\n\r\nDo you have any specific suggestions for what to do to try and get around this issue?",
"Hey @serenalotreck @akhilvaid 👋 \r\n\r\nThis is not a fix per se, as I don't have a similar setup and can't reproduce the issue. Using transformers 4.30, bitsandbytes 0.39.0, pytorch 2.0.0, and **4 bit quantization** on a single RTX 3090, I get\r\n\r\n```\r\n### Response:\r\n\r\n(\"Salmeterol\", \"is a long-acting beta2-adrenergic receptor (beta 2AR) agonist\", \"used clinically to treat asthma\")</s>\r\n```\r\n\r\nThis means that the error is likely related to 8 bit quantization or to your setup. Using 4 bit quantization may solve the issue 🙌 \r\n\r\nLet us know about further developments on your end :)\r\n\r\n______________________________________________\r\n\r\n```py\r\nfrom transformers import AutoModelForCausalLM, AutoTokenizer\r\n\r\n# Load model and tokenizer\r\ncheckpoint = 'digitous/Alpacino30b'\r\nmodel = AutoModelForCausalLM.from_pretrained(checkpoint, device_map='auto', load_in_4bit=True)\r\ntokenizer = AutoTokenizer.from_pretrained(checkpoint)\r\n\r\n# Build prompt\r\nprompt = \"\"\"\r\nBelow is an instruction that describes a task. Write a response that appropriately completes the request.\r\n\r\n### Instruction:\r\nExtract the biological relations from the following text as (Subject, Predicate, Object) triples in the format (\"Subject\", \"predicate\", \"Object\"):\r\n\r\n### Input:\r\nSalmeterol is a long-acting beta2-adrenergic receptor (beta 2AR) agonist used clinically to treat asthma.\r\n\r\n### Response:\r\n\r\n\"\"\"\r\n\r\n# Generate predictions\r\ninputs = tokenizer(prompt, return_tensors='pt')\r\ninputs = inputs.to(0)\r\noutput = model.generate(inputs['input_ids'], max_new_tokens=500)\r\nresponse = tokenizer.decode(output[0].tolist())\r\n\r\nprint(response)\r\n```",
"@gante thank you!\r\n\r\n@akhilvaid Curious to know what happens when you run this code.\r\n\r\nSomething whacky is happening on my end -- the code aborts at trying to load the model (after successfully downloading the shards). When I had `load_in_4bit=True`, it didn't print anything, and when I removed `load_in_4bit=True`, it printed out half of a message:\r\n\r\n```\r\nlerate` to properly deal with them (`pip install --upgrade acc .cuda \r\n```\r\nI'm working in a conda environment so I ran `conda upgrade accelerate` to see if that would help, accelerate was successfully upgraded, but I still got the same weird half-message.\r\n\r\nWhen I change the model to `ausboss/llama-30b-supercot` and include `load_with_4bit`, I get a different part-message:\r\n```\r\nd(init_empty_weights()) \r\n```",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,685 | 1,689 | 1,689 |
NONE
| null |
### System Info
- `transformers` version: 4.29.2
- Platform: Linux-3.10.0-1160.80.1.el7.x86_64-x86_64-with-glibc2.17
- Python version: 3.11.3
- Huggingface_hub version: 0.14.1
- Safetensors version: not installed
- PyTorch version (GPU?): 2.0.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help?
@ArthurZucker and @younesbelkada
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I've tested the following minimal reproducible example for three models: `ausboss/llama-30b-supercot`, `digitous/Alpacino30b`, and `MetaIX/GPT4-X-Alpasta-30b`.
I originally thought that this issue was related to my prompts not being in the correct format for a given model, as referenced in the comments of #23411. However, I've identified the correct prompt formatting for `ausboss/llama-30b-supercot` and `digitous/Alpacino30b` from their model cards. Additionally, while the prompt format is not explicitly stated in the model card for `MetaIX/GPT4-X-Alpasta-30b`, it is also based on the alpaca model, so I would expect the same prompt formatting to work as well.
The example (the only thing I changed for each model was the string assigned to the `checkpoint` variable):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Load model and tokenizer
checkpoint = 'digitous/Alpacino30b'
model = AutoModelForCausalLM.from_pretrained(checkpoint,
torch_dtype=torch.float16, device_map='auto', load_in_8bit=True)
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
# Build prompt
prompt = """
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
Extract the biological relations from the following text as (Subject, Predicate, Object) triples in the format ("Subject", "predicate", "Object"):
### Input:
Salmeterol is a long-acting beta2-adrenergic receptor (beta 2AR) agonist used clinically to treat asthma.
### Response:
"""
# Generate predictions
inputs = tokenizer(prompt, return_tensors='pt')
inputs = inputs.to(0)
output = model.generate(inputs['input_ids'], max_new_tokens=500)
response = tokenizer.decode(output[0].tolist())
print(response)
```
Running this script gives an identical response for all three models:
```
<s>
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
Extract the biological relations from the following text as (Subject, Predicate, Object) triples in the format
("Subject", "predicate", "Object"):
### Input:
Salmeterol is a long-acting beta2-adrenergic receptor (beta 2AR) agonist used clinically to treat asthma.
### Response:
<unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk>
```
I'm immediately suspicious that I'm getting an identical output from multiple models. I can't load these models without `load_in_8bit`, so I can't check whether or not this is related to the quantization of the models. I also tried running this code with a shorter prompt that contained less instruction, in case it was too complex: "Extract the biological relations from the following text:". However, I once again get an identical output.
### Expected behavior
A response that contains normal tokens, and varies between different models at least somewhat.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23765/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23765/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23764
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23764/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23764/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23764/events
|
https://github.com/huggingface/transformers/issues/23764
| 1,726,217,746 |
I_kwDOCUB6oc5m4_4S
| 23,764 |
Whisper `get_prompt_ids` throws error when used with a 'FastTokenizer'
|
{
"login": "connor-henderson",
"id": 78612354,
"node_id": "MDQ6VXNlcjc4NjEyMzU0",
"avatar_url": "https://avatars.githubusercontent.com/u/78612354?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/connor-henderson",
"html_url": "https://github.com/connor-henderson",
"followers_url": "https://api.github.com/users/connor-henderson/followers",
"following_url": "https://api.github.com/users/connor-henderson/following{/other_user}",
"gists_url": "https://api.github.com/users/connor-henderson/gists{/gist_id}",
"starred_url": "https://api.github.com/users/connor-henderson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/connor-henderson/subscriptions",
"organizations_url": "https://api.github.com/users/connor-henderson/orgs",
"repos_url": "https://api.github.com/users/connor-henderson/repos",
"events_url": "https://api.github.com/users/connor-henderson/events{/privacy}",
"received_events_url": "https://api.github.com/users/connor-henderson/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1834056635,
"node_id": "MDU6TGFiZWwxODM0MDU2NjM1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Tokenization",
"name": "Core: Tokenization",
"color": "FF4446",
"default": false,
"description": "Internals of the library; Tokenization."
}
] |
closed
| false | null |
[] |
[
"Related issue #17391 mentions that `add_prefix_space` can only be specified for fast tokenizers upon init, so it seems like just the manual `\" \" + text` replacement for this param would be the appropriate fix.",
"Hey! Thanks for reporting. Indeed I think you can easily fix this for a single model (in the fast tokenizer you could allow the argument to flow), but I do agreed that it is not really expected that the API between fast and slow would be different on that. "
] | 1,685 | 1,685 | 1,685 |
CONTRIBUTOR
| null |
### System Info
- `transformers` version: 4.30.0.dev0
- Platform: macOS-13.0-arm64-arm-64bit
- Python version: 3.9.16
- Huggingface_hub version: 0.12.0
- Safetensors version: 0.2.8
- PyTorch version (GPU?): 1.13.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): 0.5.3 (cpu)
- Jax version: 0.3.6
- JaxLib version: 0.3.5
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@sanchit-gandhi @hollance
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```py
from transformers import WhisperTokenizerFast, WhisperTokenizer, GPT2Tokenizer, GPT2TokenizerFast

slow_tokenizer = WhisperTokenizer.from_pretrained('openai/whisper-tiny')
prompt_ids = slow_tokenizer.get_prompt_ids("Hello, world!", return_tensors="pt")
print('Whisper slow tokenizer succeeded')

try:
    fast_tokenizer = WhisperTokenizerFast.from_pretrained('openai/whisper-tiny')
    prompt_ids = fast_tokenizer.get_prompt_ids("Hello, world!", return_tensors="pt")
except Exception as e:
    print('Whisper fast tokenizer failed - ', e)

# Alternatively, this slow-fast param difference can be seen when tokenizing with a
# pipeline or any model that has a slow tokenizer `prepare_for_tokenization` method
# that checks `add_prefix_space` (GPT2 is old but there are ~20 models this applies to)
tokenizer = GPT2Tokenizer.from_pretrained('gpt2', use_fast=False)
prompt_ids = tokenizer("Hello, world!", add_prefix_space=True)["input_ids"]
print('GPT2 slow tokenizer succeeded')

try:
    tokenizer = GPT2TokenizerFast.from_pretrained('gpt2')
    prompt_ids = tokenizer("Hello, world!", add_prefix_space=True)["input_ids"]
except Exception as e:
    print('GPT2 fast tokenizer failed - ', e)
```
### Expected behavior
Are the slow and fast tokenizers supposed to have the same arg options for tokenizing text? They diverge with the `add_prefix_space` argument; while the slow tokenizer accepts and applies it via the [prepare_for_tokenization](https://github.com/huggingface/transformers/blob/3416bba7c70c358ac17efd3be31e9090135969ab/src/transformers/tokenization_utils.py#L502) method, that same model's fast tokenizer does not, and throws an error. Given that this arg difference appears to be present across all models where `add_prefix_space` can be provided to the slow tokenizer (at a glance, ~20 of them), I'd imagine the answer is no, the arg options aren't supposed to be 1:1.
The fix for the Whisper tokenizer `get_prompt_ids` method is straightforward, as we can just do `" " + text` directly in the method instead of `add_prefix_space=True`, but I wanted to bring up the above in case that argument is actually supposed to be compatible across both slow and fast tokenizers, in which case we would also want to address that.
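For illustration, a caller-side sketch of the `" " + text` idea; this is not the library fix itself, only a demonstration that prepending the space manually works with both tokenizer variants, whereas `add_prefix_space=True` is only accepted by the slow one at call time:
```python
from transformers import WhisperTokenizer, WhisperTokenizerFast

text = "Hello, world!"
slow = WhisperTokenizer.from_pretrained("openai/whisper-tiny")
fast = WhisperTokenizerFast.from_pretrained("openai/whisper-tiny")

# Both calls succeed and should yield the same ids for the space-prefixed text.
print(slow(" " + text, add_special_tokens=False)["input_ids"])
print(fast(" " + text, add_special_tokens=False)["input_ids"])
```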
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23764/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23764/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23763
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23763/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23763/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23763/events
|
https://github.com/huggingface/transformers/issues/23763
| 1,726,194,237 |
I_kwDOCUB6oc5m46I9
| 23,763 |
Trainer do model generation during evaluation loop
|
{
"login": "szxiangjn",
"id": 41177966,
"node_id": "MDQ6VXNlcjQxMTc3OTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/41177966?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/szxiangjn",
"html_url": "https://github.com/szxiangjn",
"followers_url": "https://api.github.com/users/szxiangjn/followers",
"following_url": "https://api.github.com/users/szxiangjn/following{/other_user}",
"gists_url": "https://api.github.com/users/szxiangjn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/szxiangjn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/szxiangjn/subscriptions",
"organizations_url": "https://api.github.com/users/szxiangjn/orgs",
"repos_url": "https://api.github.com/users/szxiangjn/repos",
"events_url": "https://api.github.com/users/szxiangjn/events{/privacy}",
"received_events_url": "https://api.github.com/users/szxiangjn/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"You can write your own subclass of the Trainer, it's not supported and we don't plan on adding it.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,685 | 1,688 | 1,688 |
NONE
| null |
### Feature request
The current Trainer only supports teacher-forcing generation for computing the evaluation loss, but not auto-regressive generation for other metrics. Seq2SeqTrainer supports this, but it seems to only accept encoder-decoder models like T5, not GPT-style (decoder-only) models. Will this feature be added in the future?
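A minimal sketch of the subclassing approach suggested in the comments above; `compute_custom_metric`, the generation settings, and the assumption that the Trainer was built with a tokenizer are placeholders, not an existing API:
```python
import torch
from transformers import Trainer

def compute_custom_metric(texts):
    # Placeholder: plug in whatever generation-based metric is needed.
    return float(len(texts))

class GenerationTrainer(Trainer):
    def evaluate(self, eval_dataset=None, ignore_keys=None, metric_key_prefix="eval"):
        # Usual teacher-forced evaluation (loss, etc.).
        metrics = super().evaluate(eval_dataset, ignore_keys, metric_key_prefix)
        # Extra pass: auto-regressive generation over the eval set.
        dataloader = self.get_eval_dataloader(eval_dataset)
        self.model.eval()
        generations = []
        with torch.no_grad():
            for batch in dataloader:
                batch = self._prepare_inputs(batch)
                out = self.model.generate(batch["input_ids"], max_new_tokens=64)
                generations.extend(self.tokenizer.batch_decode(out, skip_special_tokens=True))
        metrics[f"{metric_key_prefix}_gen_metric"] = compute_custom_metric(generations)
        return metrics
```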
### Motivation
I am training a decoder-only model and want to use model.generate to evaluate it during training.
### Your contribution
I haven't investigated the Trainer code deeply.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23763/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23763/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23762
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23762/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23762/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23762/events
|
https://github.com/huggingface/transformers/issues/23762
| 1,726,119,962 |
I_kwDOCUB6oc5m4oAa
| 23,762 |
Trainer.train() initializing train multiple times for no apparent reason and doubling total optimization steps with LoRA
|
{
"login": "dechantoine",
"id": 56443779,
"node_id": "MDQ6VXNlcjU2NDQzNzc5",
"avatar_url": "https://avatars.githubusercontent.com/u/56443779?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dechantoine",
"html_url": "https://github.com/dechantoine",
"followers_url": "https://api.github.com/users/dechantoine/followers",
"following_url": "https://api.github.com/users/dechantoine/following{/other_user}",
"gists_url": "https://api.github.com/users/dechantoine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dechantoine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dechantoine/subscriptions",
"organizations_url": "https://api.github.com/users/dechantoine/orgs",
"repos_url": "https://api.github.com/users/dechantoine/repos",
"events_url": "https://api.github.com/users/dechantoine/events{/privacy}",
"received_events_url": "https://api.github.com/users/dechantoine/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false |
{
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
}
] |
[
"How are you launching your training?\r\nAlso cc @younesbelkada since peft is involved.",
"The example I provided has been run on a Google Colab with GPU for reproducibility. I also had the same issue on Jupyterlab notebooks.",
"Looks like the misleading behavior comes from the Training argument `auto_find_batch_size`.\r\nWhen ran with `per_device_batch_size=8`, my script throws an OOM error but with `per_device_batch_size=4` everything works like charm. So the last log is accurate since batch_size has been cut in half. I also found the same behavior on my private script where logs loop 4 times.\r\n\r\nhttps://github.com/huggingface/transformers/blob/f67dac97bdc63874f2288546b3fa87e69d2ea1c8/src/transformers/trainer.py#L1693\r\n\r\nI assume at some point the `args.per_device_train_batch_size` is not updated, hence the discrepancy in logs.\r\n\r\nEdit : I take a look at accelerate.utils.find_executable_batch_size and I think the reason why the log are wrong is simply because in` _inner_training_loop` https://github.com/huggingface/transformers/blob/f67dac97bdc63874f2288546b3fa87e69d2ea1c8/src/transformers/trainer.py#L1703 `args.train_batch_size` is used but neither updated. Logs should use `self._train_batch_size`",
"cc @muellerzr then :-)",
"Thanks! https://github.com/huggingface/transformers/pull/23800 will solve this :) ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Closing as this as it seems resolved !"
] | 1,685 | 1,687 | 1,687 |
NONE
| null |
### System Info
- `transformers` version: 4.29.2
- Platform: Linux-5.15.107+-x86_64-with-glibc2.31
- Python version: 3.10.11
- Huggingface_hub version: 0.14.1
- Safetensors version: not installed
- PyTorch version (GPU?): 2.0.1+cu118 (True)
- Tensorflow version (GPU?): 2.12.0 (True)
- Flax version (CPU?/GPU?/TPU?): 0.6.9 (gpu)
- Jax version: 0.4.10
- JaxLib version: 0.4.10
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
+ accelerate-0.19.0-py3-none-any.whl
+ datasets-2.12.0-py3-none-any.whl
+ peft-0.3.0-py3-none-any.whl
### Who can help?
@sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```
from datasets import load_dataset
from transformers import AutoTokenizer, AutoModelForCausalLM, Trainer, TrainingArguments, DataCollatorForLanguageModeling
from peft import get_peft_model, LoraConfig, TaskType
model_name_or_path = "asi/gpt-fr-cased-small"
def preprocess_function(examples):
    return tokenizer(text=examples["review"],
                     truncation=True,
                     padding="max_length",
                     max_length=tokenizer.max_model_input_sizes["gpt2"])
trainset = load_dataset("allocine", split="train").remove_columns("label").select(range(900))
testset = load_dataset("allocine", split="test").remove_columns("label").select(range(900,1000))
tokenizer_name_or_path = "asi/gpt-fr-cased-small"
tokenizer = AutoTokenizer.from_pretrained(tokenizer_name_or_path)
tokenizer.model_max_length = tokenizer.max_model_input_sizes["gpt2"]
if tokenizer.pad_token_id is None:
tokenizer.pad_token_id = tokenizer.eos_token_id
trainset = trainset.map(preprocess_function,
remove_columns=trainset.features.keys(),
num_proc=32)
testset = testset.map(preprocess_function,
remove_columns=testset.features.keys(),
num_proc=32)
peft_config = LoraConfig(
task_type=TaskType.CAUSAL_LM,
inference_mode=False,
r=12,
lora_alpha=32,
lora_dropout=0.15,
fan_in_fan_out=True,
)
model = AutoModelForCausalLM.from_pretrained(model_name_or_path)
lora_model = get_peft_model(model, peft_config)
trainer = Trainer(
model=lora_model,
train_dataset=trainset,
eval_dataset=testset,
data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
args=TrainingArguments(
auto_find_batch_size = True,
fp16=True,
num_train_epochs = 2,
learning_rate = 2e-5,
optim = "adamw_torch",
evaluation_strategy = "steps",
eval_delay = 0,
eval_steps = 10,
eval_accumulation_steps = 1,
logging_strategy = "steps",
logging_first_step = True,
logging_steps=10,
log_level = "info",
save_strategy = "steps",
save_steps = 100,
save_total_limit = 10,
output_dir='outputs',
),
)
trainer.train()
```
### Expected behavior
Hello! The first logs from the Trainer seem accurate to me (`Total optimization steps = Num Epochs * Num examples//Total train batch size`), but right after, the Trainer doubles the total optimization steps for no reason. I also encountered a case where it doubled 4 times!
```
***** Running training *****
Num examples = 900
Num Epochs = 2
Instantaneous batch size per device = 8
Total train batch size (w. parallel, distributed & accumulation) = 8
Gradient Accumulation steps = 1
Total optimization steps = 226
Number of trainable parameters = 442,368
You're using a GPT2TokenizerFast tokenizer. Please note that with a fast tokenizer, using the `__call__` method is faster than using a method to encode the text followed by a call to the `pad` method to get a padded encoding.
***** Running training *****
Num examples = 900
Num Epochs = 2
Instantaneous batch size per device = 8
Total train batch size (w. parallel, distributed & accumulation) = 8
Gradient Accumulation steps = 1
Total optimization steps = 450
Number of trainable parameters = 442,368
```
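For context on the doubling: `auto_find_batch_size=True` wraps the inner training loop in `accelerate`'s retry helper, which re-runs the whole loop with a halved batch size after an OOM, which is why a second "Running training" banner appears. A minimal sketch of that mechanism, with a stand-in function rather than the actual Trainer internals:
```python
from accelerate.utils import find_executable_batch_size

@find_executable_batch_size(starting_batch_size=8)
def inner_training_loop(batch_size):
    # Stand-in for the real loop: if this raises a CUDA OOM, the decorator
    # calls it again with the batch size halved (8 -> 4 -> 2 -> ...).
    print(f"***** Running training ***** with batch_size={batch_size}")

inner_training_loop()
```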
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23762/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23762/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23761
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23761/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23761/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23761/events
|
https://github.com/huggingface/transformers/issues/23761
| 1,726,103,257 |
I_kwDOCUB6oc5m4j7Z
| 23,761 |
My QUESTION is how to run a very big model like bloom on a cluster of machines?
|
{
"login": "patnelt",
"id": 26058548,
"node_id": "MDQ6VXNlcjI2MDU4NTQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/26058548?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patnelt",
"html_url": "https://github.com/patnelt",
"followers_url": "https://api.github.com/users/patnelt/followers",
"following_url": "https://api.github.com/users/patnelt/following{/other_user}",
"gists_url": "https://api.github.com/users/patnelt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patnelt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patnelt/subscriptions",
"organizations_url": "https://api.github.com/users/patnelt/orgs",
"repos_url": "https://api.github.com/users/patnelt/repos",
"events_url": "https://api.github.com/users/patnelt/events{/privacy}",
"received_events_url": "https://api.github.com/users/patnelt/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Please use the [forums](https://discuss.huggingface.co/) for such questions.",
"yes thanks for your answer. i wrote it also on forum but it not an easy question and only specialists like you can answer or give me a give me some help so i can continue ... Regards pat",
"so could you give some technical help, regards, pat",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,685 | 1,688 | 1,688 |
NONE
| null |
### System Info
bloom, pytorch, ubuntu
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
..
### Expected behavior
Hello, I can run OPT-66B on one server with 6 GPUs of 24 GB by using your Hugging Face page on how to load big models: I give a device_map. I can also run bloom on one server with 8 GPUs of 24 GB by giving a device_map, but it uses CPU offload and takes time to answer. My QUESTION is how to run a very big model like bloom on a cluster of machines: indeed bloom would need 20 GPUs of 24 GB, which requires a cluster of 3 machines with 8 GPUs each to deploy. With accelerate it is not possible, as we are limited to only one machine. I have tried everything, like the RPC framework, but it seems it is only for CPU. Thanks for your help. Regards, Pat
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23761/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23761/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23760
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23760/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23760/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23760/events
|
https://github.com/huggingface/transformers/pull/23760
| 1,726,010,910 |
PR_kwDOCUB6oc5RWuiW
| 23,760 |
Move TF building to an actual build() method
|
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"This should be ready to review now! Some tests failing, but that looks like Hub connection issues",
"Actually, I should explain my reasoning for some of the changes here - you're probably right that I can improve the API, though!\r\n\r\nFirstly, the removal of `tf.cond` is actually not a necessary part of this PR anymore, but it is good practice (Longformer and LED are the only two models in all of Transformers that use it in their modelling code). The reason is because of the Keras call stack. In the `__call__` method for any TF module, Keras appends that layer to the call stack, and enters that layer's namespace. This means that if you have `self.bert` and that calls `self.encoder` and that calls `self.attn`, Keras will be in the `bert/encoder/attn` namespace. \r\n\r\nIncredibly, though, `tf.cond` counts as a layer with its own namespace, but **only when the tf.cond is not being eagerly evaluated**. In my initial PR, I was trying to replace our dummies with symbolic TF tensors, which meant the `tf.cond` was not evaluated at compile time, but instead had to be compiled as a conditional in the model graph. The result is that all layer weights inside the conditional got encapsulated in a `/cond.1/` namespace. This broke compatibility with existing checkpoints. \r\n\r\nRemoving `tf.cond` helped, but to be safe I added a manual build to those layers to directly control the weight naming regardless of what the call stack thought it should be. As a result, I could probably revert the `tf.cond` calls, but I think it's preferable if we don't, and just try to keep it out of modelling code and just use `if` statements instead (which TF can compile into graph conditionals if it can't resolve the branch to be chosen at compile time). `tf.cond` is fine in generation code where no weight names are created.\r\n\r\nSecondly, the distinction between `build()` and `build_with_dummies()` is a bit of an ugly hack - I think I could probably remove `build_with_dummies()` entirely, but there was a piece of the TF-PT crossloading code that only worked if it could build the model with specific inputs of its choice. I added `build_with_dummies()` to support that, with a separate `built_with_dummies` flag to make sure that any repeated calls wouldn't waste more time. However, it would probably make more sense to just manually pass the inputs through the model in those particular crossloading functions and delete the method and the flag. WDYT?",
"> tf.cond counts as a layer with its own namespace, but only when the tf.cond is not being eagerly evaluated.\r\n\r\n😑 \r\n\r\nIn this case, let's rid ourselves of this pseudolayer! I'm pro the if/else changes :) \r\n\r\n> it would probably make more sense to just manually pass the inputs through the model in those particular crossloading functions and delete the method and the flag. WDYT?\r\n\r\nYep, that's what I would go for. Would it be possible to still have some of the logic to exit early if already built? Or would this be to tricky to handle to be worth it? ",
"I think we could, but it's probably not necessary - the only cases where we build the model with specific inputs are in weird PT-TF crossloading functions, which should always be called during or near model init anyway, so I think it's fine if there's a risk of a little bit of duplicated work there to save on overall code complexity.",
"@amyeroberts Done! `build_with_dummies` is no more",
"Also, this PR looks ready but I'm going to let it sit for a couple of days to make sure the CI is working again after my last library-breaking PR, then merge it.",
"Change of plans: The CI is working except for OOM errors during building for some of the pipelines, and since this cleans up building a bit we're going to merge this one too and see if it helps. If it doesn't, I'll open a new PR to see if I can lower the memory usage in the affected models."
] | 1,685 | 1,686 | 1,686 |
MEMBER
| null |
This has been a longstanding dream of mine: To move all TF model building into a proper `build()` method, using symbolic tensors instead of actual dummies. This would allow us to, among other things, stop our very hacky overriding of `save_spec`, as well as allowing us to build our TF models with zero device flops (although the speedup may be system-dependent, as we do have some compile time with this approach). It would make our models much closer to the Keras standard, which would stop Chollet casting curses upon me from afar.
In the past, we've run into serious problems with tensor names moving around when we tried this - I think I've figured out why, though, and I have a couple of ideas to resolve that without lots of hacky edge-case code.
This is an extremely draft PR that will break everything until I finish testing it properly!
**Update:** Using symbolic tensors is much slower - it works in most cases, but increases the time it takes for our tests to run by a factor of ~4, which is probably not acceptable. Instead, I'm going to rework this PR to move to a standard build() method using actual dummies. With some optimizations, I believe we can make this work, while still preserving most of the benefits of this PR, including not repeating the build unnecessarily and adding the ability to override `build()` to speed up our slowest models
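For readers less familiar with the Keras convention being targeted here, a generic sketch — plain Keras, not Transformers code — of creating weights in `build()` from a symbolic shape instead of by running dummy data through `call()`:
```python
import tensorflow as tf


class DenseLayer(tf.keras.layers.Layer):
    def __init__(self, units, **kwargs):
        super().__init__(**kwargs)
        self.units = units

    def build(self, input_shape):
        # Weights are created from the shape alone - no real data has to flow
        # through the layer, and no device flops are spent.
        self.kernel = self.add_weight("kernel", shape=(input_shape[-1], self.units))
        super().build(input_shape)

    def call(self, inputs):
        return tf.matmul(inputs, self.kernel)


layer = DenseLayer(4)
layer.build((None, 8))  # explicit build from a symbolic shape
print([w.name for w in layer.weights])
```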
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23760/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23760/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23760",
"html_url": "https://github.com/huggingface/transformers/pull/23760",
"diff_url": "https://github.com/huggingface/transformers/pull/23760.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23760.patch",
"merged_at": 1686072653000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23759
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23759/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23759/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23759/events
|
https://github.com/huggingface/transformers/pull/23759
| 1,725,991,842 |
PR_kwDOCUB6oc5RWqXx
| 23,759 |
Adds a FlyteCallback
|
{
"login": "peridotml",
"id": 106936600,
"node_id": "U_kgDOBl-5GA",
"avatar_url": "https://avatars.githubusercontent.com/u/106936600?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/peridotml",
"html_url": "https://github.com/peridotml",
"followers_url": "https://api.github.com/users/peridotml/followers",
"following_url": "https://api.github.com/users/peridotml/following{/other_user}",
"gists_url": "https://api.github.com/users/peridotml/gists{/gist_id}",
"starred_url": "https://api.github.com/users/peridotml/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/peridotml/subscriptions",
"organizations_url": "https://api.github.com/users/peridotml/orgs",
"repos_url": "https://api.github.com/users/peridotml/repos",
"events_url": "https://api.github.com/users/peridotml/events{/privacy}",
"received_events_url": "https://api.github.com/users/peridotml/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Really thrilled to have this opportunity to contribute! 🎉\r\n\r\nBefore I transition this PR out of draft mode, I want to ensure everything on the Flyte side is spot on.\r\n\r\n- I'm working on linking to a live example on Flyte\r\n- I've also reached out to @cosmicBboy, @kumare3, @zeryx on the Flyte team - who might have some comments. I know they were excited about this integration 😄. ",
"@sgugger we should be good to go now! I responded to the Flyte team and updated the docs",
"@sgugger should be good! 🤞 "
] | 1,685 | 1,685 | 1,685 |
CONTRIBUTOR
| null |
# What does this PR do?
This PR adds a Flyte callback that integrates with Flyte's [intra-task checkpoints](https://docs.flyte.org/projects/cookbook/en/stable/auto/core/control_flow/checkpoint.html#why-intra-task-checkpoints) and [Flyte Decks](https://docs.flyte.org/projects/cookbook/en/latest/auto/core/flyte_basics/deck.html).
I raised this issue in order to get approval for this PR #23476
I am using this [example](https://gist.github.com/peridotml/68f376f0f4fd1926fb0746daaeea09f8) to test on a flyte cluster. It uses Flyte's checkpointing system to restart from a hugging face checkpoint (see screenshots).
<img width="400" alt="Screenshot 2023-05-26 at 2 57 59 PM" src="https://github.com/huggingface/transformers/assets/106936600/5cf83157-cce0-4a2e-8a2f-cd1a72c65820">
<img width="400" alt="Screenshot 2023-05-26 at 2 58 14 PM" src="https://github.com/huggingface/transformers/assets/106936600/891d86e7-5885-4851-889f-e912d42f2902">
Once this is merged, I will add this and more to Flyte's documentation.
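As a point of reference for reviewers, here is a toy stand-in showing the `TrainerCallback` hooks a callback like this builds on — this is illustrative only and is not the actual `FlyteCallback` implementation from the PR:
```python
from transformers import TrainerCallback


class ToyFlyteLikeCallback(TrainerCallback):
    """Illustrative only: the real FlyteCallback would call Flyte's APIs here."""

    def on_save(self, args, state, control, **kwargs):
        # e.g. sync args.output_dir to Flyte's intra-task checkpoint storage
        print(f"checkpoint saved at step {state.global_step} in {args.output_dir}")

    def on_log(self, args, state, control, logs=None, **kwargs):
        # e.g. render the latest metrics into a Flyte Deck
        print(f"step {state.global_step} logs: {logs}")
```
Such a callback would then be passed to the `Trainer` via `callbacks=[...]`.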
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23759/reactions",
"total_count": 5,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 4,
"rocket": 1,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23759/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23759",
"html_url": "https://github.com/huggingface/transformers/pull/23759",
"diff_url": "https://github.com/huggingface/transformers/pull/23759.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23759.patch",
"merged_at": 1685455688000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23758
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23758/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23758/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23758/events
|
https://github.com/huggingface/transformers/pull/23758
| 1,725,975,283 |
PR_kwDOCUB6oc5RWmxC
| 23,758 |
[`Nllb-Moe`] Fix nllb moe accelerate issue
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,685 | 1,685 | 1,685 |
CONTRIBUTOR
| null |
# What does this PR do?
Fixes: https://github.com/huggingface/transformers/issues/23385
Before this PR, it seemed that `_no_split_modules` was not properly set. Due to the skip connections in `NllbMoeEncoderLayer` and `NllbMoeDecoderLayer` one needs to add these modules inside `_no_split_modules` instead of `NllbMoeAttention`.
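As a toy illustration (not the actual NllbMoe code) of why a block containing a skip connection has to stay on a single device under `device_map="auto"`:
```python
import torch

device_a = "cuda:0" if torch.cuda.device_count() > 0 else "cpu"
device_b = "cuda:1" if torch.cuda.device_count() > 1 else device_a

hidden_states = torch.randn(1, 4, 8, device=device_a)  # layer input
sublayer_out = torch.randn(1, 4, 8, device=device_b)   # attention/FFN output

try:
    residual = hidden_states + sublayer_out  # the skip connection inside the layer
    print("same device: skip connection works")
except RuntimeError as err:
    print("layer split across devices breaks the skip connection:", err)
```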
All accelerate tests pass
cc @ArthurZucker @sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23758/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23758/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23758",
"html_url": "https://github.com/huggingface/transformers/pull/23758",
"diff_url": "https://github.com/huggingface/transformers/pull/23758.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23758.patch",
"merged_at": 1685047053000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23754
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23754/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23754/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23754/events
|
https://github.com/huggingface/transformers/issues/23754
| 1,725,837,249 |
I_kwDOCUB6oc5m3i_B
| 23,754 |
When I use Bloom, I get an error: Caught RuntimeError in replica 0 on device 0.
|
{
"login": "han508",
"id": 69674181,
"node_id": "MDQ6VXNlcjY5Njc0MTgx",
"avatar_url": "https://avatars.githubusercontent.com/u/69674181?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/han508",
"html_url": "https://github.com/han508",
"followers_url": "https://api.github.com/users/han508/followers",
"following_url": "https://api.github.com/users/han508/following{/other_user}",
"gists_url": "https://api.github.com/users/han508/gists{/gist_id}",
"starred_url": "https://api.github.com/users/han508/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/han508/subscriptions",
"organizations_url": "https://api.github.com/users/han508/orgs",
"repos_url": "https://api.github.com/users/han508/repos",
"events_url": "https://api.github.com/users/han508/events{/privacy}",
"received_events_url": "https://api.github.com/users/han508/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"I don't think `device_map=\"auto\"` is compatible with gradient checkpointing.",
"Thanks for the response. I've cancelled the gradient checkpointing, but the problem still exists",
"Since we don't have access to your data files, it's going to be pretty hard to reproduce the issue. Could you:\r\n1. format your code so we can copy/paste it\r\n2. use a dataset from the Hub instead so we can replicate\r\nThanks!",
"Thank you again for your patient reply. I have modified the data set on Hub and formatted the code. The code is as follows.\r\n```python\r\nfrom transformers import AutoTokenizer, AutoModelForCausalLM\r\nimport torch\r\nfrom datasets import load_dataset\r\n# tokenizer = AutoTokenizer.from_pretrained(\"Bigscience/bloom-560m\",cache_dir='./cache/')\r\ntokenizer = AutoTokenizer.from_pretrained(\"EleutherAI/pythia-70m\",cache_dir='./cache/')\r\ntokenizer.add_special_tokens({'pad_token': '[PAD]'})\r\ntokenizer.padding_side='right'\r\n\r\nmodel = AutoModelForCausalLM.from_pretrained(\"EleutherAI/pythia-70m\",device_map='balanced',cache_dir='./cache/',torch_dtype=torch.float16)\r\n# model = AutoModelForCausalLM.from_pretrained(\"Bigscience/bloom-560m\",device_map='balanced',cache_dir='./cache/',torch_dtype=torch.float16)\r\nmodel.resize_token_embeddings(len(tokenizer))\r\nmodel.config.use_cache = True\r\n\r\ndataset = load_dataset(\"c-s-ale/alpaca-gpt4-data-zh\")\r\n\r\ndef preprocess_function(sample):\r\n for i in range(len(sample['instruction'])):\r\n sample['instruction'][i]=sample['instruction'][i]+'[PAD]'+sample['input'][i]\r\n output = ['<bot>:'+i for i in sample['output']]\r\n model_inputs = tokenizer(sample['instruction'], truncation=True,padding=True,max_length=100,return_tensors=\"pt\")\r\n labels = tokenizer(output, truncation=True, padding=True,max_length=100,return_tensors=\"pt\")\r\n model_inputs[\"labels\"] = labels[\"input_ids\"]\r\n return model_inputs\r\n\r\ninput_data = dataset['train'].map(preprocess_function,batched=True,remove_columns=['instruction','input','output'])\r\n\r\n\r\n\r\nfrom transformers import TrainingArguments, Trainer, DataCollatorForLanguageModeling\r\ntrainArgs = TrainingArguments(\r\n output_dir= './ckps_bloom',\r\n do_train=True,\r\n auto_find_batch_size=True,\r\n gradient_accumulation_steps=4,\r\n evaluation_strategy=\"steps\",\r\n save_strategy=\"steps\",\r\n save_steps=10,\r\n eval_steps=10,\r\n logging_steps=10,\r\n warmup_steps=100,\r\n num_train_epochs=2,\r\n learning_rate=2e-5,\r\n fp16=True, \r\n load_best_model_at_end=True,\r\n push_to_hub=False,\r\n report_to=\"wandb\"\r\n)\r\n\r\n\r\ntrainer = Trainer(\r\n model=model,\r\n args=trainArgs,\r\n train_dataset=input_data,\r\n data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),\r\n)\r\ntrainer.train()\r\n```\r\n",
"Thanks for sharing. Note that in any case, you won't be able to train your model in float16 (you will get an error in the line of \"Attempting to unscale FP16 gradients.\". Training in float16 does not converge so the Trainer does not support it. You will need to remove the line `torch_dtype=torch.float16` when loading your model.\r\n\r\nFor the Pythia model, something weird is happening with `device_map=\"auto\"` since the model is so tiny: it is all placed on GPU-1 (in my case) and then the Trainer tries to move it to GPU-0. Will fix this but a simple workaround in the meantime is to `place_model_on_device=False` in your training arguments.",
"Thank you again for your reply. I removed the line torch_dtype =torch.float16 and set place_model_on_device=False, but the problem still exists. This problem existed regardless of the size of the model, and I also tried to use bloom 3b from the Hub.\r\nWhen I removed device_map='auto', the program worked, but only on one GPU.\r\nThe code is as follows.\r\n```python\r\nfrom transformers import AutoTokenizer, AutoModelForCausalLM\r\nimport torch\r\nfrom datasets import load_dataset\r\ntokenizer = AutoTokenizer.from_pretrained(\"Bigscience/bloom-3b\",cache_dir='./cache/')\r\n# tokenizer = AutoTokenizer.from_pretrained(\"EleutherAI/pythia-70m\",cache_dir='./cache/')\r\ntokenizer.add_special_tokens({'pad_token': '[PAD]'})\r\ntokenizer.padding_side='right'\r\n\r\n# model = AutoModelForCausalLM.from_pretrained(\"EleutherAI/pythia-70m\",device_map='balanced',cache_dir='./cache/')\r\nmodel = AutoModelForCausalLM.from_pretrained(\"Bigscience/bloom-3b\",device_map='auto',cache_dir='./cache/')\r\nmodel.resize_token_embeddings(len(tokenizer))\r\nmodel.config.use_cache = True\r\n\r\ndataset = load_dataset(\"c-s-ale/alpaca-gpt4-data-zh\")\r\n\r\ndef preprocess_function(sample):\r\n for i in range(len(sample['instruction'])):\r\n sample['instruction'][i]=sample['instruction'][i]+'[PAD]'+sample['input'][i]\r\n output = ['<bot>:'+i for i in sample['output']]\r\n model_inputs = tokenizer(sample['instruction'], truncation=True,padding=True,max_length=100,return_tensors=\"pt\")\r\n labels = tokenizer(output, truncation=True, padding=True,max_length=100,return_tensors=\"pt\")\r\n model_inputs[\"labels\"] = labels[\"input_ids\"]\r\n return model_inputs\r\n\r\ninput_data = dataset['train'].map(preprocess_function,batched=True,remove_columns=['instruction','input','output'])\r\n\r\n\r\n\r\nfrom transformers import TrainingArguments, Trainer, DataCollatorForLanguageModeling\r\ntrainArgs = TrainingArguments(\r\n output_dir= './ckps_bloom',\r\n do_train=True,\r\n auto_find_batch_size=True,\r\n gradient_accumulation_steps=4,\r\n evaluation_strategy=\"steps\",\r\n save_strategy=\"steps\",\r\n save_steps=10,\r\n eval_steps=10,\r\n logging_steps=10,\r\n warmup_steps=100,\r\n num_train_epochs=2,\r\n learning_rate=2e-5,\r\n fp16=True, \r\n load_best_model_at_end=True,\r\n push_to_hub=False,\r\n report_to=\"wandb\",\r\n)\r\nTrainingArguments.place_model_on_device=False\r\n\r\ntrainer = Trainer(\r\n model=model,\r\n args=trainArgs,\r\n train_dataset=input_data,\r\n data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),\r\n)\r\ntrainer.train()\r\n```\r\n\r\nGPU memory allocation is also weird. In the past, it was pretty even, but now it looks like this.\r\n\r\n<img width=\"402\" alt=\"image\" src=\"https://github.com/huggingface/transformers/assets/69674181/ed1b7c87-cacc-42d5-a327-1b8f3962b0fc\">\r\n\r\nerror:\r\n<img width=\"1207\" alt=\"image\" src=\"https://github.com/huggingface/transformers/assets/69674181/26494845-b675-499e-961f-af02d15ce264\">\r\n",
"I didn't understand what you meant before. I'm sorry, but it has been solved now.",
"I have the same problem but the solution is still not clear for me, can you specify @han508 ? Thanks,"
] | 1,685 | 1,694 | 1,685 |
NONE
| null |
### System Info
- `transformers` version: 4.28.1
- Platform: Linux-5.4.0-148-generic-x86_64-with-glibc2.27
- Python version: 3.9.16
- Huggingface_hub version: 0.13.4
- Safetensors version: not installed
- PyTorch version (GPU?): 1.13.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
- GPU Tesla T4 *4
### Who can help?
@sgugger ,@ArthurZucker
Hello, when I use the Bloom model, the following problem occurs, but when I use the RedPajama model or other models, this kind of error does not occur.
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
My code:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import os
import torch
from datasets import load_dataset

os.environ["CUDA_VISIBLE_DEVICES"] = "0,1,2,3"
tokenizer = AutoTokenizer.from_pretrained("Bigscience/bloom-560m", cache_dir='./cache/')
tokenizer.add_special_tokens({'pad_token': '[PAD]'})
model = AutoModelForCausalLM.from_pretrained("Bigscience/bloom-560m", device_map='auto', cache_dir='./cache/', torch_dtype=torch.float16)
model.resize_token_embeddings(len(tokenizer))
model.gradient_checkpointing_enable()
model.config.use_cache = False

# dataset = load_dataset("BelleGroup/train_1M_CN")
# from datasets import load_dataset
# dataset = load_dataset("json", data_files="./data/alpaca_data_zh_51k.json")
dataset = load_dataset("json", data_files="/home/new_store/Llama/data/alpaca_gpt4_data_zh.json")
dataset = dataset.filter(lambda x: x["output"] != None)
dataset = dataset.filter(lambda x: x["instruction"] != None)
dataset = dataset.filter(lambda x: x["input"] != None)
eval_dataset = load_dataset("json", data_files="split.json")
eval_dataset = eval_dataset.filter(lambda x: x["output"] != None)
eval_dataset = eval_dataset.filter(lambda x: x["input"] != None)
eval_dataset = eval_dataset.filter(lambda x: x["instruction"] != None)

def preprocess_function(sample):
    l = "<##human>:"
    for i in range(len(sample['instruction'])):
        if sample['input'][i] != '':
            sample['instruction'][i] = sample['instruction'][i] + '[PAD]' + sample['input'][i]
            # print(sample['input'][i])
    output = ['<##bot>:' + i for i in sample['output']]
    model_inputs = tokenizer(sample['instruction'], truncation=True, padding=True, max_length=256)
    labels = tokenizer(output, truncation=True, padding=True, max_length=256)
    model_inputs["labels"] = labels["input_ids"]
    # print(model_inputs)
    return model_inputs

input_data = dataset['train'].map(preprocess_function, batched=True, remove_columns=['instruction', 'input', 'output'])
eval_data = eval_dataset['train'].map(preprocess_function, batched=True, remove_columns=['instruction', 'input', 'output'])

from transformers import TrainingArguments, Trainer, DataCollatorForLanguageModeling
trainArgs = TrainingArguments(
    output_dir='../ckps_bloom_1M',
    do_train=True,
    # per_device_train_batch_size=1,
    auto_find_batch_size=True,
    gradient_accumulation_steps=4,
    evaluation_strategy="steps",
    save_strategy="steps",
    save_steps=500,
    eval_steps=500,
    logging_steps=100,
    warmup_steps=100,
    num_train_epochs=2,
    learning_rate=2e-5,
    # fp16=True,
    # bf16=True,
    load_best_model_at_end=True,
    # deepspeed='./zero.json',
    report_to="wandb"
)
trainer = Trainer(
    model=model,
    args=trainArgs,
    train_dataset=input_data,
    eval_dataset=eval_data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```
Error:
```
RuntimeError: Caught RuntimeError in replica 0 on device 0.
Original Traceback (most recent call last):
  File "/home/han/anaconda3/envs/llama/lib/python3.9/site-packages/torch/nn/parallel/parallel_apply.py", line 64, in _worker
    output = module(*input, **kwargs)
  File "/home/han/anaconda3/envs/llama/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/han/anaconda3/envs/llama/lib/python3.9/site-packages/accelerate/hooks.py", line 165, in new_forward
    output = old_forward(*args, **kwargs)
  File "/home/han/anaconda3/envs/llama/lib/python3.9/site-packages/transformers/models/bloom/modeling_bloom.py", line 913, in forward
    transformer_outputs = self.transformer(
  File "/home/han/anaconda3/envs/llama/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/han/anaconda3/envs/llama/lib/python3.9/site-packages/accelerate/hooks.py", line 165, in new_forward
    output = old_forward(*args, **kwargs)
  File "/home/han/anaconda3/envs/llama/lib/python3.9/site-packages/transformers/models/bloom/modeling_bloom.py", line 730, in forward
    inputs_embeds = self.word_embeddings(input_ids)
  File "/home/han/anaconda3/envs/llama/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/han/anaconda3/envs/llama/lib/python3.9/site-packages/torch/nn/modules/sparse.py", line 160, in forward
    return F.embedding(
  File "/home/han/anaconda3/envs/llama/lib/python3.9/site-packages/torch/nn/functional.py", line 2210, in embedding
    return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cuda:3! (when checking argument for argument index in method wrapper__index_select)
```
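Not a fix for the DataParallel interaction above, but a quick way to see where `device_map="auto"` actually placed each module and which device the inputs must start on — the module names below mirror the traceback, and the snippet itself is only a diagnostic sketch:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("bigscience/bloom-560m", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m")

print(model.hf_device_map)  # e.g. {'transformer.word_embeddings': 0, ...}

inputs = tokenizer("test", return_tensors="pt")
first_device = next(iter(model.hf_device_map.values()))
inputs = {k: v.to(first_device) for k, v in inputs.items()}
out = model(**inputs)
```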
### Expected behavior
I hope the error can be fixed
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23754/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23754/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23753
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23753/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23753/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23753/events
|
https://github.com/huggingface/transformers/pull/23753
| 1,725,724,485 |
PR_kwDOCUB6oc5RVwCE
| 23,753 |
fix Whisper tests on GPU
|
{
"login": "hollance",
"id": 346853,
"node_id": "MDQ6VXNlcjM0Njg1Mw==",
"avatar_url": "https://avatars.githubusercontent.com/u/346853?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hollance",
"html_url": "https://github.com/hollance",
"followers_url": "https://api.github.com/users/hollance/followers",
"following_url": "https://api.github.com/users/hollance/following{/other_user}",
"gists_url": "https://api.github.com/users/hollance/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hollance/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hollance/subscriptions",
"organizations_url": "https://api.github.com/users/hollance/orgs",
"repos_url": "https://api.github.com/users/hollance/repos",
"events_url": "https://api.github.com/users/hollance/events{/privacy}",
"received_events_url": "https://api.github.com/users/hollance/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Also skipping a few tests in `WhisperModelTest` that were previously skipped in `WhisperEncoderModelTest`, see https://github.com/huggingface/transformers/pull/22060\r\n\r\nAlthough I just saw there's another open PR dealing with the same issue, so maybe none of these should be skipped: https://github.com/huggingface/transformers/pull/22803",
"Thanks again!"
] | 1,685 | 1,685 | 1,685 |
CONTRIBUTOR
| null |
# What does this PR do?
The daily CI showed that Whisper has new test failures, related to the recent merge of the prompting feature. This PR fixes those test failures.
The tests ran OK on CPU but failed on GPU because the input data wasn't moved to the GPU.
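Illustrative of the fix pattern rather than the exact test code — the processor outputs live on CPU, so they need an explicit move to the model's device:
```python
import torch
from transformers import WhisperForConditionalGeneration, WhisperProcessor

device = "cuda" if torch.cuda.is_available() else "cpu"
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny").to(device)
processor = WhisperProcessor.from_pretrained("openai/whisper-tiny")

# one second of silence as a stand-in for the test audio
inputs = processor(torch.zeros(16000).numpy(), sampling_rate=16000, return_tensors="pt")
input_features = inputs.input_features.to(device)  # the missing .to(device) was the bug
generated = model.generate(input_features, max_new_tokens=10)
```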
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23753/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23753/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23753",
"html_url": "https://github.com/huggingface/transformers/pull/23753",
"diff_url": "https://github.com/huggingface/transformers/pull/23753.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23753.patch",
"merged_at": 1685452019000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23752
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23752/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23752/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23752/events
|
https://github.com/huggingface/transformers/pull/23752
| 1,725,674,333 |
PR_kwDOCUB6oc5RVk5q
| 23,752 |
Fix is_ninja_available()
|
{
"login": "niltok",
"id": 24362592,
"node_id": "MDQ6VXNlcjI0MzYyNTky",
"avatar_url": "https://avatars.githubusercontent.com/u/24362592?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/niltok",
"html_url": "https://github.com/niltok",
"followers_url": "https://api.github.com/users/niltok/followers",
"following_url": "https://api.github.com/users/niltok/following{/other_user}",
"gists_url": "https://api.github.com/users/niltok/gists{/gist_id}",
"starred_url": "https://api.github.com/users/niltok/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/niltok/subscriptions",
"organizations_url": "https://api.github.com/users/niltok/orgs",
"repos_url": "https://api.github.com/users/niltok/repos",
"events_url": "https://api.github.com/users/niltok/events{/privacy}",
"received_events_url": "https://api.github.com/users/niltok/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"> Thanks for your PR! Can you just run `make style` on your branch to fix the quality issue?\r\n\r\ni have passed all checks now!!",
"Thanks!"
] | 1,685 | 1,685 | 1,685 |
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
I noticed that ninja cannot be detected by importlib because it is not a Python package, so I fixed is_ninja_available() with an [implementation that comes from PyTorch](https://github.com/pytorch/pytorch/blob/4882cd08013733a5dbe299871ad7e974bce074b3/torch/utils/cpp_extension.py#L1629).
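For reference, the check in question boils down to probing the `ninja` binary with a subprocess (paraphrasing the linked PyTorch implementation):
```python
import subprocess

def is_ninja_available() -> bool:
    # ninja is a standalone executable, not an importable Python module,
    # so ask it for its version instead of using importlib.
    try:
        subprocess.check_output("ninja --version".split())
    except Exception:
        return False
    return True

print(is_ninja_available())
```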
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23752/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23752/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23752",
"html_url": "https://github.com/huggingface/transformers/pull/23752",
"diff_url": "https://github.com/huggingface/transformers/pull/23752.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23752.patch",
"merged_at": 1685045425000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23751
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23751/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23751/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23751/events
|
https://github.com/huggingface/transformers/pull/23751
| 1,725,672,913 |
PR_kwDOCUB6oc5RVklw
| 23,751 |
Fix push_to_hub in Trainer when nothing needs pushing
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,685 | 1,685 | 1,685 |
COLLABORATOR
| null |
# What does this PR do?
This PR fixes `push_to_hub` in the `Trainer`. Since `Repository.push_to_hub` can return `None` or a tuple, we have to do a small test before unpacking the output.
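A minimal sketch of the guard (paraphrased rather than the exact diff; the repo id below is a placeholder):
```python
from huggingface_hub import Repository

repo = Repository(local_dir="my-model", clone_from="user/my-model")  # placeholder repo id

push_result = repo.push_to_hub(commit_message="End of training", blocking=False)
# push_to_hub returns None when there is nothing to push, otherwise (url, command).
if push_result is not None:
    url, _ = push_result
```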
Fixes #23712
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23751/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23751/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23751",
"html_url": "https://github.com/huggingface/transformers/pull/23751",
"diff_url": "https://github.com/huggingface/transformers/pull/23751.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23751.patch",
"merged_at": 1685021889000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23750
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23750/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23750/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23750/events
|
https://github.com/huggingface/transformers/issues/23750
| 1,725,573,655 |
I_kwDOCUB6oc5m2ioX
| 23,750 |
sequences and scores dimensions are mismatched when using generate()
|
{
"login": "GasolSun36",
"id": 40892949,
"node_id": "MDQ6VXNlcjQwODkyOTQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/40892949?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/GasolSun36",
"html_url": "https://github.com/GasolSun36",
"followers_url": "https://api.github.com/users/GasolSun36/followers",
"following_url": "https://api.github.com/users/GasolSun36/following{/other_user}",
"gists_url": "https://api.github.com/users/GasolSun36/gists{/gist_id}",
"starred_url": "https://api.github.com/users/GasolSun36/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/GasolSun36/subscriptions",
"organizations_url": "https://api.github.com/users/GasolSun36/orgs",
"repos_url": "https://api.github.com/users/GasolSun36/repos",
"events_url": "https://api.github.com/users/GasolSun36/events{/privacy}",
"received_events_url": "https://api.github.com/users/GasolSun36/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"btw i think it's probably a mistake cause by `generation/utils.py` in `sample()` function, that when calculate the scores, it stops early. cause I look up my tensor dimension, it's always differ 208 dimensions (the sequences is 208 longer than the scores), that's equal to 208/8=26, which is my `input_ids_seq_length`\r\n\r\nthe sequences should be equal to scores, right?",
"Hi there, I am also having some issues with the shapes between sequences and scores. In my case, I am finding the length of the scores tuple is longer than the sequence length? Any idea why this would be? Seems like it's the opposite of your issue",
"Hey @GasolSun36 @aburns4 👋 \r\n\r\nThe behavior you see is what is expected. The docstrings are a bit outdated and in need of a retouch 🤗 In a nutshell:\r\n1. In `generate`, the scores are exclusively related to new tokens (tokens not in the prompt)\r\n2. The output sequence for decoder-only models (like BLOOM) includes the prompt as well\r\n3. If you want to obtain the logits for the prompt tokens, you should do run a model forward pass with your prompt (see below). Please note that the logits always refer to the next token, so the logits with index 0 correspond to the token with index 1.\r\n4. If you want to get the logits for the whole sequence (prompt + generated tokens), you have to concatenate these two sets of logits :)\r\n\r\n```py\r\nfrom transformers import AutoModelForCausalLM, AutoTokenizer\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(\"distilgpt2\")\r\nmodel = AutoModelForCausalLM.from_pretrained(\"distilgpt2\")\r\n\r\ninputs = tokenizer([\"The quick brown\"], return_tensors=\"pt\")\r\nlogits = model(**inputs).logits\r\n```",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"I have the same error ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,685 | 1,691 | 1,691 |
NONE
| null |
### System Info
- `transformers` version: 4.29.2
- Platform: Linux-5.4.0-122-generic-x86_64-with-glibc2.31
- Python version: 3.10.11
- Huggingface_hub version: 0.14.1
- Safetensors version: not installed
- PyTorch version (GPU?): 1.13.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@gante
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```python
generation_config = GenerationConfig(
    temperature=0.7,
    top_p=1,
    top_k=40,
    num_beams=1,
    do_sample=True,
    num_return_sequences=8,
    max_length=109,
    return_dict_in_generate=True,
    output_scores=True,
)
generator_outputs = generator_model.generate(
    inputs["input_ids"],
    attention_mask=inputs["attention_mask"],
    generation_config=generation_config,
    synced_gpus=True,
)  # (B*num_candidates, L)
generated_tokens = generator_outputs.sequences  # (B*num_candidates, L+1)
generated_logits = torch.stack(generator_outputs.scores, dim=1)
generated_seqs = self.generator_tokenizer.batch_decode(
    generated_tokens, skip_special_tokens=True, clean_up_tokenization_spaces=True
)
# get the probability of generated tokens
seq_len = generated_logits.size(1)
vocab_size = generated_logits.size(-1)
generated_probs = nn.functional.softmax(generated_logits, dim=-1)
new_generated_probs = generated_probs.contiguous().view(-1, vocab_size)
generated_tokens_indices = generated_tokens.contiguous().view(-1).unsqueeze(1)
new_generated_probs = torch.gather(new_generated_probs, 1, generated_tokens_indices)
```
### Expected behavior
When I run the code up to `new_generated_probs = torch.gather(new_generated_probs, 1, generated_tokens_indices)`, this error arises (two parallel processes printed interleaved copies of the same message):
`RuntimeError: Size does not match at dimension 0 expected index [1408, 1] to be smaller than self [1232, 250680] apart from dimension 1`
`RuntimeError: Size does not match at dimension 0 expected index [1968, 1] to be smaller than self [1744, 250680] apart from dimension 1`
I looked carefully at the generate method, and it seems the dimensions of generator_outputs.sequences and generator_outputs.scores differ, because at line 1363 of `transformers/src/transformers/generation/utils.py`:
`generation_config.max_length = generation_config.max_new_tokens + input_ids_seq_length`
However, in `class SampleDecoderOnlyOutput`, it says:
`sequences (torch.LongTensor of shape (batch_size*num_return_sequences, sequence_length)):
The generated sequences. The second dimension (sequence_length) is either equal to max_length or shorter
if all batches finished early due to the eos_token_id.`
and
`scores (tuple(torch.FloatTensor) *optional*, returned when output_scores=True is passed or when config.output_scores=True):
Processed prediction scores of the language modeling head (scores for each vocabulary token before SoftMax)
at each generation step. Tuple of torch.FloatTensor with up to max_new_tokens elements (one element for
each generated token), with each tensor of shape (batch_size*num_return_sequences, config.vocab_size)`.
That means the length of `sequences` is equal to `max_length`, while the length of `scores` is equal to `max_new_tokens`; the difference between the two is `input_ids_seq_length`.
However, I can't use `max_length` and `max_new_tokens` together because `max_new_tokens` takes priority.
Is there any way to deal with this?
I use my own dataset with Bloom, but I think this problem can be reproduced with any model on any dataset.
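A minimal sketch (not part of the original report) of one way to line the two up, following the explanation in the comments above that `scores` only covers newly generated tokens while `sequences` also contains the prompt. It reuses the variable names from the snippet above and assumes no extra padding from early stopping:
```python
import torch

# sequences: (batch*num_return_sequences, prompt_len + new_tokens)
# scores:    tuple of new_tokens tensors, each (batch*num_return_sequences, vocab_size)
prompt_len = inputs["input_ids"].shape[1]
new_tokens = generator_outputs.sequences[:, prompt_len:]   # drop the prompt
new_logits = torch.stack(generator_outputs.scores, dim=1)  # now aligned with new_tokens
new_probs = torch.nn.functional.softmax(new_logits, dim=-1)
token_probs = torch.gather(new_probs, 2, new_tokens.unsqueeze(-1)).squeeze(-1)
```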
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23750/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23750/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23749
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23749/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23749/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23749/events
|
https://github.com/huggingface/transformers/pull/23749
| 1,725,379,259 |
PR_kwDOCUB6oc5RUkSo
| 23,749 |
[LongFormer] code nits, removed unused parameters
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Okay, seems there was this #23343 but they are compatible"
] | 1,685 | 1,685 | 1,685 |
COLLABORATOR
| null |
# What does this PR do?
The `position_embedding_type` parameter of `LongformerEmbeddings` is not used.
Fixes #23730
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23749/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23749/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23749",
"html_url": "https://github.com/huggingface/transformers/pull/23749",
"diff_url": "https://github.com/huggingface/transformers/pull/23749.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23749.patch",
"merged_at": 1685023574000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23748
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23748/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23748/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23748/events
|
https://github.com/huggingface/transformers/issues/23748
| 1,725,350,165 |
I_kwDOCUB6oc5m1sEV
| 23,748 |
LION optimizer calling error
|
{
"login": "lucasjinreal",
"id": 21303438,
"node_id": "MDQ6VXNlcjIxMzAzNDM4",
"avatar_url": "https://avatars.githubusercontent.com/u/21303438?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lucasjinreal",
"html_url": "https://github.com/lucasjinreal",
"followers_url": "https://api.github.com/users/lucasjinreal/followers",
"following_url": "https://api.github.com/users/lucasjinreal/following{/other_user}",
"gists_url": "https://api.github.com/users/lucasjinreal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lucasjinreal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lucasjinreal/subscriptions",
"organizations_url": "https://api.github.com/users/lucasjinreal/orgs",
"repos_url": "https://api.github.com/users/lucasjinreal/repos",
"events_url": "https://api.github.com/users/lucasjinreal/events{/privacy}",
"received_events_url": "https://api.github.com/users/lucasjinreal/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"As said previously, there is nothing we can do if you do not follow the issue template and post a code reproducer.",
"@sgugger hello, this is what I done:\r\n\r\n```\r\ntrainer = transformers.Trainer(\r\n model=model,\r\n train_dataset=train_data,\r\n eval_dataset=val_data,\r\n args=transformers.TrainingArguments(\r\n deepspeed=deepspeed,\r\n per_device_train_batch_size=micro_batch_size,\r\n gradient_accumulation_steps=gradient_accumulation_steps,\r\n warmup_ratio=0.1,\r\n num_train_epochs=num_epochs,\r\n learning_rate=learning_rate,\r\n # fp16=True,\r\n fp16=not int8_train,\r\n logging_steps=10,\r\n # optim=\"adamw_torch\",\r\n optim=\"paged_lion_32bit\",\r\n evaluation_strategy=\"steps\" if val_set_size > 0 else \"no\",\r\n save_strategy=\"steps\",\r\n eval_steps=50 if val_set_size > 0 else None,\r\n save_steps=50,\r\n output_dir=output_dir,\r\n save_total_limit=5,\r\n load_best_model_at_end=True if val_set_size > 0 else False,\r\n ddp_find_unused_parameters=False if ddp else None,\r\n group_by_length=group_by_length,\r\n report_to=\"wandb\" if use_wandb else None,\r\n run_name=wandb_run_name if use_wandb else None,\r\n ),\r\n data_collator=transformers.DataCollatorForSeq2Seq(\r\n tokenizer, pad_to_multiple_of=8, return_tensors=\"pt\", padding=True\r\n ),\r\n )\r\n```\r\n\r\nthe newly commited LION seems can not load properly, please have a test",
"This is not something I can reproduce as many of the objects you use are undefined.",
"I believe this is because of you're using previous version of `bitsandbytes`. In the latest version, there's an additional arguments called `is_paged`\r\n\r\nhttps://github.com/TimDettmers/bitsandbytes/blob/main/bitsandbytes/optim/lion.py#L9",
"@louisowen6 Hi, does there any minimal examples on how to enable LION optimizer with llama along with training with deepspeed?\r\nJust can't found such a detailed describe on this, it could be better if a experiment result compare with AdamW and LION on llama model",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,685 | 1,690 | 1,690 |
NONE
| null |
TypeError: Lion.__init__() got an unexpected keyword argument 'is_paged'
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23748/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23748/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23747
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23747/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23747/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23747/events
|
https://github.com/huggingface/transformers/pull/23747
| 1,725,061,966 |
PR_kwDOCUB6oc5RTgfO
| 23,747 |
Fix `pip install --upgrade accelerate` command in modeling_utils.py
|
{
"login": "tloen",
"id": 4811103,
"node_id": "MDQ6VXNlcjQ4MTExMDM=",
"avatar_url": "https://avatars.githubusercontent.com/u/4811103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tloen",
"html_url": "https://github.com/tloen",
"followers_url": "https://api.github.com/users/tloen/followers",
"following_url": "https://api.github.com/users/tloen/following{/other_user}",
"gists_url": "https://api.github.com/users/tloen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tloen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tloen/subscriptions",
"organizations_url": "https://api.github.com/users/tloen/orgs",
"repos_url": "https://api.github.com/users/tloen/repos",
"events_url": "https://api.github.com/users/tloen/events{/privacy}",
"received_events_url": "https://api.github.com/users/tloen/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,684 | 1,685 | 1,685 |
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Really anyone, I don't want to waste Tim Dettmers' time
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23747/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23747/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23747",
"html_url": "https://github.com/huggingface/transformers/pull/23747",
"diff_url": "https://github.com/huggingface/transformers/pull/23747.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23747.patch",
"merged_at": 1685015329000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23746
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23746/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23746/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23746/events
|
https://github.com/huggingface/transformers/issues/23746
| 1,724,991,366 |
I_kwDOCUB6oc5m0UeG
| 23,746 |
tf run_clm.py calls model.get_input_embeddings().weight which does not exist
|
{
"login": "wesboyt",
"id": 30701972,
"node_id": "MDQ6VXNlcjMwNzAxOTcy",
"avatar_url": "https://avatars.githubusercontent.com/u/30701972?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wesboyt",
"html_url": "https://github.com/wesboyt",
"followers_url": "https://api.github.com/users/wesboyt/followers",
"following_url": "https://api.github.com/users/wesboyt/following{/other_user}",
"gists_url": "https://api.github.com/users/wesboyt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wesboyt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wesboyt/subscriptions",
"organizations_url": "https://api.github.com/users/wesboyt/orgs",
"repos_url": "https://api.github.com/users/wesboyt/repos",
"events_url": "https://api.github.com/users/wesboyt/events{/privacy}",
"received_events_url": "https://api.github.com/users/wesboyt/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @Rocketknight1 ",
"I was able to fix it locally by calling embeddings.hidden_size. Not sure if that is correct.",
"Hi @wesboyt, can you paste the exact command you called `run_clm.py` with?",
"--overwrite_output_dir --do_train --model_type gpt2 --tokenizer_name gpt2-it --train_file encoded.txt --do_eval --dataloader_num_workers 1 --output_dir out --block_size 256 --save_steps 100000 --validation_file features.txt --learning_rate 0.001 --num_train_epochs 1 --optim adamw_torch --per_device_train_batch_size 8 --config_overrides num_hidden_layers=14,n_head=16,vocab_size=13,hidden_size=1024",
"from what i see in the debugger the type of the actual embeddings variable is TFSharedEmbeddings\r\n\r\nI was able to fix it by using hidden_size it was succesfully training after that change\r\n",
"the actual hidden size variable in the debugger did not align with the config overrides hidden_size parameter, it was like 728 or 768 or something along those lines.",
"Hi @wesboyt, I can't reproduce the issue here - I was able to train `gpt2` using the TF `run_clm.py` script and didn't encounter these errors. Can you try the command I used and confirm that there isn't some environment issue on your machine?\r\n```python run_clm.py --model_name_or_path gpt2 --output_dir output --dataset_name wikitext --dataset_config_name wikitext-103-raw-v1 --block_size 128```",
"Yea that runs for me, maybe its related to some of the other parameters. Its all good i fixed it locally for me and if people encounter it in the future they should be able to find this.",
"I will close this, sorry to distract the team. I believe it happened because i used torch adamw inside of tf. Funny how it still trained after my hiddensize fix."
] | 1,684 | 1,685 | 1,685 |
NONE
| null |
### System Info
Seems to happen on macOS, Linux, and Windows, with both Python 3.9 and 3.11. Currently using the stable release from pip.
### Who can help?
_No response_
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
If you run `run_clm.py` from scratch, it produces an error saying that `embeddings.weight` doesn't exist.
https://github.com/huggingface/transformers/blob/e45e756d22206ca8fa9fb057c8c3d8fa79bf81c6/examples/tensorflow/language-modeling/run_clm.py#L486
It looks like that line describes a temporary workaround; I'm guessing the `get_input_embeddings` function's contract has changed.
### Expected behavior
I would expect the weight variable to exist.
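A hedged sketch of the kind of local workaround mentioned in the comments; the fallback attribute names (`vocab_size`, `hidden_size`) are assumptions about the `TFSharedEmbeddings` layer, not a verified fix:
```python
from transformers import TFAutoModelForCausalLM

model = TFAutoModelForCausalLM.from_pretrained("gpt2")
embeddings = model.get_input_embeddings()

# Some TF embedding layers expose their size via attributes rather than a
# built `.weight` tensor, so fall back when `.weight` is missing.
if hasattr(embeddings, "weight"):
    embedding_size = embeddings.weight.shape[0]
elif hasattr(embeddings, "vocab_size"):
    embedding_size = embeddings.vocab_size
else:
    embedding_size = embeddings.hidden_size  # the fallback the reporter used
print(embedding_size)
```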
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23746/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23746/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23745
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23745/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23745/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23745/events
|
https://github.com/huggingface/transformers/issues/23745
| 1,724,843,662 |
I_kwDOCUB6oc5mzwaO
| 23,745 |
Bug using revision param in AutoModelForCausalLM.from_pretrained
|
{
"login": "Forbu",
"id": 11457947,
"node_id": "MDQ6VXNlcjExNDU3OTQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/11457947?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Forbu",
"html_url": "https://github.com/Forbu",
"followers_url": "https://api.github.com/users/Forbu/followers",
"following_url": "https://api.github.com/users/Forbu/following{/other_user}",
"gists_url": "https://api.github.com/users/Forbu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Forbu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Forbu/subscriptions",
"organizations_url": "https://api.github.com/users/Forbu/orgs",
"repos_url": "https://api.github.com/users/Forbu/repos",
"events_url": "https://api.github.com/users/Forbu/events{/privacy}",
"received_events_url": "https://api.github.com/users/Forbu/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @sgugger who is more familiar with this, I won't have bandwidth to dive into this now. ",
"The revision argument is supported for weights but not for the code at the moment. Support will be added soon, but in the meantime you can download the revision for this repo and then use `from_pretrained` with a local folder and it will work.",
"Nice thanks you @sgugger !",
"@sgugger isn't this a security issue? When using `trust_remote_code=True`, there is a warning to explicitly pass a revision to make sure that you are running code that you have looked at. But IIUC, if you pass a `revision=\"commit SHA I have verified\"` it will actually load whatever code is on the `main` branch?",
"@samhavens This comes from the recent change we made to avoid duplicating the code files in all repos (now there is one source of truth). As I said we're working on a fix, should come tomorrow/early next week.",
"If you want to give it a try, the PR linked above should fix your issue.",
"Thanks @sgugger!"
] | 1,684 | 1,685 | 1,685 |
NONE
| null |
### System Info
2023-05-24 23:09:53.575434: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
WARNING:tensorflow:From /usr/local/lib/python3.10/dist-packages/transformers/commands/env.py:63: is_gpu_available (from tensorflow.python.framework.test_util) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.config.list_physical_devices('GPU')` instead.
2023-05-24 23:10:05.261610: W tensorflow/core/common_runtime/gpu/gpu_bfc_allocator.cc:47] Overriding orig_value setting because the TF_FORCE_GPU_ALLOW_GROWTH environment variable is set. Original config value was 0.
Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points.
- `transformers` version: 4.29.2
- Platform: Linux-5.15.107+-x86_64-with-glibc2.31
- Python version: 3.10.11
- Huggingface_hub version: 0.14.1
- Safetensors version: not installed
- PyTorch version (GPU?): 2.0.1+cu118 (True)
- Tensorflow version (GPU?): 2.12.0 (True)
- Flax version (CPU?/GPU?/TPU?): 0.6.9 (gpu)
- Jax version: 0.4.10
- JaxLib version: 0.4.10
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@ArthurZucker
@younesbelkada
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I was trying to use the new shiny MPT model from the Hugging Face Hub at a specific revision:
```python
from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer
import torch
import transformers
import accelerate
model_name = 'mosaicml/mpt-7b'
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    trust_remote_code=True,
    revision="refs/pr/23",
    device_map="auto",
)
```
But I stumbled on this error after running the above code:
`ValueError: MPTForCausalLM does not support `device_map='auto'` yet.`
The "auto" was indeed not supported in the main branch but we add a correction in the PR branch (so the argument revision="refs/pr/23")
I did some investigation, and the model was indeed loading the `.py` files from `main`:
```
Downloading (…)main/modeling_mpt.py: 100%
17.4k/17.4k [00:00<00:00, 1.12MB/s]
Downloading (…)in/param_init_fns.py: 100%
12.6k/12.6k [00:00<00:00, 971kB/s]
Downloading (…)resolve/main/norm.py: 100%
2.56k/2.56k [00:00<00:00, 131kB/s]
A new version of the following files was downloaded from https://huggingface.co/mosaicml/mpt-7b:
- norm.py
. Make sure to double-check they do not contain any added malicious code. To avoid downloading new versions of the code file, you can pin a revision.
A new version of the following files was downloaded from https://huggingface.co/mosaicml/mpt-7b:
- param_init_fns.py
- norm.py
```
You can see the `main/` prefix here. I manually checked the `modeling_mpt.py` file and it didn't have the PR changes.
So I tried to find where the bug was inside the transformers package ... (first time looking at the code).
I am a bit surprised!
Basically, the code rewrites the config values after having read them: it adds the repo id information (in `add_model_info_to_auto_map` in `utils/generic.py` of the transformers package), which seems normal.
```
"auto_map": {
"AutoConfig": "mosaicml/mpt-7b--configuration_mpt.MPTConfig",
"AutoModelForCausalLM": "mosaicml/mpt-7b--modeling_mpt.MPTForCausalLM"
}
```
It notably adds the `--` string.
Then in `get_class_from_dynamic_module` (in `dynamic_module_utils.py`) it has:
```python
if "--" in class_reference:
    repo_id, class_reference = class_reference.split("--")
    # Invalidate revision since it's not relevant for this repo
    revision = "main"
```
So the revision becomes "main" and from there we are done.
I suppose that if I open a PR removing the revision override, some people will not be happy?
### Expected behavior
The expected behaviour is to load the files from the PR branch (not from `main/`).
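A sketch of the interim workaround suggested in the comments: download the revision to a local folder and load from there, so the remote-code files actually come from that revision (not verified here):
```python
from huggingface_hub import snapshot_download
from transformers import AutoModelForCausalLM

# Pull the PR revision locally, then load from the local folder.
local_dir = snapshot_download("mosaicml/mpt-7b", revision="refs/pr/23")
model = AutoModelForCausalLM.from_pretrained(
    local_dir,
    trust_remote_code=True,
    device_map="auto",
)
```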
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23745/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23745/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23744
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23744/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23744/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23744/events
|
https://github.com/huggingface/transformers/issues/23744
| 1,724,817,824 |
I_kwDOCUB6oc5mzqGg
| 23,744 |
ImportError: cannot import name 'PartialState' from 'transformers.trainer_pt_utils'
|
{
"login": "hoangledoan",
"id": 83858447,
"node_id": "MDQ6VXNlcjgzODU4NDQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/83858447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hoangledoan",
"html_url": "https://github.com/hoangledoan",
"followers_url": "https://api.github.com/users/hoangledoan/followers",
"following_url": "https://api.github.com/users/hoangledoan/following{/other_user}",
"gists_url": "https://api.github.com/users/hoangledoan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hoangledoan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hoangledoan/subscriptions",
"organizations_url": "https://api.github.com/users/hoangledoan/orgs",
"repos_url": "https://api.github.com/users/hoangledoan/repos",
"events_url": "https://api.github.com/users/hoangledoan/events{/privacy}",
"received_events_url": "https://api.github.com/users/hoangledoan/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"That class is defined in Accelerate, not in `trainer_utils`.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,684 | 1,688 | 1,688 |
NONE
| null |
### System Info
`transformers` version: 4.28.0
`accelerate` version: 0.19.0
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
`from transformers.trainer_pt_utils import PartialState`
### Expected behavior
It cannot import the class, even though I tried to downgrade transformers and install accelerate.
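As the comment above notes, the class lives in Accelerate rather than in `transformers.trainer_pt_utils`. A minimal sketch of the working import, assuming a recent `accelerate` such as the 0.19.0 listed above:
```python
from accelerate import PartialState

state = PartialState()
print(state.device, state.num_processes)
```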
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23744/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23744/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23743
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23743/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23743/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23743/events
|
https://github.com/huggingface/transformers/issues/23743
| 1,724,813,275 |
I_kwDOCUB6oc5mzo_b
| 23,743 |
BertTokenizer.save_vocabulary does not save the full vocab
|
{
"login": "dennymarcels",
"id": 12802916,
"node_id": "MDQ6VXNlcjEyODAyOTE2",
"avatar_url": "https://avatars.githubusercontent.com/u/12802916?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dennymarcels",
"html_url": "https://github.com/dennymarcels",
"followers_url": "https://api.github.com/users/dennymarcels/followers",
"following_url": "https://api.github.com/users/dennymarcels/following{/other_user}",
"gists_url": "https://api.github.com/users/dennymarcels/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dennymarcels/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dennymarcels/subscriptions",
"organizations_url": "https://api.github.com/users/dennymarcels/orgs",
"repos_url": "https://api.github.com/users/dennymarcels/repos",
"events_url": "https://api.github.com/users/dennymarcels/events{/privacy}",
"received_events_url": "https://api.github.com/users/dennymarcels/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hey! Not sure which part of the documentation says that, but it is in the wrong! The additional special tokens are saved somewhere else, and properly handled. `Bert` is also one of our core model, but also an old one, which is why the doc might not be up to date. When loading the vocab, only the actual vocabulary is expected. \r\nTell me if this does not solve your confusion! ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,684 | 1,688 | 1,688 |
CONTRIBUTOR
| null |
### System Info
The current version still has the issue.
The [documentation](https://huggingface.co/transformers/v4.8.2/model_doc/bert.html?highlight=berttokenizer#transformers.BertTokenizer.save_vocabulary) states that `save_vocabulary` "save[s] only the vocabulary of the tokenizer **(vocabulary + added tokens)**" but the [code](https://github.com/huggingface/transformers/blob/e45e756d22206ca8fa9fb057c8c3d8fa79bf81c6/src/transformers/models/bert/tokenization_bert.py#L358) only deals with the vocab. I believe changing this line to:
```python
for token, token_index in sorted(dict(self.vocab, **self.added_tokens_encoder).items(), key=lambda kv: kv[1]):
```
would solve it.
### Who can help?
@ArthurZucker @younesbelkada
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Running `save_vocabulary` as is reproduces the issue.
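A minimal reproduction sketch; the checkpoint and the added token are illustrative choices, not taken from the original report:
```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
tokenizer.add_tokens(["[NEW_TOKEN]"])

# save_vocabulary writes vocab.txt only; the added token does not appear in it.
vocab_files = tokenizer.save_vocabulary(".")
print(vocab_files)
```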
### Expected behavior
The full (new) vocabulary should be saved to file.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23743/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23743/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23742
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23742/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23742/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23742/events
|
https://github.com/huggingface/transformers/pull/23742
| 1,724,792,123 |
PR_kwDOCUB6oc5RSnSA
| 23,742 |
Remove the multi step tokenization warning when using HF Data Collators
|
{
"login": "JulesGM",
"id": 3231217,
"node_id": "MDQ6VXNlcjMyMzEyMTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3231217?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JulesGM",
"html_url": "https://github.com/JulesGM",
"followers_url": "https://api.github.com/users/JulesGM/followers",
"following_url": "https://api.github.com/users/JulesGM/following{/other_user}",
"gists_url": "https://api.github.com/users/JulesGM/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JulesGM/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JulesGM/subscriptions",
"organizations_url": "https://api.github.com/users/JulesGM/orgs",
"repos_url": "https://api.github.com/users/JulesGM/repos",
"events_url": "https://api.github.com/users/JulesGM/events{/privacy}",
"received_events_url": "https://api.github.com/users/JulesGM/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"@sgugger",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23742). All of your documentation changes will be reflected on that endpoint.",
"Doing things this way could lead to weird (though not dangerous) race behaviour in threaded code.\r\n\r\nThe only way I see to not get that race behavior is to let `pad` take an extra argument to disable the warning, though I understand that API providers are very hesitant to alter APIs in any way\r\n ",
"The race behavior would just be that the warning would not be displayed ofc",
"The data collator cannot really be in threaded code. Different processes? Definitely for distributed training but then they will each have their own tokenizer. So I think it's a risk we can take.",
"I'm sorry @sgugger, you mean the branch name? Or the pull request name? Or the function name? I changed the name of the pull request.\r\n\r\nAlso, am I supposed to run `make quality; make style` first? & am I supposed to run some tests?",
"I synced the branch",
"The PR now deletes 26 doc files, so it's not really something we can merge :grimacing: ",
"OK I fixed the weird document files deletion. Really not sure how that happened. Sorry about that.",
"Could you just run `make style` to fix the formatting issues? Thanks!",
"done",
"There is still an error. Can you make sure to do `pip install transformers[\"quality\"] --upgrade` (to make sure you have a proper version of all the necessary libs)?",
"Mmm now we're back to 43 files changed :cry: ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,684 | 1,688 | 1,688 |
NONE
| null |
We create a new function that pads without warning the user to call `tokenizer.forward` instead (the warning exists because that path is faster).
We use it for Transformers' own DataCollator calls.
It doesn't make much sense that a DataCollator would change the state of a tokenizer imho, so every time:
- we save the state of the tokenizer with regard to the warning
- disable the warning
- pad
- restore the state of whether we want to warn or not (a sketch of this pattern is given below).
See https://github.com/huggingface/transformers/issues/22638
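A minimal sketch of that save/disable/pad/restore pattern; the `deprecation_warnings` key below is an assumption about how the tokenizer tracks this particular warning, not necessarily what this PR implements:
```python
def pad_without_warning(tokenizer, *pad_args, **pad_kwargs):
    # Save the current warning state, silence the warning, pad, then restore it.
    key = "Asking-to-pad-a-fast-tokenizer"
    previous = tokenizer.deprecation_warnings.get(key, False)
    tokenizer.deprecation_warnings[key] = True  # marks the warning as already emitted
    try:
        return tokenizer.pad(*pad_args, **pad_kwargs)
    finally:
        tokenizer.deprecation_warnings[key] = previous
```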
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23742/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23742/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23742",
"html_url": "https://github.com/huggingface/transformers/pull/23742",
"diff_url": "https://github.com/huggingface/transformers/pull/23742.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23742.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/23741
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23741/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23741/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23741/events
|
https://github.com/huggingface/transformers/pull/23741
| 1,724,411,790 |
PR_kwDOCUB6oc5RRUWL
| 23,741 |
asd
|
{
"login": "tubulocristate",
"id": 79148589,
"node_id": "MDQ6VXNlcjc5MTQ4NTg5",
"avatar_url": "https://avatars.githubusercontent.com/u/79148589?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tubulocristate",
"html_url": "https://github.com/tubulocristate",
"followers_url": "https://api.github.com/users/tubulocristate/followers",
"following_url": "https://api.github.com/users/tubulocristate/following{/other_user}",
"gists_url": "https://api.github.com/users/tubulocristate/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tubulocristate/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tubulocristate/subscriptions",
"organizations_url": "https://api.github.com/users/tubulocristate/orgs",
"repos_url": "https://api.github.com/users/tubulocristate/repos",
"events_url": "https://api.github.com/users/tubulocristate/events{/privacy}",
"received_events_url": "https://api.github.com/users/tubulocristate/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23741). All of your documentation changes will be reflected on that endpoint.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,684 | 1,688 | 1,688 |
NONE
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23741/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23741/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23741",
"html_url": "https://github.com/huggingface/transformers/pull/23741",
"diff_url": "https://github.com/huggingface/transformers/pull/23741.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23741.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/23740
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23740/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23740/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23740/events
|
https://github.com/huggingface/transformers/pull/23740
| 1,724,405,838 |
PR_kwDOCUB6oc5RRTEB
| 23,740 |
add type hint in pipeline model argument
|
{
"login": "y3sar",
"id": 16244698,
"node_id": "MDQ6VXNlcjE2MjQ0Njk4",
"avatar_url": "https://avatars.githubusercontent.com/u/16244698?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/y3sar",
"html_url": "https://github.com/y3sar",
"followers_url": "https://api.github.com/users/y3sar/followers",
"following_url": "https://api.github.com/users/y3sar/following{/other_user}",
"gists_url": "https://api.github.com/users/y3sar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/y3sar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/y3sar/subscriptions",
"organizations_url": "https://api.github.com/users/y3sar/orgs",
"repos_url": "https://api.github.com/users/y3sar/repos",
"events_url": "https://api.github.com/users/y3sar/events{/privacy}",
"received_events_url": "https://api.github.com/users/y3sar/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"I think I should add PretrainedModel and TFPretrainedModel in string form just like PreTrainedTokenizerFast was given in tokenizer argument. TYPE_CHECKING is false by default",
"LGTM.\r\n"
] | 1,684 | 1,685 | 1,685 |
CONTRIBUTOR
| null |
# What does this PR do?
This PR adds type hints for the `model` argument of the `pipeline` function.
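A hypothetical sketch of the kind of hint being added; the real `pipeline` signature has many more parameters, and the string forward references follow the `PreTrainedTokenizerFast` precedent mentioned in the comments above:
```python
from typing import Optional, Union

def pipeline(
    task: Optional[str] = None,
    model: Optional[Union[str, "PreTrainedModel", "TFPreTrainedModel"]] = None,
    **kwargs,
):
    ...
```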
## Who can review?
@Narsil
@amyeroberts
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23740/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23740/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23740",
"html_url": "https://github.com/huggingface/transformers/pull/23740",
"diff_url": "https://github.com/huggingface/transformers/pull/23740.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23740.patch",
"merged_at": 1685441158000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23739
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23739/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23739/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23739/events
|
https://github.com/huggingface/transformers/issues/23739
| 1,724,340,177 |
I_kwDOCUB6oc5mx1fR
| 23,739 |
Add DINOv2 to Transformers
|
{
"login": "EduardoPach",
"id": 69953243,
"node_id": "MDQ6VXNlcjY5OTUzMjQz",
"avatar_url": "https://avatars.githubusercontent.com/u/69953243?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/EduardoPach",
"html_url": "https://github.com/EduardoPach",
"followers_url": "https://api.github.com/users/EduardoPach/followers",
"following_url": "https://api.github.com/users/EduardoPach/following{/other_user}",
"gists_url": "https://api.github.com/users/EduardoPach/gists{/gist_id}",
"starred_url": "https://api.github.com/users/EduardoPach/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/EduardoPach/subscriptions",
"organizations_url": "https://api.github.com/users/EduardoPach/orgs",
"repos_url": "https://api.github.com/users/EduardoPach/repos",
"events_url": "https://api.github.com/users/EduardoPach/events{/privacy}",
"received_events_url": "https://api.github.com/users/EduardoPach/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] |
closed
| false | null |
[] |
[
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,684 | 1,689 | 1,689 |
NONE
| null |
### Feature request
Add DINOv2 to the transformers library.
Weights are available in [DINOv2 repo](https://github.com/facebookresearch/dinov2)
### Motivation
Currently, DINOv2 can be used through `torch.hub.load`, but having it ported to transformers directly would be nice; since DINOv1 is already in the library, it might not be that difficult to do.
### Your contribution
I would love to open a PR to make this addition, provided it's not too hard and someone can point me in the right direction.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23739/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23739/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23738
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23738/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23738/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23738/events
|
https://github.com/huggingface/transformers/pull/23738
| 1,724,336,925 |
PR_kwDOCUB6oc5RREFl
| 23,738 |
Remove the last few TF serving sigs
|
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,684 | 1,684 | 1,684 |
MEMBER
| null |
A couple of serving signatures arrived while the PR was open - this should remove the last of them.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23738/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23738/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23738",
"html_url": "https://github.com/huggingface/transformers/pull/23738",
"diff_url": "https://github.com/huggingface/transformers/pull/23738.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23738.patch",
"merged_at": 1684959585000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23737
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23737/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23737/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23737/events
|
https://github.com/huggingface/transformers/pull/23737
| 1,724,277,678 |
PR_kwDOCUB6oc5RQ3HI
| 23,737 |
Revamp test selection for the example tests
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[] | 1,684 | 1,686 | 1,685 |
COLLABORATOR
| null |
# What does this PR do?
After the revamp of the test_fetcher, the test collection for the example tests wasn't working anymore: if only an example is changed, the corresponding tests are not run. This PR adapts the example test collection to the new test fetcher and makes sure those tests are run appropriately. It's more fine-grained than the previous approach, which ran all example tests as soon as a diff was discovered: here the tests are run only when the modifications impact the example tests.
To see this in action:
- at the [first commit](https://app.circleci.com/pipelines/github/huggingface/transformers/65204) the diff only impacts repo utils, so only the test repo utils job is run (no example or other test jobs).
- at the [second commit](https://app.circleci.com/pipelines/github/huggingface/transformers/65207) the diff has a change in the PyTorch `run_glue`, so the PyTorch example test job is run, but only on the trainer examples (not the no_trainer ones since they are not touched).
- at the [third commit](https://app.circleci.com/pipelines/github/huggingface/transformers/65208), we remove the fake example modification and add a fake change in the Trainer, which impacts all examples.
cc @ydshieh for when you're back from vacation.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23737/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23737/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23737",
"html_url": "https://github.com/huggingface/transformers/pull/23737",
"diff_url": "https://github.com/huggingface/transformers/pull/23737.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23737.patch",
"merged_at": 1685021902000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23736
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23736/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23736/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23736/events
|
https://github.com/huggingface/transformers/pull/23736
| 1,724,245,491 |
PR_kwDOCUB6oc5RQwJG
| 23,736 |
[Whisper] Reduce batch size in tests
|
{
"login": "sanchit-gandhi",
"id": 93869735,
"node_id": "U_kgDOBZhWpw",
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanchit-gandhi",
"html_url": "https://github.com/sanchit-gandhi",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,684 | 1,687 | 1,684 |
CONTRIBUTOR
| null |
# What does this PR do?
The slowest PyTorch tests as of 10th May were reported as follows:
```
68.24s call tests/models/whisper/test_tokenization_whisper.py::WhisperTokenizerTest::test_maximum_encoding_length_pair_input
64.24s call tests/models/whisper/test_tokenization_whisper.py::WhisperTokenizerTest::test_maximum_encoding_length_single_input
62.17s call tests/models/whisper/test_tokenization_whisper.py::WhisperTokenizerTest::test_internal_consistency
60.57s call tests/models/whisper/test_tokenization_whisper.py::WhisperTokenizerTest::test_add_special_tokens
55.37s call tests/models/whisper/test_modeling_whisper.py::WhisperEncoderModelTest::test_model_outputs_equivalence
52.59s call tests/models/mobilebert/test_modeling_mobilebert.py::MobileBertModelTest::test_save_load_fast_init_from_base
...
```
Source: https://huggingface.slack.com/archives/C01NE71C4F7/p1683734816738159
Taking a deeper look, we see that the first four of these tests were Whisper tokeniser tests. Running locally, the Whisper tokenisation tests take a fraction of the time reported above. For instance, I can run the **entire** Whisper tokenisation test suite in 48s, with the tests that are slowest on the CI taking less than a tenth of that time locally:
```
3.43s call tests/models/whisper/test_tokenization_whisper.py::WhisperTokenizerTest::test_maximum_encoding_length_pair_input
3.52s call tests/models/whisper/test_tokenization_whisper.py::WhisperTokenizerTest::test_maximum_encoding_length_single_input
3.43s call tests/models/whisper/test_tokenization_whisper.py::WhisperTokenizerTest::test_internal_consistency
0.34s call tests/models/whisper/test_tokenization_whisper.py::WhisperTokenizerTest::test_add_special_tokens
```
Checking more recent CI runs, we see that these tokenisation tests have disappeared from the top tests by time, e.g. for the PR [#23223](https://circleci-tasks-prod.s3.us-east-1.amazonaws.com/forks/storage/artifacts/d5b57382-7f67-4274-9623-7f238ef4fb6f/636881279/6382baaf-cb48-4b59-ad00-ab8ab3d42763/0/~/transformers/reports/tests_torch/durations.txt?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=ASIAQVFQINEOOAPIWJIW%2F20230524%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20230524T150859Z&X-Amz-Expires=60&X-Amz-Security-Token=IQoJb3JpZ2luX2VjEMj%2F%2F%2F%2F%2F%2F%2F%2F%2F%2FwEaCXVzLWVhc3QtMSJHMEUCIFSTbUmgA5nKTMzcsI888FWe4%2BqsFRrE42wJsJkxpXrjAiEA7NGVGvb6DVyu%2FmuTm%2BLg07L6KkB5v%2Fv4yFpxIqql4AUqtAII8P%2F%2F%2F%2F%2F%2F%2F%2F%2F%2FARADGgwwNDU0NjY4MDY1NTYiDHUW%2F9eGKhbQqo67WyqIAt%2FowvbsfkuLoidqZLHQeXrqZl55RY4AJnMGRfT0xY12jCvpjZ7F89p9cJbln9stqj6NpUSSaqzrO4L7XaHhmJ8I8ZRoUo7Uwk4J7ll3CvohzUJqVzYnEzeLvinEF0%2Bi0mx6ZT2DyP8bYjJI%2BWAu25gTByeny05l6xH9PNy7kxurax9scDnBc7Be0Gs56y74F2%2FVvrdCaxigY0wSNfgCXyasfX%2FIq87UP7y%2BVZIxDoL5zD1gbZmUulf3gL5VAcaOwOtkmhCwisjc%2BbTbUSHMTpJZ1D77U3mSmJVXUSQnzpNavl%2FVHXY3DOh45KFoQETemsTehhUlOCMyV91IrO%2BwLacXlzbLsQUMBDCo0LijBjqdAVgdasvAQc5feVYL4SuV5Re4TIrGh6cLLj689oFfoVilLj8AjcOj5GSFHUBCzH792fYCmTZIF%2B66qJ3ieBTup5C%2B1PkaoEZ3mQjAFi8fjUiDDwrFGWlCwTtOklha1HAMKPPmEIW4Q5nhp5OkvPU5CPrZcIm90VNLGCy7CunrrugV74llZpDAU5UwwkUJRYLp3aP8nifKGMDj924ubj8%3D&X-Amz-SignedHeaders=host&x-id=GetObject&X-Amz-Signature=9c473476aa6883dc476f641125a3b9ea33a8c5df489676e0ff1bab13129831b0) from the 18th May. Note that these are just generic tokenisation tests, and the Whisper tokeniser is one-to-one the same as GPT2, so we’d expect the same runtime here.
In this PR, we speed up the Whisper modelling tests by a factor of ~4x by reducing the batch size from 13 to 2, which should address the slow modelling tests. We'll monitor the Whisper tokenisation tests to see if they keep cropping up as the slowest PyTorch tests in the future and amend as necessary.
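For intuition, here is a standalone, hedged sketch (not the actual test-suite code) of how strongly the batch size drives the forward-pass cost of a tiny, randomly initialised Whisper model; every config value below is made up for illustration:
```python
import time

import torch
from transformers import WhisperConfig, WhisperModel

# Tiny config so the script runs in seconds; these values are illustrative only.
config = WhisperConfig(
    vocab_size=100,
    d_model=16,
    encoder_layers=2,
    decoder_layers=2,
    encoder_attention_heads=4,
    decoder_attention_heads=4,
    encoder_ffn_dim=32,
    decoder_ffn_dim=32,
    max_source_positions=30,
    max_target_positions=40,
)
model = WhisperModel(config).eval()

for batch_size in (13, 2):  # old vs. new test batch size
    # Whisper expects 2 * max_source_positions mel frames (the conv stack halves the length).
    input_features = torch.randn(batch_size, config.num_mel_bins, 2 * config.max_source_positions)
    decoder_input_ids = torch.ones(batch_size, 4, dtype=torch.long)
    start = time.perf_counter()
    with torch.no_grad():
        model(input_features=input_features, decoder_input_ids=decoder_input_ids)
    print(f"batch_size={batch_size}: {time.perf_counter() - start:.4f}s")
```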
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23736/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23736/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23736",
"html_url": "https://github.com/huggingface/transformers/pull/23736",
"diff_url": "https://github.com/huggingface/transformers/pull/23736.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23736.patch",
"merged_at": 1684945886000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23735
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23735/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23735/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23735/events
|
https://github.com/huggingface/transformers/pull/23735
| 1,724,196,568 |
PR_kwDOCUB6oc5RQlTS
| 23,735 |
fix: delete duplicate sentences in `document_question_answering.mdx`
|
{
"login": "jungnerd",
"id": 46880056,
"node_id": "MDQ6VXNlcjQ2ODgwMDU2",
"avatar_url": "https://avatars.githubusercontent.com/u/46880056?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jungnerd",
"html_url": "https://github.com/jungnerd",
"followers_url": "https://api.github.com/users/jungnerd/followers",
"following_url": "https://api.github.com/users/jungnerd/following{/other_user}",
"gists_url": "https://api.github.com/users/jungnerd/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jungnerd/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jungnerd/subscriptions",
"organizations_url": "https://api.github.com/users/jungnerd/orgs",
"repos_url": "https://api.github.com/users/jungnerd/repos",
"events_url": "https://api.github.com/users/jungnerd/events{/privacy}",
"received_events_url": "https://api.github.com/users/jungnerd/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,684 | 1,684 | 1,684 |
CONTRIBUTOR
| null |
# What does this PR do?
There are duplicate sentences in `document_question_answering.mdx` from lines 40 to 45, so this PR deletes the sentences from lines 43 to 45.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #23729
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
## Who can review?
@sgugger
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23735/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23735/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23735",
"html_url": "https://github.com/huggingface/transformers/pull/23735",
"diff_url": "https://github.com/huggingface/transformers/pull/23735.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23735.patch",
"merged_at": 1684941650000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23734
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23734/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23734/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23734/events
|
https://github.com/huggingface/transformers/issues/23734
| 1,724,130,580 |
I_kwDOCUB6oc5mxCUU
| 23,734 |
Is it possible to use the transformers library with models, e.g. t5-small, commercially?
|
{
"login": "ozgesevgili",
"id": 6892804,
"node_id": "MDQ6VXNlcjY4OTI4MDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/6892804?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ozgesevgili",
"html_url": "https://github.com/ozgesevgili",
"followers_url": "https://api.github.com/users/ozgesevgili/followers",
"following_url": "https://api.github.com/users/ozgesevgili/following{/other_user}",
"gists_url": "https://api.github.com/users/ozgesevgili/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ozgesevgili/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ozgesevgili/subscriptions",
"organizations_url": "https://api.github.com/users/ozgesevgili/orgs",
"repos_url": "https://api.github.com/users/ozgesevgili/repos",
"events_url": "https://api.github.com/users/ozgesevgili/events{/privacy}",
"received_events_url": "https://api.github.com/users/ozgesevgili/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Questions like this are better suited on the [forums](https://discuss.huggingface.co/) as we keep issues for feature requests and bugs only. The Transformers library is Apache 2.0 so there is no problem using it for commercial use. Then it's up to the model you are using. As you noted `t5-small` should be fine.",
"Thanks for the quick reply.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,684 | 1,688 | 1,688 |
NONE
| null |
Hi!
I would like to use the transformers library with models, e.g. t5-small, commercially. I checked that the license of the transformers library is Apache 2.0, and the license of the model, e.g. t5-small (https://huggingface.co/t5-small), is also Apache 2.0. So, can I use the library with such models commercially?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23734/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23734/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23733
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23733/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23733/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23733/events
|
https://github.com/huggingface/transformers/issues/23733
| 1,724,110,444 |
I_kwDOCUB6oc5mw9Zs
| 23,733 |
High memory usage for BigBirdForPreTraining
|
{
"login": "kuben-joz",
"id": 8881518,
"node_id": "MDQ6VXNlcjg4ODE1MTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/8881518?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kuben-joz",
"html_url": "https://github.com/kuben-joz",
"followers_url": "https://api.github.com/users/kuben-joz/followers",
"following_url": "https://api.github.com/users/kuben-joz/following{/other_user}",
"gists_url": "https://api.github.com/users/kuben-joz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kuben-joz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kuben-joz/subscriptions",
"organizations_url": "https://api.github.com/users/kuben-joz/orgs",
"repos_url": "https://api.github.com/users/kuben-joz/repos",
"events_url": "https://api.github.com/users/kuben-joz/events{/privacy}",
"received_events_url": "https://api.github.com/users/kuben-joz/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Sorry, but this is lacking a lot of information, especially a reproducing script. Lots of things can impact the RAM taken by the model, this should be asked on the [forum](https://discuss.huggingface.co/top?period=weekly), not here. ",
"I added a reproducing script here:\r\nhttps://github.com/kuben-joz/bigbird-example/tree/master\r\n\r\nI thought it would be better to make a submission here rather than the forums as it concerns the implementation details of the model. If nevertheless you prefer for this to be moved to the forum I can do so.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,684 | 1,688 | 1,688 |
NONE
| null |
### System Info
DGX-A100
transformers 4.29.2
### Who can help?
@ArthurZucker
@ydshieh
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Run BigBirdForPreTraining with the BigBirdConfig left at default (except vocab size set to 32k). Essentially I recreated pretraining from section F.1 of the original publication https://arxiv.org/pdf/2007.14062.pdf
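A rough sketch of the setup (the exact sequence length and the use of `next_sentence_label` are assumptions based on the paper's BERT-style objective):
```python
import torch
from transformers import BigBirdConfig, BigBirdForPreTraining

device = "cuda" if torch.cuda.is_available() else "cpu"

# Default config (block-sparse attention, 4096 max positions) apart from the 32k vocab.
config = BigBirdConfig(vocab_size=32_000)
model = BigBirdForPreTraining(config).to(device)

batch_size, seq_len = 4, config.max_position_embeddings
input_ids = torch.randint(0, config.vocab_size, (batch_size, seq_len), device=device)
labels = input_ids.clone()
next_sentence_label = torch.zeros(batch_size, dtype=torch.long, device=device)

# The reported memory spike already happens in this forward call.
outputs = model(input_ids=input_ids, labels=labels, next_sentence_label=next_sentence_label)
print("loss:", outputs.loss.item())
if device == "cuda":
    print(f"{torch.cuda.max_memory_allocated() / 2**30:.1f} GiB allocated")
```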
### Expected behavior
According to the paper, a batch size of 4 should fit in ~16 GB, but in reality a batch size of 4 exceeds 40 GB on the forward call.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23733/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23733/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23732
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23732/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23732/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23732/events
|
https://github.com/huggingface/transformers/pull/23732
| 1,724,097,450 |
PR_kwDOCUB6oc5RQPdv
| 23,732 |
TF SAM memory reduction
|
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,684 | 1,684 | 1,684 |
MEMBER
| null |
Extremely small PR to use smaller dummy inputs to build SAM, which might help with memory issues on smaller devices.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23732/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23732/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23732",
"html_url": "https://github.com/huggingface/transformers/pull/23732",
"diff_url": "https://github.com/huggingface/transformers/pull/23732.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23732.patch",
"merged_at": 1684940343000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23731
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23731/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23731/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23731/events
|
https://github.com/huggingface/transformers/issues/23731
| 1,723,491,566 |
I_kwDOCUB6oc5mumTu
| 23,731 |
MBART/MBART-50 PreTraining ?
|
{
"login": "yash-srivastava19",
"id": 85068689,
"node_id": "MDQ6VXNlcjg1MDY4Njg5",
"avatar_url": "https://avatars.githubusercontent.com/u/85068689?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yash-srivastava19",
"html_url": "https://github.com/yash-srivastava19",
"followers_url": "https://api.github.com/users/yash-srivastava19/followers",
"following_url": "https://api.github.com/users/yash-srivastava19/following{/other_user}",
"gists_url": "https://api.github.com/users/yash-srivastava19/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yash-srivastava19/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yash-srivastava19/subscriptions",
"organizations_url": "https://api.github.com/users/yash-srivastava19/orgs",
"repos_url": "https://api.github.com/users/yash-srivastava19/repos",
"events_url": "https://api.github.com/users/yash-srivastava19/events{/privacy}",
"received_events_url": "https://api.github.com/users/yash-srivastava19/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @ArthurZucker ",
"Could you share a minimal reproducing script to isolate the bug that you are getting? \r\nSeems like the path that is trying to be reached is `./assamese_MBART`, check that the local file has a tokenizer, maybe use `use_local` only.",
"Hi !\r\n\r\nSimilar to the one given in the blog post, I started with something like : \r\n```\r\nfrom tokenizers import ByteLevelBPETokenizer\r\nfrom tokenizers.processors import BertProcessing\r\n\r\ntokenizer = ByteLevelBPETokenizer(\r\n \"/content/assamese_BART/vocab.json\",\r\n \"/content/assamese_BART/merges.txt\",)\r\n\r\ntokenizer._tokenizer.post_processor = BertProcessing(\r\n (\"</s>\", tokenizer.token_to_id(\"</s>\")),\r\n (\"<s>\", tokenizer.token_to_id(\"<s>\")),\r\n)\r\n\r\ntokenizer.enable_truncation(max_length=512)\r\n```\r\nThis seems to run fine, but here : \r\n\r\n```\r\nconfig = MBartConfig( vocab_size=50000,max_position_embeddings=32,num_attention_heads=2,\r\n num_hidden_layers=1,type_vocab_size=1,\r\n)\r\n\r\ntokenizer = MBart50TokenizerFast.from_pretrained(\"./assamese_BART\", max_len=64)\r\n\r\nmodel = AutoModelForMaskedLM.from_config(config=config)\r\nprint(model.num_parameters())\r\n```\r\nIn the `MBart50TokenizerFast.from_pretrained` line, it is showing that it doesn't recognize the path, but I used the same path to train two other models(RoBERTa and BART), and this exception was not raised. \r\n\r\nAlso, the local file has the tokenizer(`vocab.json` and `merges.txt`) in the same folder `./assamese_MBART`\r\n\r\nI'll try to use the `use_local` argument and let you know shortly..",
"The`use_local` argument didn't work either. I reckon the issue is due to the `BertProcessing`, which worked for BART and RoBERTa, but not for MBart50.\r\n\r\nI look forward to your answer. ",
"The blogpost is 3 years old, so very much outdated. If you look at the script you gave me, you are never saving the tokenizer to the `./assamese_BART` repo. ",
"I agree that the blog post is a little to old, but I couldn't find any relevant articles anywhere. \r\n\r\n> The blogpost is 3 years old, so very much outdated. If you look at the script you gave me, you are never saving the tokenizer to the `./assamese_BART` repo.\r\n\r\nI did save it, just that I forgot to include it in the code. The only problem is that I can't figure out why is it working for RoBERTa and BART, but not MBart50, as I need to implement that only 😿. \r\n\r\nAlso, the log produced : \r\n` ... Otherwise, make sure './assamese_BART' is the correct path to a directory containing all relevant files for a MBart50TokenizerFast tokenizer`\r\n\r\nApart from `vocab.json` and `merges.txt` does MBart50Tokenizer needs some other file due to which it is not reading it?\r\n\r\nIf you could point me/tell me how I can do it would be really really helpful 🙏🏻 ",
"No worries. Could you just give me a full reproducing script of what your are doing with MBart50,that way I can check if we indeed have aproblem with the blogpost and it might have to be update! ",
"I somehow managed to make it work by using `SentencePiece` tokenizer instead of the one given in the script. It now reads the spm file and works when it reads the `spm.model` file. This is what I did : \r\n\r\n```\r\nimport sentencepiece as spm \r\n\r\nspm.SentencePieceTrainer.Train(\r\n input='/content/as_mod.txt',\r\n model_prefix='spm',\r\n vocab_size=1000,\r\n pad_piece='<pad>',\r\n bos_piece='<s>',\r\n eos_piece='</s>',\r\n user_defined_symbols='<mask>',\r\n model_type='unigram'\r\n)\r\n\r\nsp = spm.SentencePieceProcessor()\r\nsp.Load('spm.model')\r\n```\r\n\r\nNow, instead of the `from_pretrained` method, I directly use the `spm.model` file. Code for that : \r\n```\r\nconfig = MBartConfig(\r\n vocab_size=1000,\r\n max_position_embeddings=32,\r\n num_attention_heads=2,\r\n num_hidden_layers=1,\r\n type_vocab_size=1,\r\n)\r\n\r\n# TODO: I believe, due to the BERTPreProcessor, the MBART doesn't seem to recognizes it.\r\ntokenizer = MBartTokenizerFast(vocab_file='/content/spm.model')\r\n\r\nmodel = AutoModelForMaskedLM.from_config(config=config)\r\n```\r\nThis seems to work, and it can read the model. However, now when I try to train the model, it gives an error which I haven't even seen in this context. First, the `Trainer` and `TrainerArguments` code :\r\n\r\n```\r\ntraining_args = TrainingArguments(\r\n output_dir='/content/assamese_BART',\r\n num_train_epochs=1,\r\n per_device_train_batch_size=32,\r\n save_steps=5000,\r\n save_total_limit=1,\r\n prediction_loss_only=True\r\n)\r\n\r\n# Set the trainer. \r\ntrainer = Trainer(\r\n model=model,\r\n args=training_args,\r\n data_collator=data_collator,\r\n train_dataset=dataset\r\n)\r\n``` \r\nAnd on running `trainer.train()`, I come across this scary error : \r\n\r\n```\r\n/usr/local/lib/python3.10/dist-packages/transformers/models/mbart/modeling_mbart.py in shift_tokens_right(input_ids, pad_token_id)\r\n 73 \r\n 74 index_of_eos = (prev_output_tokens.ne(pad_token_id).sum(dim=1) - 1).unsqueeze(-1)\r\n---> 75 decoder_start_tokens = prev_output_tokens.gather(1, index_of_eos).squeeze()\r\n 76 prev_output_tokens[:, 1:] = prev_output_tokens[:, :-1].clone()\r\n 77 prev_output_tokens[:, 0] = decoder_start_tokens\r\n\r\nRuntimeError: index -1 is out of bounds for dimension 1 with size 16\r\n```\r\n\r\nI can't seem to find anything on this anywhere. Can I get some help in this regard? This is when I didn't use GPU from colab environment, but when I used GPU the error was : \r\n\r\n```\r\nRuntimeError: CUDA error: device-side assert triggered CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect\r\n```",
"This just means that the tokens outputed by the tokenizer are not in the correct format. I recommend using `tokenizers` and the `tokenizers` from tranformers, otherwise your are mixing two libraries, which are not made to work together.",
"Ok. Will do 👍🏻 ",
"> No worries. Could you just give me a full reproducing script of what your are doing with MBart50,that way I can check if we indeed have a problem with the blogpost and it might have to be update!\r\n\r\nI'll link the Colab notebook here for your reference, if there is anything missing just tell : \r\nhttps://colab.research.google.com/drive/1cNfQn9nPITpwCS4i8P8loeN2SjLON1Tx?usp=sharing",
"Thanks for the collab. I can't help you if you keep using the spm model raw. I can help you regarding the original issue, that is if the tokenizer you are using is from `tokenizers` or `transformers`! 😉 ",
"Yeah, my bad. I accidentally gave gave you the different version of the same repo. Just wait a minute ...",
"> Thanks for the collab. I can't help you if you keep using the spm model raw. I can help you regarding the original issue, that is if the tokenizer you are using is from `tokenizers` or `transformers`! wink\r\n\r\nhttps://colab.research.google.com/drive/1cNfQn9nPITpwCS4i8P8loeN2SjLON1Tx?usp=sharing\r\n\r\nCheck(I think it's the same link, but the changes are made)",
"Notebook doesn't work for me! Could you check? (ALso I think you gave me writing rights 😅 so I might have changed things, dont't give them to me I'll copy your notebook)",
"Sorry for the late reply...\r\n\r\n> Notebook doesn't work for me! Could you check? (ALso I think you gave me writing rights sweat_smile so I might have changed things, don't give them to me I'll copy your notebook)\r\n\r\nWhat exactly doesn't work? Can you access the data ?\r\n\r\nChanged the settings to view only.\r\nhttps://colab.research.google.com/drive/1cNfQn9nPITpwCS4i8P8loeN2SjLON1Tx?usp=sharing\r\n\r\nI think it should work just fine...",
"Again, the notebook does not work out of the box. Try to open a private tab and run it without the cached inputs. Anyway after debugging, it's just that you are trying to convert a BertTokenizer (which is a `simple` basic tokenizer) to a MBart50 tokenizer, which is based on `sentencepiece`. This is impossible and this is also the reason why it is failing : the `.json` file is ignored because it is looking for a `.bpe` file. ",
"Ok , I think I understand the problem. Thanks a lot for being patient with me, as this was my first issue. I tried to do my best here."
] | 1,684 | 1,685 | 1,685 |
NONE
| null |
I am trying to pre-train an MBART-50 model from scratch (using the IndicCorpus dataset), similar to the one given in this blog post (https://huggingface.co/blog/how-to-train). There is no issue when I do this for RoBERTa and BART, but for the MBART and MBART-50 architectures the model loading fails, and I get the following error message:
`OSError: Can't load tokenizer for '/path/to/mbart-model. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure './assamese_MBART' is the correct path to a directory containing all relevant files for a MBartTokenizerFast tokenizer.`
I wanted to ask whether this is an issue with the way I have implemented the tokenizer (similar to the one given in the blog post, without adding language codes and special tokens), with the way the data is laid out (line by line in the language), or with something else altogether.
If any other clarification is needed from my side, I'm more than happy to provide it. @patrickvonplaten
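For reference, here is a hedged sketch of one way to build a local directory that `MBart50TokenizerFast` can load: since MBART tokenizers are SentencePiece-based (unlike the byte-level BPE tokenizer from the blog post), train a SentencePiece model and save it through the tokenizer class so that `from_pretrained` finds every file it expects. The paths, vocabulary size and corpus file are placeholders, and handling of language-code tokens is not covered here.
```python
import sentencepiece as spm
from transformers import MBart50TokenizerFast

# Train a unigram SentencePiece model on the raw corpus (one sentence per line).
spm.SentencePieceTrainer.Train(
    input="corpus.txt",
    model_prefix="assamese_spm",
    vocab_size=32_000,
    model_type="unigram",
)

# Wrap the .model file with the MBart-50 fast tokenizer and save everything it needs.
tokenizer = MBart50TokenizerFast(vocab_file="assamese_spm.model")
tokenizer.save_pretrained("./assamese_MBART")

# Now loading from the directory should find all the expected files.
reloaded = MBart50TokenizerFast.from_pretrained("./assamese_MBART")
```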
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23731/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23731/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23730
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23730/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23730/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23730/events
|
https://github.com/huggingface/transformers/issues/23730
| 1,723,360,878 |
I_kwDOCUB6oc5muGZu
| 23,730 |
LongformerEmbeddings "position_embedding_type" parameter are not used.
|
{
"login": "doveppp",
"id": 54977106,
"node_id": "MDQ6VXNlcjU0OTc3MTA2",
"avatar_url": "https://avatars.githubusercontent.com/u/54977106?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/doveppp",
"html_url": "https://github.com/doveppp",
"followers_url": "https://api.github.com/users/doveppp/followers",
"following_url": "https://api.github.com/users/doveppp/following{/other_user}",
"gists_url": "https://api.github.com/users/doveppp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/doveppp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/doveppp/subscriptions",
"organizations_url": "https://api.github.com/users/doveppp/orgs",
"repos_url": "https://api.github.com/users/doveppp/repos",
"events_url": "https://api.github.com/users/doveppp/events{/privacy}",
"received_events_url": "https://api.github.com/users/doveppp/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false |
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
] |
[
"Hey! Thanks for reporting this. I'm opening a PR to remove the unused parts, however I don't think it has to support the `position_embedding_type` as the model did not use it. "
] | 1,684 | 1,685 | 1,685 |
NONE
| null |
### System Info
- `transformers` version: 4.28.1
- Platform: Linux-5.4.0-144-generic-x86_64-with-glibc2.28
- Python version: 3.9.7
- Huggingface_hub version: 0.13.4
- Safetensors version: not installed
- PyTorch version (GPU?): 2.0.0+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@sgugger
@ArthurZucker
@younesbelkada
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
In transformers.models.longformer.modeling_longformer.py file:
```python
class LongformerEmbeddings(nn.Module):
"""
Same as BertEmbeddings with a tiny tweak for positional embeddings indexing.
"""
def __init__(self, config):
super().__init__()
self.word_embeddings = nn.Embedding(config.vocab_size, config.hidden_size, padding_idx=config.pad_token_id)
self.position_embeddings = nn.Embedding(config.max_position_embeddings, config.hidden_size)
self.token_type_embeddings = nn.Embedding(config.type_vocab_size, config.hidden_size)
# self.LayerNorm is not snake-cased to stick with TensorFlow model variable name and be able to load
# any TensorFlow checkpoint file
self.LayerNorm = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps)
self.dropout = nn.Dropout(config.hidden_dropout_prob)
self.position_embedding_type = getattr(config, "position_embedding_type", "absolute")
self.padding_idx = config.pad_token_id
self.position_embeddings = nn.Embedding(
config.max_position_embeddings, config.hidden_size, padding_idx=self.padding_idx
)
```
"position_embedding_type" are not work. By the way, self.position_embeddings has redundant initialization
### Expected behavior
Add support for “position_embedding_type”
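A hedged sketch of what removing the duplicate initialisation could look like (the eventual fix in the library may differ, e.g. it may simply drop the unused attribute rather than wiring `position_embedding_type` through):
```python
from torch import nn


class LongformerEmbeddings(nn.Module):
    def __init__(self, config):
        super().__init__()
        self.word_embeddings = nn.Embedding(config.vocab_size, config.hidden_size, padding_idx=config.pad_token_id)
        self.token_type_embeddings = nn.Embedding(config.type_vocab_size, config.hidden_size)
        self.LayerNorm = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps)
        self.dropout = nn.Dropout(config.hidden_dropout_prob)
        self.padding_idx = config.pad_token_id
        # Single initialisation, keeping the padding index from the original second definition.
        self.position_embeddings = nn.Embedding(
            config.max_position_embeddings, config.hidden_size, padding_idx=self.padding_idx
        )
```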
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23730/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23730/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23729
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23729/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23729/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23729/events
|
https://github.com/huggingface/transformers/issues/23729
| 1,723,299,958 |
I_kwDOCUB6oc5mt3h2
| 23,729 |
[docs] duplicate sentences in `document_question_answering.mdx`
|
{
"login": "jungnerd",
"id": 46880056,
"node_id": "MDQ6VXNlcjQ2ODgwMDU2",
"avatar_url": "https://avatars.githubusercontent.com/u/46880056?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jungnerd",
"html_url": "https://github.com/jungnerd",
"followers_url": "https://api.github.com/users/jungnerd/followers",
"following_url": "https://api.github.com/users/jungnerd/following{/other_user}",
"gists_url": "https://api.github.com/users/jungnerd/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jungnerd/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jungnerd/subscriptions",
"organizations_url": "https://api.github.com/users/jungnerd/orgs",
"repos_url": "https://api.github.com/users/jungnerd/repos",
"events_url": "https://api.github.com/users/jungnerd/events{/privacy}",
"received_events_url": "https://api.github.com/users/jungnerd/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"I'd prefer candidate 1 personally. Thanks for reporting and looking forward to your PR with a fix!",
"Thanks for your feedback and I agree that candidate 1 is better. I opened PR #23735."
] | 1,684 | 1,684 | 1,684 |
CONTRIBUTOR
| null |
### Description
There are duplicate sentences in `document_question_answering.mdx` from line number 40 to 45.
### Document / Language
`document_question_answering.mdx` / [en](https://huggingface.co/docs/transformers/tasks/document_question_answering)
### Suggestion

should be either:
<table>
<tr>
<td> candidate 1 </td> <td> candidate 2 </td>
</tr>
<tr>
<td>
(...), to predict the positions of the start and end tokens of the answer. (...)
</td>
<td>
(...), in order to predict which token is at the start of the answer and which token is at the end of the answer. (...)
</td>
</tr>
</table>
Please let me know which of the two candidates you would prefer.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23729/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23729/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23728
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23728/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23728/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23728/events
|
https://github.com/huggingface/transformers/issues/23728
| 1,723,275,083 |
I_kwDOCUB6oc5mtxdL
| 23,728 |
RuntimeError: Expected to mark a variable ready only once.
|
{
"login": "lucasjinreal",
"id": 21303438,
"node_id": "MDQ6VXNlcjIxMzAzNDM4",
"avatar_url": "https://avatars.githubusercontent.com/u/21303438?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lucasjinreal",
"html_url": "https://github.com/lucasjinreal",
"followers_url": "https://api.github.com/users/lucasjinreal/followers",
"following_url": "https://api.github.com/users/lucasjinreal/following{/other_user}",
"gists_url": "https://api.github.com/users/lucasjinreal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lucasjinreal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lucasjinreal/subscriptions",
"organizations_url": "https://api.github.com/users/lucasjinreal/orgs",
"repos_url": "https://api.github.com/users/lucasjinreal/repos",
"events_url": "https://api.github.com/users/lucasjinreal/events{/privacy}",
"received_events_url": "https://api.github.com/users/lucasjinreal/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,684 | 1,690 | 1,690 |
NONE
| null |
```
RuntimeError: Expected to mark a variable ready only once. This error is caused by one of the following reasons: 1) Use of a module parameter outside the `forward` function. Please make sure model parameters are not shared across
multiple concurrent forward-backward passes. or try to use _set_static_graph() as a workaround if this module graph does not change during training loop.2) Reused parameters in multiple reentrant backward passes. For example, if
you use multiple `checkpoint` functions to wrap the same part of your model, it would result in the same set of parameters been used by different reentrant backward passes multiple times, and hence marking a variable ready
multiple times. DDP does not support such use cases in default. You can try to use _set_static_graph() as a workaround if your module graph does not change over iterations.
Parameter at index 129 with name base_model.model.model.layers.31.self_attn.v_proj.lora_B.default.weight has been marked as ready twice. This means that multiple autograd engine hooks have fired for this particular parameter
```
After upgrading transformers, training LoRA models produces the above error.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23728/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23728/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23727
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23727/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23727/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23727/events
|
https://github.com/huggingface/transformers/issues/23727
| 1,723,235,034 |
I_kwDOCUB6oc5mtnra
| 23,727 |
Cannot use LION optimizer
|
{
"login": "lucasjinreal",
"id": 21303438,
"node_id": "MDQ6VXNlcjIxMzAzNDM4",
"avatar_url": "https://avatars.githubusercontent.com/u/21303438?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lucasjinreal",
"html_url": "https://github.com/lucasjinreal",
"followers_url": "https://api.github.com/users/lucasjinreal/followers",
"following_url": "https://api.github.com/users/lucasjinreal/following{/other_user}",
"gists_url": "https://api.github.com/users/lucasjinreal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lucasjinreal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lucasjinreal/subscriptions",
"organizations_url": "https://api.github.com/users/lucasjinreal/orgs",
"repos_url": "https://api.github.com/users/lucasjinreal/repos",
"events_url": "https://api.github.com/users/lucasjinreal/events{/privacy}",
"received_events_url": "https://api.github.com/users/lucasjinreal/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"We can't do anything to help without knowing the code that triggered the error.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,684 | 1,688 | 1,688 |
NONE
| null |
### Feature request
Error:
```
RuntimeError: Expected to mark a variable ready only once. This error is caused by one of the following reasons: 1) Use of a module parameter outside the `forward` function. Please make sure model parameters are not shared across
multiple concurrent forward-backward passes. or try to use _set_static_graph() as a workaround if this module graph does not change during training loop.2) Reused parameters in multiple reentrant backward passes. For example, if
you use multiple `checkpoint` functions to wrap the same part of your model, it would result in the same set of parameters been used by different reentrant backward passes multiple times, and hence marking a variable ready
multiple times. DDP does not support such use cases in default. You can try to use _set_static_graph() as a workaround if your module graph does not change over iterations.
Parameter at index 129 has been marked as ready twice. This means that multiple autograd engine hooks have fired for this particular parameter during this iteration. You can set the environment variable TORCH_DISTRIBUTED_DEBUG to
either INFO or DETAIL to print parameter names for further debugging.
0%|
```
Is there any example code for using the LION optimizer?
I am using the code from here https://github.com/lucidrains/lion-pytorch/blob/main/lion_pytorch/lion_pytorch.py but got the error above.
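For context, a hedged sketch of one way to hand a custom optimizer such as Lion to `Trainer` via the `optimizers` argument; it only shows the wiring and does not address the DDP error above, and the tiny model/dataset below are placeholders:
```python
import torch
from torch.utils.data import Dataset
from lion_pytorch import Lion
from transformers import AutoModelForCausalLM, Trainer, TrainingArguments


class DummyDataset(Dataset):
    """A few random sequences so trainer.train() has something to iterate over."""

    def __len__(self):
        return 8

    def __getitem__(self, idx):
        ids = torch.randint(0, 50, (16,))
        return {"input_ids": ids, "labels": ids.clone()}


model = AutoModelForCausalLM.from_pretrained("gpt2")
optimizer = Lion(model.parameters(), lr=1e-4, weight_decay=1e-2)

args = TrainingArguments(output_dir="lion-test", per_device_train_batch_size=2, num_train_epochs=1)
trainer = Trainer(
    model=model,
    args=args,
    train_dataset=DummyDataset(),
    optimizers=(optimizer, None),  # None lets Trainer build its default LR scheduler
)
trainer.train()
```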
### Motivation
LION needs to be supported since it converges faster.
### Your contribution
no
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23727/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23727/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23726
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23726/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23726/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23726/events
|
https://github.com/huggingface/transformers/issues/23726
| 1,723,114,465 |
I_kwDOCUB6oc5mtKPh
| 23,726 |
ValueError: Attempting to unscale FP16 gradients.
|
{
"login": "lucasjinreal",
"id": 21303438,
"node_id": "MDQ6VXNlcjIxMzAzNDM4",
"avatar_url": "https://avatars.githubusercontent.com/u/21303438?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lucasjinreal",
"html_url": "https://github.com/lucasjinreal",
"followers_url": "https://api.github.com/users/lucasjinreal/followers",
"following_url": "https://api.github.com/users/lucasjinreal/following{/other_user}",
"gists_url": "https://api.github.com/users/lucasjinreal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lucasjinreal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lucasjinreal/subscriptions",
"organizations_url": "https://api.github.com/users/lucasjinreal/orgs",
"repos_url": "https://api.github.com/users/lucasjinreal/repos",
"events_url": "https://api.github.com/users/lucasjinreal/events{/privacy}",
"received_events_url": "https://api.github.com/users/lucasjinreal/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[] | 1,684 | 1,684 | 1,684 |
NONE
| null |
### System Info
V100, torch 2.0
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
When I train an LLM with this
```
# data_collator = DataCollatorForSupervisedDataset(tokenizer=tokenizer)
data_collator = DataCollatorForSeq2Seq(
tokenizer, pad_to_multiple_of=8, return_tensors="pt", padding=True
)
```
it throws the above error, but when I use the self-defined one below, it works OK. Why?
```
@dataclass
class DataCollatorForSupervisedDataset(object):
"""Collate examples for supervised fine-tuning."""
tokenizer: transformers.PreTrainedTokenizer
def __call__(self, instances: Sequence[Dict]) -> Dict[str, torch.Tensor]:
input_ids, labels = tuple(
[instance[key] for instance in instances] for key in ("input_ids", "labels")
)
input_ids = torch.nn.utils.rnn.pad_sequence(
input_ids, batch_first=True, padding_value=self.tokenizer.pad_token_id
)
labels = torch.nn.utils.rnn.pad_sequence(
labels, batch_first=True, padding_value=-100
)
return dict(
input_ids=input_ids,
labels=labels,
attention_mask=input_ids.ne(self.tokenizer.pad_token_id),
)
```
The only difference here is the data collator.
### Expected behavior
It should be trainable with fp16.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23726/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23726/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23725
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23725/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23725/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23725/events
|
https://github.com/huggingface/transformers/pull/23725
| 1,722,876,069 |
PR_kwDOCUB6oc5RMGUZ
| 23,725 |
Fix the regex in `get_imports` to support multiline try blocks and excepts with specific exception types
|
{
"login": "dakinggg",
"id": 43149077,
"node_id": "MDQ6VXNlcjQzMTQ5MDc3",
"avatar_url": "https://avatars.githubusercontent.com/u/43149077?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dakinggg",
"html_url": "https://github.com/dakinggg",
"followers_url": "https://api.github.com/users/dakinggg/followers",
"following_url": "https://api.github.com/users/dakinggg/following{/other_user}",
"gists_url": "https://api.github.com/users/dakinggg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dakinggg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dakinggg/subscriptions",
"organizations_url": "https://api.github.com/users/dakinggg/orgs",
"repos_url": "https://api.github.com/users/dakinggg/repos",
"events_url": "https://api.github.com/users/dakinggg/events{/privacy}",
"received_events_url": "https://api.github.com/users/dakinggg/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Done!"
] | 1,684 | 1,684 | 1,684 |
CONTRIBUTOR
| null |
# What does this PR do?
Fixes the regex in `get_imports` to support multiline try blocks and excepts with specific exception types, by
1. adding `re.DOTALL` so that new lines are matched in the try block
2. adding `.*?` after the except so that it will match things like `except ImportError`
Fixes #23667
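For illustration, here is a hedged, self-contained sketch (not the exact library regex) of the behaviour the two changes above give `get_imports`: try/except import guards are stripped before imports are collected, even when the try body spans several lines or the except names a specific exception type.
```python
import re

source = '''
try:
    import flash_attn
    from flash_attn import flash_attn_func
except ImportError:
    flash_attn_func = None

import torch
'''

# re.DOTALL lets ".*?" cross line breaks inside the try body; the trailing ".*?"
# after "except" tolerates "except ImportError" and friends.
stripped = re.sub(r"\s*try\s*:.*?except.*?:", "", source, flags=re.DOTALL)

imports = re.findall(r"^\s*import\s+(\S+)", stripped, flags=re.MULTILINE)
print(imports)  # ['torch'] -- the guarded optional dependency is no longer counted
```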
## Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23725/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23725/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23725",
"html_url": "https://github.com/huggingface/transformers/pull/23725",
"diff_url": "https://github.com/huggingface/transformers/pull/23725.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23725.patch",
"merged_at": 1684957219000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23724
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23724/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23724/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23724/events
|
https://github.com/huggingface/transformers/pull/23724
| 1,722,866,578 |
PR_kwDOCUB6oc5RMET8
| 23,724 |
fix: Whisper generate, move text_prompt_ids trim up for max_new_tokens calculation
|
{
"login": "connor-henderson",
"id": 78612354,
"node_id": "MDQ6VXNlcjc4NjEyMzU0",
"avatar_url": "https://avatars.githubusercontent.com/u/78612354?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/connor-henderson",
"html_url": "https://github.com/connor-henderson",
"followers_url": "https://api.github.com/users/connor-henderson/followers",
"following_url": "https://api.github.com/users/connor-henderson/following{/other_user}",
"gists_url": "https://api.github.com/users/connor-henderson/gists{/gist_id}",
"starred_url": "https://api.github.com/users/connor-henderson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/connor-henderson/subscriptions",
"organizations_url": "https://api.github.com/users/connor-henderson/orgs",
"repos_url": "https://api.github.com/users/connor-henderson/repos",
"events_url": "https://api.github.com/users/connor-henderson/events{/privacy}",
"received_events_url": "https://api.github.com/users/connor-henderson/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks!"
] | 1,684 | 1,684 | 1,684 |
CONTRIBUTOR
| null |
# What does this PR do?
Fixes #23723
Moves trimming the length of the text_prompt_ids further up so it is performed before the calculation determining the new `max_new_tokens` [here](https://github.com/huggingface/transformers/blob/003a0cf8cc4d78e47ef9debfb1e93a5c1197ca9a/src/transformers/models/whisper/modeling_whisper.py#L1645-L1648). As mentioned in the issue, this previously led to two issues with prompting: under certain circumstances `generate` could throw a nebulous error, and `max_new_tokens` was not properly enforced when a prompt longer than the context + `max_new_tokens` was provided.
Happy to add a test for either bug if wanted.
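For illustration, a small sketch of the intended ordering (hypothetical helper and numbers, not the actual `modeling_whisper.py` code):
```python
# Hypothetical helper, for illustration only: trim the prompt to the allowed
# context *before* deriving the total generation budget from its length.
def plan_generation(prompt_len, max_target_positions=448, max_new_tokens=10):
    allowed_prompt_len = max_target_positions // 2 - 1  # context reserved for the prompt
    trimmed_prompt_len = min(prompt_len, allowed_prompt_len)
    total_len = trimmed_prompt_len + max_new_tokens  # now respects max_new_tokens
    return trimmed_prompt_len, total_len

print(plan_generation(prompt_len=600))  # the oversized prompt is clipped first
```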
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
@hollance @sanchit-gandhi
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23724/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23724/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23724",
"html_url": "https://github.com/huggingface/transformers/pull/23724",
"diff_url": "https://github.com/huggingface/transformers/pull/23724.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23724.patch",
"merged_at": 1684942461000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23723
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23723/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23723/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23723/events
|
https://github.com/huggingface/transformers/issues/23723
| 1,722,850,708 |
I_kwDOCUB6oc5msJ2U
| 23,723 |
Two bugs in whisper generate with `prompt_ids` regarding generation length
|
{
"login": "connor-henderson",
"id": 78612354,
"node_id": "MDQ6VXNlcjc4NjEyMzU0",
"avatar_url": "https://avatars.githubusercontent.com/u/78612354?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/connor-henderson",
"html_url": "https://github.com/connor-henderson",
"followers_url": "https://api.github.com/users/connor-henderson/followers",
"following_url": "https://api.github.com/users/connor-henderson/following{/other_user}",
"gists_url": "https://api.github.com/users/connor-henderson/gists{/gist_id}",
"starred_url": "https://api.github.com/users/connor-henderson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/connor-henderson/subscriptions",
"organizations_url": "https://api.github.com/users/connor-henderson/orgs",
"repos_url": "https://api.github.com/users/connor-henderson/repos",
"events_url": "https://api.github.com/users/connor-henderson/events{/privacy}",
"received_events_url": "https://api.github.com/users/connor-henderson/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Thanks for the detailed write-up and reproducible code snippet @connor-henderson! Cool that you've found a fix to both already 🙌 By the sounds of it, I agree that the PR should fix both issues by bumping the token slicing logic to before the change of max new tokens"
] | 1,684 | 1,684 | 1,684 |
CONTRIBUTOR
| null |
### System Info
- `transformers` version: 4.30.0.dev0
- Platform: macOS-13.0-arm64-arm-64bit
- Python version: 3.9.16
- Huggingface_hub version: 0.12.0
- Safetensors version: 0.2.8
- PyTorch version (GPU?): 1.13.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): 0.5.3 (cpu)
- Jax version: 0.3.6
- JaxLib version: 0.3.5
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: no
### Who can help?
@sanchit-gandhi
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```py
# -*- coding: utf-8 -*-
# the above line is for the `prompt_for_error`
from datasets import load_dataset
from transformers import WhisperForConditionalGeneration, WhisperProcessor
processor = WhisperProcessor.from_pretrained("openai/whisper-tiny", language="English", task="transcribe")
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny")
it = iter(load_dataset("librispeech_asr", "all", split="test.other", streaming=True))
while it:
    _ = [next(it) for x in range(3)]
    clip = next(it)
    if clip["id"] == '7902-96592-0026':
        break
input_features = processor(clip['audio']['array'], sampling_rate=clip['audio']['sampling_rate'], return_tensors="pt").input_features
# Example of it not limiting generation to max_new_tokens when prompt_ids length too large
long_prompt = 5 * "Bubalina is a subtribe of wild cattle that includes the various species of true buffalo. Species include the African buffalo, the anoas, and the wild water buffalo (including the domesticated variant water buffalo. Buffaloes can be found naturally in sub-Saharan Africa, South Asia and Southeast Asia, and domestic and feral populations have been introduced to Europe, the Americas, and Australia. In addition to the living species, bubalinans have an extensive fossil record where remains have been found in much of Afro-Eurasia."
prompt_ids = processor.get_prompt_ids(long_prompt)
pred_ids = model.generate(input_features, language="english", task="transcribe", max_new_tokens=10, prompt_ids=prompt_ids)
decoded = processor.decode(pred_ids[0], skip_special_tokens=True)
new_tokens = processor.tokenizer(decoded, add_special_tokens=False)["input_ids"]
print(len(new_tokens)) # should be <=10, is actually 25
# Example of erroring
prompt_for_error = "some text rich in domain specific vocabulary lives here - I wish you would believe me that I am in as great trouble about it as you are - then as archiestered in the dark literally a gas for the astonishment here at the faint and wrestling once more and again all with silent - I'll soon show them that I am not going to be played with - to do this he must scheme lie head till morning then make for the nearest point it's signal for help I also boats crew were already searching for him how to escape - no that was too bad you cannot do that - but there was no chance for his body there the head would not go first - shall I come to father? no - what a queer dream he thought to himself - and I am hungry too 今晚會是我 再回家吧 - oh those bars he meant 雷 exclaimed and he was advancing towards them, and just as he drew near there was a wrestling noise nd to the window a couple of hands seized the bars there was a scratching of 布側 against stonework and ram スペース 敬射的 金融 敬射的 金融 敬射的 金融 敬射的 金融 敬射的 金融 敬射的 金融 � - I saw you last night and wondered whose boy he was - I think I don't know you Mr. Orphazard "
prompt_ids = processor.get_prompt_ids(prompt_for_error)
pred_ids = model.generate(input_features, language="english", task="transcribe", max_new_tokens=128, prompt_ids=prompt_ids)
```
### Expected behavior
Two issues arising when using whisper generate with `prompt_ids`:
1. `max_new_tokens` doesn't properly limit the generation of new tokens when the length of the provided `prompt_ids` is too large
2. An unclear error is thrown with certain long prompt + audio combinations, less clear on this one right now (thank you @dgram0 for raising this in https://github.com/huggingface/transformers/pull/22496#issuecomment-1559317037)
I believe they have the same root cause: if `prompt_ids` are provided, `max_new_tokens` is recalculated using the length of the `text_prompt_ids`, but before they are trimmed to fit within the context. I'm not certain yet how 2. is caused / fixed by this, but I think it's because with a confusing prompt + audio combo the model doesn't know when to stop and needs `max_new_tokens` to be set properly, otherwise it will hit an index error. I can confirm that fixing the `max_new_tokens` recalculation fixes both issues in the example script.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23723/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23723/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23712
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23712/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23712/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23712/events
|
https://github.com/huggingface/transformers/issues/23712
| 1,722,552,905 |
I_kwDOCUB6oc5mrBJJ
| 23,712 |
Trainer.repo.push_to_hub returns None, causing raised exception
|
{
"login": "RobertBaruch",
"id": 1783950,
"node_id": "MDQ6VXNlcjE3ODM5NTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/1783950?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RobertBaruch",
"html_url": "https://github.com/RobertBaruch",
"followers_url": "https://api.github.com/users/RobertBaruch/followers",
"following_url": "https://api.github.com/users/RobertBaruch/following{/other_user}",
"gists_url": "https://api.github.com/users/RobertBaruch/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RobertBaruch/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RobertBaruch/subscriptions",
"organizations_url": "https://api.github.com/users/RobertBaruch/orgs",
"repos_url": "https://api.github.com/users/RobertBaruch/repos",
"events_url": "https://api.github.com/users/RobertBaruch/events{/privacy}",
"received_events_url": "https://api.github.com/users/RobertBaruch/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @Wauplin can we have a consistent return type? That would solve this issue.",
"Hmm, what do you mean by _a consistent return type_ ? If nothing is pushed, we can't really return a CommandInProgress object. In general I would prefer not to touch the return type of a method that seems to have been around for 2 years and that might be integrated in a lot of scripts already.\r\n\r\n(+ I expect the usage of `Repository` to slowly disappear once we switch to `upload_folder`)",
"I mean always a tuple so we don't have to make weird workarounds. But I will do the weird workaround in Transformers to fix this then."
] | 1,684 | 1,685 | 1,685 |
CONTRIBUTOR
| null |
### System Info
- `transformers` version: 4.28.1
- Platform: Windows-10-10.0.22621-SP0
- Python version: 3.11.2
- Huggingface_hub version: 0.14.1
- Safetensors version: not installed
- PyTorch version (GPU?): 2.0.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: NO
- Using distributed or parallel set-up in script?: NO
### Who can help?
@sgugger
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
For some root cause that I'm not certain of, `Trainer.repo.push_to_hub` can return `None`, which causes `Trainer._push_from_checkpoint` to raise an exception (as it expects a tuple to be returned).
```
Traceback (most recent call last):
File "F:\eo-reco\run_speech_recognition_ctc.py", line 810, in <module>
main()
File "F:\eo-reco\run_speech_recognition_ctc.py", line 756, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\eo-reco\.env\Lib\site-packages\transformers\trainer.py", line 1664, in train
return inner_training_loop(
^^^^^^^^^^^^^^^^^^^^
File "F:\eo-reco\.env\Lib\site-packages\transformers\trainer.py", line 2019, in _inner_training_loop
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)
File "F:\eo-reco\.env\Lib\site-packages\transformers\trainer.py", line 2308, in _maybe_log_save_evaluate
self._save_checkpoint(model, trial, metrics=metrics)
File "F:\eo-reco\.env\Lib\site-packages\transformers\trainer.py", line 2462, in _save_checkpoint
self._push_from_checkpoint(output_dir)
File "F:\eo-reco\.env\Lib\site-packages\transformers\trainer.py", line 3649, in _push_from_checkpoint
_, self.push_in_progress = self.repo.push_to_hub(
^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: cannot unpack non-iterable NoneType object
```
(Note: line numbers in `run_speech_recognition_ctc.py` will not be accurate, as I've copied it and modified it)
`repo.push_to_hub` can return `None` if the repo is clean, which will cause the issue. However, that might not have happened in my case, since there was no corresponding log message about it (assuming log messages are emitted immediately and not buffered).
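A minimal sketch of a defensive workaround, assuming a hypothetical wrapper around the `Repository` object (not the fix that eventually landed in `Trainer`):
```python
def safe_push(repo, commit_message):
    # Repository.push_to_hub returns None when there is nothing to push,
    # so guard the tuple unpacking instead of doing it unconditionally.
    result = repo.push_to_hub(commit_message=commit_message, blocking=False, auto_lfs_prune=True)
    if result is None:
        return None
    _, push_in_progress = result
    return push_in_progress
```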
### Expected behavior
No exception, maybe just a warning.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23712/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23712/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23701
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23701/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23701/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23701/events
|
https://github.com/huggingface/transformers/pull/23701
| 1,722,415,628 |
PR_kwDOCUB6oc5RKil9
| 23,701 |
Bug fix - flip_channel_order for channels first images
|
{
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,684 | 1,685 | 1,685 |
COLLABORATOR
| null |
# What does this PR do?
The `flip_channel_order` function for flipping the channel order of images from BGR -> RGB had a bug when the input image was in channels-first order.
Previously, the rows of pixels would be flipped rather than the channel order, i.e. `image = image[:, ::-1, ...]` instead of `image = image[::-1, ...]`.
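A small illustrative snippet of the difference (standalone NumPy, not the library code):
```python
import numpy as np

image = np.arange(3 * 2 * 4).reshape(3, 2, 4)  # channels-first: (channels, height, width)

buggy = image[:, ::-1, ...]  # flips the rows (height axis) - the old behaviour
fixed = image[::-1, ...]     # flips the channel order - the intended behaviour

print(np.array_equal(buggy, fixed))  # False: the two operations differ
```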
For the current image processors and pipelines, this path would only be triggered if `do_resize` was overridden and set to `False`. The method is used in 3 models' image processing files:
* LayoutLMV. If `do_resize=False`, no batches could be prepared, as the images would be of different sizes and there is no additional cropping or padding.
* LayoutLMV3 (just imported - wasn't used by the image processor)
* MobileViT
This PR:
* Moves the common logic into the image transforms library
* Resolves the bug
* Adds tests
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23701/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23701/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23701",
"html_url": "https://github.com/huggingface/transformers/pull/23701",
"diff_url": "https://github.com/huggingface/transformers/pull/23701.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23701.patch",
"merged_at": 1685549547000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23700
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23700/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23700/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23700/events
|
https://github.com/huggingface/transformers/issues/23700
| 1,722,406,546 |
I_kwDOCUB6oc5mqdaS
| 23,700 |
Help with TrOCR training for Spanish
|
{
"login": "rubenaros",
"id": 62561189,
"node_id": "MDQ6VXNlcjYyNTYxMTg5",
"avatar_url": "https://avatars.githubusercontent.com/u/62561189?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rubenaros",
"html_url": "https://github.com/rubenaros",
"followers_url": "https://api.github.com/users/rubenaros/followers",
"following_url": "https://api.github.com/users/rubenaros/following{/other_user}",
"gists_url": "https://api.github.com/users/rubenaros/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rubenaros/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rubenaros/subscriptions",
"organizations_url": "https://api.github.com/users/rubenaros/orgs",
"repos_url": "https://api.github.com/users/rubenaros/repos",
"events_url": "https://api.github.com/users/rubenaros/events{/privacy}",
"received_events_url": "https://api.github.com/users/rubenaros/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Working on that as well, I'm pretty sure the problem is the lenght of your dataset, if i'm not wrong, the tokeniser takes parts of the words and relates it to fragments of the image, so you need thousands of pics with different words so the model can get enought embbedings. Deduced that since the results with my train dataset were perfect, but for the eval dataset was terrible. You can also notice this since if you use the pretrained model without fine tuning and with an image with spanish text, the output will be mostly words that mostly sounds like english. Only way to fix this is to make a dataset as big as IAM"
] | 1,684 | 1,692 | 1,688 |
NONE
| null |
### System Info
I'm using Torch 2.0 and Transformers 4.28.0 running in a Google Colab
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Hi
Thanks @NielsRogge for your incredible work in Transformers.
I'm working on developing a handwritten text recognition system for Spanish, so I chose TrOCR for the transcription from handwriting to text in Spanish.
I think I followed your notebook examples for fine-tuning and inference with TrOCR, and many posts from people with the same problem of needing to train TrOCR in a different language (Spanish in my case).
The code to create the dataset is:
```
class SpanishDataset(Dataset):
    def __init__(self, root_dir, df, processor, max_target_length=128):
        self.root_dir = root_dir
        self.df = df
        self.processor = processor
        self.max_target_length = max_target_length

    def __len__(self):
        return len(self.df)

    def __getitem__(self, idx):
        # get file name + text
        file_name = self.df['Path'][idx]
        text = self.df['Text'][idx]
        # prepare image (i.e. resize + normalize)
        image = Image.open(self.root_dir + file_name).convert("RGB")
        pixel_values = self.processor(image, return_tensors="pt").pixel_values
        # add labels (input_ids) by encoding the text
        labels = self.processor.tokenizer(text, padding="max_length", max_length=self.max_target_length).input_ids
        # important: make sure that PAD tokens are ignored by the loss function
        labels = [label if label != self.processor.tokenizer.pad_token_id else -100 for label in labels]
        encoding = {"pixel_values": pixel_values.squeeze(), "labels": torch.tensor(labels)}
        return encoding
```
and
```
feature_extractor = AutoFeatureExtractor.from_pretrained("google/vit-base-patch16-224-in21k")
decoder_tokenizer = AutoTokenizer.from_pretrained("bertin-project/bertin-roberta-base-spanish")
processor = TrOCRProcessor(feature_extractor=feature_extractor, tokenizer=decoder_tokenizer)
processor.save_pretrained('./processor')
processor = TrOCRProcessor.from_pretrained("./processor")
train_dataset = SpanishDataset(root_dir='/ShardDrives/MyDrive',
                               df=train_df,
                               processor=processor)
eval_dataset = SpanishDataset(root_dir='/ShardDrives/MyDrive',
                              df=test_df,
                              processor=processor)
```
I use this encoder and decoder:
```
model = VisionEncoderDecoderModel.from_encoder_decoder_pretrained(
"google/vit-base-patch16-224-in21k", "bertin-project/bertin-roberta-base-spanish")
# set decoder config to causal lm
model.config.decoder.is_decoder = True
model.config.decoder.add_cross_attention = True
# set special tokens used for creating the decoder_input_ids from the labels
model.config.decoder_start_token_id = processor.tokenizer.cls_token_id
model.config.pad_token_id = processor.tokenizer.pad_token_id
# make sure vocab size is set correctly
model.config.vocab_size = model.config.decoder.vocab_size
# set beam search parameters
model.config.eos_token_id = processor.tokenizer.sep_token_id
model.config.max_length = 64
model.config.early_stopping = True
model.config.no_repeat_ngram_size = 3
model.config.length_penalty = 2.0
model.config.num_beams = 4
# ensure that randomly initialized cross-attention layers are added
assert model.config.decoder.is_decoder is True
assert model.config.decoder.add_cross_attention is True
```
I'm using cer as metric
```
def compute_metrics(pred):
    labels_ids = pred.label_ids
    pred_ids = pred.predictions
    pred_str = processor.batch_decode(pred_ids, skip_special_tokens=True)
    labels_ids[labels_ids == -100] = processor.tokenizer.pad_token_id
    label_str = processor.batch_decode(labels_ids, skip_special_tokens=True)
    cer = cer_metric.compute(predictions=pred_str, references=label_str)
    return {"cer": cer}
```
and the training code is:
```
training_args = Seq2SeqTrainingArguments(
    evaluation_strategy="epoch",
    learning_rate=2e-4,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    weight_decay=0.01,
    save_total_limit=3,
    num_train_epochs=100,
    output_dir="./",
    predict_with_generate=True,
)
# Training
trainer = Seq2SeqTrainer(
    model=model,
    tokenizer=processor.feature_extractor,
    args=training_args,
    compute_metrics=compute_metrics,
    train_dataset=train_ds,
    eval_dataset=val_ds,
    data_collator=default_data_collator
)
trainer.train()
```
But the CER is bad (> 0.9).
This is an output for a dataset element
```
train_dataset[0]
{'pixel_values': tensor([[[ 0.9922, 1.0000, 1.0000, ..., -1.0000, -1.0000, -1.0000],
[ 0.9843, 0.9922, 0.9922, ..., -1.0000, -1.0000, -1.0000],
[ 0.9843, 0.9922, 0.9922, ..., -1.0000, -1.0000, -1.0000],
...,
[-1.0000, -1.0000, -1.0000, ..., -1.0000, -1.0000, -1.0000],
[-1.0000, -1.0000, -1.0000, ..., -1.0000, -1.0000, -1.0000],
[-1.0000, -1.0000, -1.0000, ..., -1.0000, -1.0000, -1.0000]],
[[ 0.9922, 1.0000, 1.0000, ..., -1.0000, -1.0000, -1.0000],
[ 0.9843, 0.9922, 0.9922, ..., -1.0000, -1.0000, -1.0000],
[ 0.9843, 0.9922, 0.9922, ..., -1.0000, -1.0000, -1.0000],
...,
[-1.0000, -1.0000, -1.0000, ..., -1.0000, -1.0000, -1.0000],
[-1.0000, -1.0000, -1.0000, ..., -1.0000, -1.0000, -1.0000],
[-1.0000, -1.0000, -1.0000, ..., -1.0000, -1.0000, -1.0000]],
[[ 0.9922, 1.0000, 1.0000, ..., -1.0000, -1.0000, -1.0000],
[ 0.9843, 0.9922, 0.9922, ..., -1.0000, -1.0000, -1.0000],
[ 0.9843, 0.9922, 0.9922, ..., -1.0000, -1.0000, -1.0000],
...,
[-1.0000, -1.0000, -1.0000, ..., -1.0000, -1.0000, -1.0000],
[-1.0000, -1.0000, -1.0000, ..., -1.0000, -1.0000, -1.0000],
[-1.0000, -1.0000, -1.0000, ..., -1.0000, -1.0000, -1.0000]]]),
'labels': tensor([ 0, 1323, 344, 2858, 11966, 66, 11507, 3298, 344, 14783,
66, 2, -100, -100, -100, -100, -100, -100, -100, -100,
-100, -100, -100, -100, -100, -100, -100, -100, -100, -100,
-100, -100, -100, -100, -100, -100, -100, -100, -100, -100,
-100, -100, -100, -100, -100, -100, -100, -100, -100, -100,
-100, -100, -100, -100, -100, -100, -100, -100, -100, -100,
-100, -100, -100, -100, -100, -100, -100, -100, -100, -100,
-100, -100, -100, -100, -100, -100, -100, -100, -100, -100,
-100, -100, -100, -100, -100, -100, -100, -100, -100, -100,
-100, -100, -100, -100, -100, -100, -100, -100, -100, -100,
-100, -100, -100, -100, -100, -100, -100, -100, -100, -100,
-100, -100, -100, -100, -100, -100, -100, -100, -100, -100,
-100, -100, -100, -100, -100, -100, -100, -100])}
```
I have an error, but I don't know where, so any help or advice is welcome.
Thanks very much for everything
### Expected behavior
I expect to train a handwriting recognition system for Spanish using TrOCR
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23700/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23700/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23699
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23699/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23699/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23699/events
|
https://github.com/huggingface/transformers/pull/23699
| 1,722,388,388 |
PR_kwDOCUB6oc5RKdd4
| 23,699 |
Skip `TFCvtModelTest::test_keras_fit_mixed_precision` for now
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Here is the full error when running\r\n```bash\r\npython3 -m pytest -v tests/models/cvt/test_modeling_tf_cvt.py::TFCvtModelTest::test_keras_fit_mixed_precision\r\n```\r\n\r\n## Error\r\n```bash\r\nself = <tests.models.cvt.test_modeling_tf_cvt.TFCvtModelTest testMethod=test_keras_fit_mixed_precision>\r\n\r\n def test_keras_fit_mixed_precision(self):\r\n policy = tf.keras.mixed_precision.Policy(\"mixed_float16\")\r\n tf.keras.mixed_precision.set_global_policy(policy)\r\n> super().test_keras_fit()\r\n\r\ntests/models/cvt/test_modeling_tf_cvt.py:192: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _\r\ntests/test_modeling_tf_common.py:1585: in test_keras_fit\r\n history1 = model.fit(\r\n/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:70: in error_handler\r\n raise e.with_traceback(filtered_tb) from None\r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _\r\n\r\nself = <transformers.models.cvt.modeling_tf_cvt.TFCvtForImageClassification object at 0x7f4c0b7f10a0>\r\ndata = {'labels': <tf.Tensor: shape=(13,), dtype=int32, numpy=array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], dtype=int32)>, '...,\r\n [0.38049886, 0.97876924, 0.96599656, ..., 0.5474588 ,\r\n 0.8447144 , 0.1995452 ]]]], dtype=float32)>}\r\n\r\n def train_step(self, data):\r\n \"\"\"\r\n A modification of Keras's default `train_step` that correctly handles matching outputs to labels for our models\r\n and supports directly training on the loss output head. In addition, it ensures input keys are copied to the\r\n labels where appropriate. It will also copy label keys into the input dict when using the dummy loss, to ensure\r\n that they are available to the model during the forward pass.\r\n \"\"\"\r\n \r\n # We hardcode the most common renamings; models with weirder names can set `self._label_to_output_map`\r\n arg_names = list(dict(inspect.signature(self.call).parameters).keys())\r\n label_kwargs = find_labels(self.__class__)\r\n label_to_output = self.get_label_to_output_name_mapping()\r\n output_to_label = {val: key for key, val in label_to_output.items()}\r\n if not self._using_dummy_loss and parse(tf.__version__) < parse(\"2.11.0\"):\r\n # Newer TF train steps leave this out\r\n data = data_adapter.expand_1d(data)\r\n x, y, sample_weight = data_adapter.unpack_x_y_sample_weight(data)\r\n # If the inputs are mutable dictionaries, make a shallow copy of them because we will modify\r\n # them during input/label pre-processing. 
This avoids surprising the user by wrecking their data.\r\n # In addition, modifying mutable Python inputs makes XLA compilation impossible.\r\n if isinstance(x, dict):\r\n x = x.copy()\r\n if isinstance(y, dict):\r\n y = y.copy()\r\n \r\n # When using a dummy loss, we ensure that separate labels are copied to the correct model arguments,\r\n # if those keys are not already present in the input dict\r\n if self._using_dummy_loss and y is not None:\r\n # If y is a tensor and the model only has one label-like input, map y to that input\r\n if len(label_kwargs) == 1 and isinstance(y, tf.Tensor):\r\n if isinstance(x, tf.Tensor):\r\n x = {arg_names[0]: x}\r\n label_kwarg = next(iter(label_kwargs))\r\n if label_kwarg not in x:\r\n x[label_kwarg] = y\r\n # Otherwise, copy keys from y to x as long as they weren't already present in x\r\n elif isinstance(y, dict):\r\n if isinstance(x, tf.Tensor):\r\n x = {arg_names[0]: x}\r\n for key, val in y.items():\r\n if key in arg_names and key not in x:\r\n x[key] = val\r\n elif output_to_label.get(key, None) in arg_names and key not in x:\r\n x[output_to_label[key]] = val\r\n if y is None:\r\n y = {key: val for key, val in x.items() if key in label_kwargs}\r\n if not y and not self._using_dummy_loss:\r\n raise ValueError(\"Could not find label column(s) in input dict and no separate labels were provided!\")\r\n \r\n if isinstance(y, dict):\r\n # Rename labels at this point to match output heads\r\n y = {label_to_output.get(key, key): val for key, val in y.items()}\r\n \r\n # Run forward pass.\r\n with tf.GradientTape() as tape:\r\n if self._using_dummy_loss and \"return_loss\" in arg_names:\r\n y_pred = self(x, training=True, return_loss=True)\r\n else:\r\n y_pred = self(x, training=True)\r\n if self._using_dummy_loss:\r\n loss = self.compiled_loss(y_pred.loss, y_pred.loss, sample_weight, regularization_losses=self.losses)\r\n else:\r\n loss = None\r\n \r\n # This next block matches outputs to label keys. Tensorflow's standard method for doing this\r\n # can get very confused if any of the keys contain nested values (e.g. 
lists/tuples of Tensors)\r\n if isinstance(y, dict) and len(y) == 1:\r\n if list(y.keys())[0] in y_pred.keys():\r\n y_pred = y_pred[list(y.keys())[0]]\r\n elif list(y_pred.keys())[0] == \"loss\":\r\n y_pred = y_pred[1]\r\n else:\r\n y_pred = y_pred[0]\r\n _, y = y.popitem()\r\n elif isinstance(y, dict):\r\n # If the labels are a dict, match keys from the output by name\r\n y_pred = {key: val for key, val in y_pred.items() if key in y}\r\n elif isinstance(y, tuple) or isinstance(y, list):\r\n # If the labels are a tuple/list, match keys to the output by order, skipping the loss.\r\n if list(y_pred.keys())[0] == \"loss\":\r\n y_pred = y_pred.to_tuple()[1:]\r\n else:\r\n y_pred = y_pred.to_tuple()\r\n y_pred = y_pred[: len(y)] # Remove unused fields in case those cause problems\r\n else:\r\n # If the labels are a single tensor, match them to the first non-loss tensor in the output\r\n if list(y_pred.keys())[0] == \"loss\":\r\n y_pred = y_pred[1]\r\n else:\r\n y_pred = y_pred[0]\r\n \r\n if loss is None:\r\n loss = self.compiled_loss(y, y_pred, sample_weight, regularization_losses=self.losses)\r\n \r\n # Run backwards pass.\r\n> self.optimizer.minimize(loss, self.trainable_variables, tape=tape)\r\nE tensorflow.python.framework.errors_impl.UnknownError: Failed to determine best cudnn convolution algorithm for:\r\nE %cudnn-conv.6 = (f16[1,3,3,96]{3,2,1,0}, u8[0]{0}) custom-call(f16[1,5,5,1248]{3,2,1,0} %bitcast.46, f16[96,3,3,16]{3,2,1,0} %transpose.3), window={size=3x3}, dim_labels=b01f_o01i->b01f, feature_group_count=96, custom_call_target=\"__cudnn$convForward\", metadata={op_type=\"Conv2DBackpropFilter\" op_name=\"gradients/Conv2D_grad/Conv2DBackpropFilter\" source_file=\"/usr/local/lib/python3.8/dist-packages/keras/layers/convolutional/base_conv.py\" source_line=286}, backend_config=\"{\\\"conv_result_scale\\\":1,\\\"activation_mode\\\":\\\"0\\\",\\\"side_input_scale\\\":0}\"\r\nE \r\nE Original error: UNKNOWN: CUDNN_STATUS_BAD_PARAM\r\nE in tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc(3588): 'op' CUDNN_BACKEND_OPERATION: cudnnFinalize Failed\r\nE \r\nE To ignore this failure and try to use a fallback algorithm (which may have suboptimal performance), use XLA_FLAGS=--xla_gpu_strict_conv_algorithm_picker=false. Please also file a bug for the root cause of failing autotuning. [Op:__inference___backward__jit_compiled_convolution_op_11189_11200]\r\n\r\nsrc/transformers/modeling_tf_utils.py:1611: UnknownError\r\n```",
"_The documentation is not available anymore as the PR was closed or merged._",
"Ugh, this seems like a nightmare upstream issue alright - I think it's the best use of our time to just leave it unless it starts affecting multiple models, or if we start seeing it outside of the mixed_precision code paths.",
"Actually, it is mixed precision with training. I didn't see any other TF model having this test. Good for me to ignore it!"
] | 1,684 | 1,684 | 1,684 |
COLLABORATOR
| null |
# What does this PR do?
#23339 uses TF 2.12 with CUDA 11.8 (and CUDNN 8700). With those, the test
```bash
tests/models/cvt/test_modeling_tf_cvt.py::TFCvtModelTest::test_keras_fit_mixed_precision
```
gets
```bash
tensorflow.python.framework.errors_impl.UnknownError: Failed to determine best cudnn convolution algorithm for:
```
and affects two other tests
```bash
FAILED tests/models/cvt/test_modeling_tf_cvt.py::TFCvtModelTest::test_pt_tf_model_equivalence - AssertionError: 0.007014513 not less than or equal to 1e-05 : outputs.last_hidden_state: Difference between torch and tf is 0.00701451301574707 (>= 1e-05).
FAILED tests/models/cvt/test_modeling_tf_cvt.py::TFCvtModelIntegrationTest::test_inference_image_classification_head - AssertionError: False is not true
```
**Those 2 tests will pass if `test_keras_fit_mixed_precision` is not run in the same pytest process.** (Probably the GPU/CUDA/CUDNN ends up in a bad state.)
We will have to take a look and fix `test_keras_fit_mixed_precision`. But in the meantime, **let's skip it so it does not affect the other 2 tests.**
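Roughly, the skip looks like the following sketch (decorator usage and reason text are illustrative, not the exact diff):
```python
import unittest

class TFCvtModelTest(unittest.TestCase):
    @unittest.skip(reason="Mixed-precision keras fit leaves CUDA/cuDNN in a bad state and breaks later tests")
    def test_keras_fit_mixed_precision(self):
        ...
```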
@Rocketknight1 If you ever want to take a look at this CUDA/CUDNN/TF issue. (It may be better to open an issue in the TF repo, but it may take 10 years to get a fix.)
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23699/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23699/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23699",
"html_url": "https://github.com/huggingface/transformers/pull/23699",
"diff_url": "https://github.com/huggingface/transformers/pull/23699.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23699.patch",
"merged_at": 1684867667000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23698
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23698/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23698/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23698/events
|
https://github.com/huggingface/transformers/pull/23698
| 1,722,371,591 |
PR_kwDOCUB6oc5RKZxj
| 23,698 |
[`Blip`] Fix blip doctest
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,684 | 1,684 | 1,684 |
CONTRIBUTOR
| null |
# What does this PR do?
Fixes the currently failing Blip doctest:
```python
from PIL import Image
import requests
from transformers import AutoProcessor, BlipForQuestionAnswering
model = BlipForQuestionAnswering.from_pretrained("Salesforce/blip-vqa-base")
processor = AutoProcessor.from_pretrained("Salesforce/blip-vqa-base")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
# training
text = "How many cats are in the picture?"
label = "2"
inputs = processor(images=image, text=text, return_tensors="pt")
labels = processor(text=label, return_tensors="pt").input_ids
inputs["labels"] = labels
outputs = model(**inputs)
loss = outputs.loss
loss.backward()
```
In https://github.com/huggingface/transformers/pull/23153 I removed the redundant token shifting but also removed the assignment of `decoder_input_ids` when they are set to `None`, which is needed for training.
This PR also applies the same fix to TF Blip.
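For illustration, the restored logic is roughly the following sketch (hypothetical helper, not the exact BLIP forward pass):
```python
def resolve_decoder_input_ids(decoder_input_ids, labels):
    # Fall back to the labels when no decoder_input_ids are given, so the
    # training example above still has decoder inputs to condition on.
    if decoder_input_ids is None and labels is not None:
        decoder_input_ids = labels
    return decoder_input_ids
```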
cc @sgugger @ydshieh
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23698/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23698/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23698",
"html_url": "https://github.com/huggingface/transformers/pull/23698",
"diff_url": "https://github.com/huggingface/transformers/pull/23698.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23698.patch",
"merged_at": 1684859144000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23697
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23697/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23697/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23697/events
|
https://github.com/huggingface/transformers/issues/23697
| 1,722,369,138 |
I_kwDOCUB6oc5mqURy
| 23,697 |
Graphormer multi label classification label input format
|
{
"login": "techthiyanes",
"id": 25921035,
"node_id": "MDQ6VXNlcjI1OTIxMDM1",
"avatar_url": "https://avatars.githubusercontent.com/u/25921035?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/techthiyanes",
"html_url": "https://github.com/techthiyanes",
"followers_url": "https://api.github.com/users/techthiyanes/followers",
"following_url": "https://api.github.com/users/techthiyanes/following{/other_user}",
"gists_url": "https://api.github.com/users/techthiyanes/gists{/gist_id}",
"starred_url": "https://api.github.com/users/techthiyanes/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/techthiyanes/subscriptions",
"organizations_url": "https://api.github.com/users/techthiyanes/orgs",
"repos_url": "https://api.github.com/users/techthiyanes/repos",
"events_url": "https://api.github.com/users/techthiyanes/events{/privacy}",
"received_events_url": "https://api.github.com/users/techthiyanes/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi!\r\n\r\nIt's basically a list of ints.\r\n\r\nYou can see an example of a graph with multiple labels with the [ogbg-molcpba dataset](https://huggingface.co/datasets/OGB/ogbg-molpcba). There is a detailed explanation of the types needed as inputs of graph classification in the [blog post on graph classification](https://huggingface.co/blog/graphml-classification) using transformers.\r\n\r\nCan you please tell me what added information you need? ",
"> Hi!\r\n> \r\n> It's basically a list of ints.\r\n> \r\n> You can see an example of a graph with multiple labels with the [ogbg-molcpba dataset](https://huggingface.co/datasets/OGB/ogbg-molpcba). There is a detailed explanation of the types needed as inputs of graph classification in the [blog post on graph classification](https://huggingface.co/blog/graphml-classification) using transformers.\r\n> \r\n> Can you please tell me what added information you need?\r\n\r\n\r\n\r\nWhile trying to train, I'm getting the error message of \r\n\r\nTypeError: _stack_dispatcher() got an unexpected keyword argument 'dim'.\r\n\r\nAt the same time, It is working for regression/Binary classification/multi class classification usecases.",
"Hi!\r\nCould you provide your full stack trace please?",
"> st of ints.\r\n\r\n\r\nPlease find below stack trace:\r\n\r\n/usr/local/lib/python3.10/dist-packages/transformers/optimization.py:407: FutureWarning: This implementation of AdamW is deprecated and will be removed in a future version. Use the PyTorch implementation torch.optim.AdamW instead, or set `no_deprecation_warning=True` to disable this warning\r\n warnings.warn(\r\n╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮\r\n│ in <cell line: 1>:1 │\r\n│ │\r\n│ /usr/local/lib/python3.10/dist-packages/transformers/trainer.py:1664 in train │\r\n│ │\r\n│ 1661 │ │ inner_training_loop = find_executable_batch_size( │\r\n│ 1662 │ │ │ self._inner_training_loop, self._train_batch_size, args.auto_find_batch_size │\r\n│ 1663 │ │ ) │\r\n│ ❱ 1664 │ │ return inner_training_loop( │\r\n│ 1665 │ │ │ args=args, │\r\n│ 1666 │ │ │ resume_from_checkpoint=resume_from_checkpoint, │\r\n│ 1667 │ │ │ trial=trial, │\r\n│ │\r\n│ /usr/local/lib/python3.10/dist-packages/transformers/trainer.py:1909 in _inner_training_loop │\r\n│ │\r\n│ 1906 │ │ │ │ rng_to_sync = True │\r\n│ 1907 │ │ │ │\r\n│ 1908 │ │ │ step = -1 │\r\n│ ❱ 1909 │ │ │ for step, inputs in enumerate(epoch_iterator): │\r\n│ 1910 │ │ │ │ total_batched_samples += 1 │\r\n│ 1911 │ │ │ │ if rng_to_sync: │\r\n│ 1912 │ │ │ │ │ self._load_rng_state(resume_from_checkpoint) │\r\n│ │\r\n│ /usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:633 in __next__ │\r\n│ │\r\n│ 630 │ │ │ if self._sampler_iter is None: │\r\n│ 631 │ │ │ │ # TODO(https://github.com/pytorch/pytorch/issues/76750) │\r\n│ 632 │ │ │ │ self._reset() # type: ignore[call-arg] │\r\n│ ❱ 633 │ │ │ data = self._next_data() │\r\n│ 634 │ │ │ self._num_yielded += 1 │\r\n│ 635 │ │ │ if self._dataset_kind == _DatasetKind.Iterable and \\ │\r\n│ 636 │ │ │ │ │ self._IterableDataset_len_called is not None and \\ │\r\n│ │\r\n│ /usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:677 in _next_data │\r\n│ │\r\n│ 674 │ │\r\n│ 675 │ def _next_data(self): │\r\n│ 676 │ │ index = self._next_index() # may raise StopIteration │\r\n│ ❱ 677 │ │ data = self._dataset_fetcher.fetch(index) # may raise StopIteration │\r\n│ 678 │ │ if self._pin_memory: │\r\n│ 679 │ │ │ data = _utils.pin_memory.pin_memory(data, self._pin_memory_device) │\r\n│ 680 │ │ return data │\r\n│ │\r\n│ /usr/local/lib/python3.10/dist-packages/torch/utils/data/_utils/fetch.py:54 in fetch │\r\n│ │\r\n│ 51 │ │ │ │ data = [self.dataset[idx] for idx in possibly_batched_index] │\r\n│ 52 │ │ else: │\r\n│ 53 │ │ │ data = self.dataset[possibly_batched_index] │\r\n│ ❱ 54 │ │ return self.collate_fn(data) │\r\n│ 55 │\r\n│ │\r\n│ /usr/local/lib/python3.10/dist-packages/transformers/models/graphormer/collating_graphormer.py:1 │\r\n│ 32 in __call__ │\r\n│ │\r\n│ 129 │ │ │ else: # binary classification │\r\n│ 130 │ │ │ │ batch[\"labels\"] = torch.from_numpy(np.concatenate([i[\"labels\"] for i in │\r\n│ 131 │ │ else: # multi task classification, left to float to keep the NaNs │\r\n│ ❱ 132 │ │ │ batch[\"labels\"] = torch.from_numpy(np.stack([i[\"labels\"] for i in features], │\r\n│ 133 │ │ │\r\n│ 134 │ │ return batch │\r\n│ 135 │\r\n│ in stack:179 │\r\n╰──────────────────────────────────────────────────────────────────────────────────────────────────╯\r\nTypeError: _stack_dispatcher() got an unexpected keyword argument 'dim'",
"Hi @techthiyanes ,\r\nCould you please provide the command you launched or a code snippet so I can make sure I'm working on the same thing as you?",
"Hi @clefourrier ,\r\n\r\nThank you for your time and response.\r\nPlease find below code snippet that i have tried where num_classes are not passed inside arguments as it's multi label classification.\r\n\r\n# -*- coding: utf-8 -*-\r\n\"\"\"Untitled334.ipynb\r\n\r\nAutomatically generated by Colaboratory.\r\n\r\nOriginal file is located at\r\n https://colab.research.google.com/drive/1Xnz4vI75fkIdQVT6wKKiuDipoQzO4uZ1\r\n\"\"\"\r\n\r\n!pip install -q -U datasets transformers Cython accelerate\r\n\r\n!pip install -q -U matplotlib networkx\r\n\r\nfrom transformers.utils import is_cython_available\r\nprint(\"Cython is installed:\", is_cython_available())\r\n\r\nfrom datasets import load_dataset \r\ndataset = load_dataset(\"OGB/ogbg-molpcba\")\r\ndataset['train'] = dataset['train'].select(list(range(1000)))\r\ndataset['test'] = dataset['test'].select(list(range(100)))\r\ndataset['validation'] = dataset['validation'].select(list(range(100)))\r\nfrom datasets import load_metric\r\nmetric = load_metric(\"accuracy\")\r\nimport networkx as nx\r\nimport matplotlib.pyplot as plt\r\n# We want to plot the first train graph\r\ngraph = dataset[\"train\"][0]\r\nedges = graph[\"edge_index\"]\r\nnum_edges = len(edges[0])\r\nnum_nodes = graph[\"num_nodes\"]\r\n# Conversion to networkx format\r\nG = nx.Graph()\r\nG.add_nodes_from(range(num_nodes))\r\nG.add_edges_from([(edges[0][i], edges[1][i]) for i in range(num_edges)])\r\n# Plot\r\nnx.draw(G)\r\n\r\ndataset\r\n\r\nfrom transformers.models.graphormer.collating_graphormer import preprocess_item, GraphormerDataCollator\r\ndataset_processed = dataset.map(preprocess_item, batched=False)\r\n# split up training into training + validation\r\ntrain_ds = dataset_processed['train']\r\nval_ds = dataset_processed['validation']\r\n\r\nfrom transformers import GraphormerForGraphClassification\r\n\r\nmodel_checkpoint = \"clefourrier/graphormer-base-pcqm4mv2\" # pre-trained model from which to fine-tune\r\n\r\nmodel = GraphormerForGraphClassification.from_pretrained(\r\n model_checkpoint, \r\n # num_classes=2, Commenting due to multi label\r\n ignore_mismatched_sizes = True, # provide this in case you're planning to fine-tune an already fine-tuned checkpoint\r\n)\r\n\r\nfrom transformers import TrainingArguments, Trainer\r\ntraining_args = TrainingArguments(\r\n \"graph-classification\",\r\n logging_dir=\"graph-classification\",\r\n per_device_train_batch_size=64,\r\n per_device_eval_batch_size=64,\r\n auto_find_batch_size=True, # batch size can be changed automatically to prevent OOMs\r\n gradient_accumulation_steps=10,\r\n dataloader_num_workers=4, \r\n num_train_epochs=20,\r\n evaluation_strategy=\"epoch\",\r\n logging_strategy=\"epoch\",\r\n # push_to_hub=False,\r\n)\r\ntrainer = Trainer(\r\n model=model,\r\n args=training_args,\r\n train_dataset=train_ds,\r\n eval_dataset=val_ds,\r\n data_collator=GraphormerDataCollator()\r\n)\r\n\r\ntrainer.train()\r\n\r\n!pip install -q -U datasets transformers Cython accelerate\r\n\r\n!pip install -q -U matplotlib networkx\r\n\r\nfrom transformers.utils import is_cython_available\r\nprint(\"Cython is installed:\", is_cython_available())\r\n\r\nfrom datasets import load_dataset \r\ndataset = load_dataset(\"OGB/ogbg-molpcba\")\r\ndataset['train'] = dataset['train'].select(list(range(1000)))\r\ndataset['test'] = dataset['test'].select(list(range(100)))\r\ndataset['validation'] = dataset['validation'].select(list(range(100)))\r\nfrom datasets import load_metric\r\nmetric = load_metric(\"accuracy\")\r\nimport networkx as nx\r\nimport 
matplotlib.pyplot as plt\r\n# We want to plot the first train graph\r\ngraph = dataset[\"train\"][0]\r\nedges = graph[\"edge_index\"]\r\nnum_edges = len(edges[0])\r\nnum_nodes = graph[\"num_nodes\"]\r\n# Conversion to networkx format\r\nG = nx.Graph()\r\nG.add_nodes_from(range(num_nodes))\r\nG.add_edges_from([(edges[0][i], edges[1][i]) for i in range(num_edges)])\r\n# Plot\r\nnx.draw(G)\r\n\r\ndataset\r\n\r\nfrom transformers.models.graphormer.collating_graphormer import preprocess_item, GraphormerDataCollator\r\ndataset_processed = dataset.map(preprocess_item, batched=False)\r\n# split up training into training + validation\r\ntrain_ds = dataset_processed['train']\r\nval_ds = dataset_processed['validation']\r\n\r\nfrom transformers import GraphormerForGraphClassification\r\n\r\nmodel_checkpoint = \"clefourrier/graphormer-base-pcqm4mv2\" # pre-trained model from which to fine-tune\r\n\r\nmodel = GraphormerForGraphClassification.from_pretrained(\r\n model_checkpoint, \r\n # num_classes=2, Commenting due to multi label\r\n ignore_mismatched_sizes = True, # provide this in case you're planning to fine-tune an already fine-tuned checkpoint\r\n)\r\n\r\nfrom transformers import TrainingArguments, Trainer\r\ntraining_args = TrainingArguments(\r\n \"graph-classification\",\r\n logging_dir=\"graph-classification\",\r\n per_device_train_batch_size=64,\r\n per_device_eval_batch_size=64,\r\n auto_find_batch_size=True, # batch size can be changed automatically to prevent OOMs\r\n gradient_accumulation_steps=10,\r\n dataloader_num_workers=4, \r\n num_train_epochs=20,\r\n evaluation_strategy=\"epoch\",\r\n logging_strategy=\"epoch\",\r\n # push_to_hub=False,\r\n)\r\ntrainer = Trainer(\r\n model=model,\r\n args=training_args,\r\n train_dataset=train_ds,\r\n eval_dataset=val_ds,\r\n data_collator=GraphormerDataCollator()\r\n)\r\n\r\ntrainer.train()\r\n\r\nThanks\r\nThiya",
"Ok, thank you very much for reporting! \r\n\r\nI can reproduce your issue, I'll fix it asap",
"I fixed this problem in the PR above (now we need to wait for the fix to be merged, which will not be instantaneous). Thank you very much for reporting! :hugs: \r\n\r\nNote that for multi-label classification, you will also need to provide the correct number of labels (in this case 128) to `num_classes`, like so:\r\n```python\r\nmodel_checkpoint = \"clefourrier/graphormer-base-pcqm4mv2\" # pre-trained model from which to fine-tune\r\n\r\nmodel = GraphormerForGraphClassification.from_pretrained(\r\n model_checkpoint,\r\n num_classes=128, # HERE\r\n ignore_mismatched_sizes = True, # provide this in case you're planning to fine-tune an already fine-tuned checkpoint\r\n)\r\n```"
] | 1,684 | 1,685 | 1,685 |
NONE
| null |
### System Info
NA
### Who can help?
@clefourrier
Kindly share the input format for multi-label classification, especially on the label side.
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
NA
### Expected behavior
NA
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23697/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23697/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23696
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23696/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23696/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23696/events
|
https://github.com/huggingface/transformers/issues/23696
| 1,722,317,462 |
I_kwDOCUB6oc5mqHqW
| 23,696 |
Unable to download Google/vit-base-patch-16-224 / Getting 404 repo not found error
|
{
"login": "rsadaphule",
"id": 8443170,
"node_id": "MDQ6VXNlcjg0NDMxNzA=",
"avatar_url": "https://avatars.githubusercontent.com/u/8443170?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rsadaphule",
"html_url": "https://github.com/rsadaphule",
"followers_url": "https://api.github.com/users/rsadaphule/followers",
"following_url": "https://api.github.com/users/rsadaphule/following{/other_user}",
"gists_url": "https://api.github.com/users/rsadaphule/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rsadaphule/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rsadaphule/subscriptions",
"organizations_url": "https://api.github.com/users/rsadaphule/orgs",
"repos_url": "https://api.github.com/users/rsadaphule/repos",
"events_url": "https://api.github.com/users/rsadaphule/events{/privacy}",
"received_events_url": "https://api.github.com/users/rsadaphule/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"As the error indicates, `google/vit-base-patch-16-224` does not exist on the Hub. You can browse ViT models [here](https://huggingface.co/models?other=vit).",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,684 | 1,688 | 1,688 |
NONE
| null |
### System Info
I am running the following code on Google Colab to download a pretrained Vision Transformer. I am authenticating with a proper write-access token, but I get a repo-not-found error.
Code:
```python
from transformers import AutoModelForImageClassification, AutoFeatureExtractor
import torch

model_id = 'google/vit-base-patch-16-224'
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = AutoModelForImageClassification.from_pretrained(model_id, use_auth_token=WRITE_TOKEN_HF).to(device)
model.eval()
```
Env: Google Colab
Error:
---------------------------------------------------------------------------
HTTPError Traceback (most recent call last)
[/usr/local/lib/python3.10/dist-packages/huggingface_hub/utils/_errors.py](https://localhost:8080/#) in hf_raise_for_status(response, endpoint_name)
238 try:
--> 239 response.raise_for_status()
240 except HTTPError as e:
12 frames
HTTPError: 404 Client Error: Not Found for url: https://huggingface.co/google/vit-base-patch-16-224/resolve/main/config.json
The above exception was the direct cause of the following exception:
RepositoryNotFoundError Traceback (most recent call last)
RepositoryNotFoundError: 404 Client Error. (Request ID: Root=1-646cdb2a-00843123602c48d04fc1bc45)
Repository Not Found for url: https://huggingface.co/google/vit-base-patch-16-224/resolve/main/config.json.
Please make sure you specified the correct `repo_id` and `repo_type`.
If the repo is private, make sure you are authenticated.
During handling of the above exception, another exception occurred:
OSError Traceback (most recent call last)
[/usr/local/lib/python3.10/dist-packages/transformers/utils/hub.py](https://localhost:8080/#) in cached_file(path_or_repo_id, filename, cache_dir, force_download, resume_download, proxies, use_auth_token, revision, local_files_only, subfolder, user_agent, _raise_exceptions_for_missing_entries, _raise_exceptions_for_connection_errors, _commit_hash)
422
423 except RepositoryNotFoundError:
--> 424 raise EnvironmentError(
425 f"{path_or_repo_id} is not a local folder and is not a valid model identifier "
426 "listed on '[https://huggingface.co/models'\nIf](https://huggingface.co/models'/nIf) this is a private repository, make sure to "
OSError: google/vit-base-patch-16-224 is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models'
If this is a private repository, make sure to pass a token having permission to this repo with `use_auth_token` or log in with `huggingface-cli login` and pass `use_auth_token=True`.
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
from transformers import AutoModelForImageClassification, AutoFeatureExtractor
import torch

model_id = 'google/vit-base-patch-16-224'
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = AutoModelForImageClassification.from_pretrained(model_id, use_auth_token=WRITE_TOKEN_HF).to(device)
model.eval()
```
### Expected behavior
The model should be downloaded without error
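For reference, the 404 is most likely caused by the extra hyphen in the checkpoint name: the published checkpoint is `google/vit-base-patch16-224` (no hyphen between `patch` and `16`). A minimal sketch of the corrected load, assuming that is the intended checkpoint:

```python
from transformers import AutoModelForImageClassification
import torch

# note the checkpoint name: "patch16", not "patch-16"
model_id = "google/vit-base-patch16-224"

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = AutoModelForImageClassification.from_pretrained(model_id).to(device)
model.eval()
```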
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23696/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23696/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23694
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23694/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23694/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23694/events
|
https://github.com/huggingface/transformers/pull/23694
| 1,722,275,345 |
PR_kwDOCUB6oc5RKEkC
| 23,694 |
Fix a `BridgeTower` test
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks for the fix!"
] | 1,684 | 1,684 | 1,684 |
COLLABORATOR
| null |
# What does this PR do?
A fix is required after #23029.
So far on `main`, the following error is raised:
```bash
tests/models/bridgetower/test_modeling_bridgetower.py::BridgeTowerModelTrainingTest::test_training
(line 656) AssertionError: unexpectedly None : Gradients should not be None - got None for bridgetower.cross_modal_image_layers.1.attention.self.query.weight
```
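For context, the failing check runs a training forward/backward pass and then asserts that every trainable parameter received a gradient — roughly like the sketch below (a simplified illustration that assumes `model` and `inputs` are already set up, not the exact test code):

```python
loss = model(**inputs).loss
loss.backward()

for name, param in model.named_parameters():
    if param.requires_grad:
        # every trainable parameter should have received a gradient
        assert param.grad is not None, f"Gradients should not be None - got None for {name}"
```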
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23694/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23694/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23694",
"html_url": "https://github.com/huggingface/transformers/pull/23694",
"diff_url": "https://github.com/huggingface/transformers/pull/23694.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23694.patch",
"merged_at": 1684855977000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23693
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23693/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23693/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23693/events
|
https://github.com/huggingface/transformers/issues/23693
| 1,722,228,180 |
I_kwDOCUB6oc5mpx3U
| 23,693 |
ZeRO 3 error: expected the next 4 parameters in the parameter fetch queue to be ... but got ()
|
{
"login": "dcaffo98",
"id": 54015844,
"node_id": "MDQ6VXNlcjU0MDE1ODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/54015844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dcaffo98",
"html_url": "https://github.com/dcaffo98",
"followers_url": "https://api.github.com/users/dcaffo98/followers",
"following_url": "https://api.github.com/users/dcaffo98/following{/other_user}",
"gists_url": "https://api.github.com/users/dcaffo98/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dcaffo98/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dcaffo98/subscriptions",
"organizations_url": "https://api.github.com/users/dcaffo98/orgs",
"repos_url": "https://api.github.com/users/dcaffo98/repos",
"events_url": "https://api.github.com/users/dcaffo98/events{/privacy}",
"received_events_url": "https://api.github.com/users/dcaffo98/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @dcaffo98, it'd be the best to file this directly with Deepspeed https://github.com/microsoft/DeepSpeed/issues since the issue is on the Deepspeed side.\r\n\r\nIn general such issues relate to code that changes the model after it was initialized, but there are many complex nuanced situations so it's best to talk to the DS developers directly.",
"I've filed the issue to the DS team as well. It may be worth noting that the error happens right after the first detected OVERFLOW in the run. However, multiple overflows occurred during the previous 24h of training (before resuming from the checkpoint).",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,684 | 1,688 | 1,688 |
NONE
| null |
### System Info
- `transformers` version: 4.27.4
- Platform: Linux-5.4.0-107-generic-x86_64-with-glibc2.31
- Python version: 3.9.16
- Huggingface_hub version: 0.13.3
- PyTorch version (GPU?): 2.0.0+cu117 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: distributed with 2 NVIDIA RTX A5000 GPUs
### Who can help?
@stas00 may be the more suited for this since the issue is probably related to deepspeed
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Currently, I'm struggling to make a reproducible script, as the error happens suddenly during training with ZeRO stage 3 activated and I'm using a custom dataset. The task is contrastive-loss pretraining. The backbone is the [GLPN](https://huggingface.co/docs/transformers/model_doc/glpn) encoder model, followed by a custom Attention Pooling module. The parameters causing the issues belong to this custom Attention Pooling module (the `attn_pool.*` biases in the traceback below).
Deepspeed version is `0.9.1`
The issue may be related to [this](https://github.com/microsoft/DeepSpeed/issues/1938), although the stack trace is not identical
The error shows up only when resuming from a checkpoint (`resume_from_checkpoint=/path/to/checkpoint`).
I'm attaching the log output (`error.txt`), along with the deepspeed ZeRO 3 configuration (`config_adam_zero3.txt`) I'm using, plus the custom model implementation (`modeling_custom_apr.txt`).
[config_adam_zero3.txt](https://github.com/huggingface/transformers/files/11545179/config_adam_zero3.txt)
[error.txt](https://github.com/huggingface/transformers/files/11545180/error.txt)
[modeling_custom_apr.txt](https://github.com/huggingface/transformers/files/11545354/modeling_custom_apr.txt)
This is the last part of the log where the error shows up
```
5[2023-05-23 14:02:25,781] [INFO] [logging.py:96:log_dist] [Rank 0] step=14290, skipped=17, lr=[0.00014992267618019753], mom=[(0.9, 0.999)]
[2023-05-23 14:02:25,783] [INFO] [timer.py:199:stop] epoch=0/micro_step=2070/global_step=2070, RunningAvgSamplesPerSec=8.340844178398823, CurrSamplesPerSec=8.091999012978865, MemAllocated=0.4GB, MaxMemAllocated=19.03GB
{'loss': 1.0438, 'learning_rate': 0.00014992267618019753, 'epoch': 3.68}
[2023-05-23 14:02:36,757] [INFO] [loss_scaler.py:188:update_scale] [deepspeed] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 32768, but hysteresis is 2. Reducing hysteresis to 1
%|▍ | 14287/305600 [3:34:27<454:15:14, 5.61s/it]
5%|▍ | 14288/305600 [3:34:33<467:44:45, 5.78s/it]
5%|▍ | 14289/305600 [3:34:38<455:08:12, 5.62s/it]
5%|▍ | 14290/305600 [3:34:43<443:40:08, 5.48s/it]
5%|▍ | 14290/305600 [3:34:43<443:40:08, 5.48s/it]
5%|▍ | 14291/305600 [3:34:49<448:35:16, 5.54s/it]
5%|▍ | 14292/305600 [3:34:54<442:30:06, 5.47s/it]Traceback (most recent call last):
File "/mnt/beegfs/scratch/dcaffagni/runs/clpt_gpu_2_lr_154_cos_10k_wu/maticad_side/train.py", line 96, in <module>
train_out = trainer.train(resume_from_checkpoint=training_args.resume_from_checkpoint)
File "/homes/dcaffagni/.conda/envs/glpn_hf/lib/python3.9/site-packages/transformers/trainer.py", line 1633, in train
return inner_training_loop(
File "/homes/dcaffagni/.conda/envs/glpn_hf/lib/python3.9/site-packages/transformers/trainer.py", line 1902, in _inner_training_loop
tr_loss_step = self.training_step(model, inputs)
File "/homes/dcaffagni/.conda/envs/glpn_hf/lib/python3.9/site-packages/transformers/trainer.py", line 2661, in training_step
loss = self.deepspeed.backward(loss)
File "/homes/dcaffagni/.conda/envs/glpn_hf/lib/python3.9/site-packages/deepspeed/utils/nvtx.py", line 15, in wrapped_fn
ret_val = func(*args, **kwargs)
File "/homes/dcaffagni/.conda/envs/glpn_hf/lib/python3.9/site-packages/deepspeed/runtime/engine.py", line 1796, in backward
self.optimizer.backward(loss, retain_graph=retain_graph)
File "/homes/dcaffagni/.conda/envs/glpn_hf/lib/python3.9/site-packages/deepspeed/utils/nvtx.py", line 15, in wrapped_fn
ret_val = func(*args, **kwargs)
File "/homes/dcaffagni/.conda/envs/glpn_hf/lib/python3.9/site-packages/deepspeed/runtime/zero/stage3.py", line 1923, in backward
self.loss_scaler.backward(loss.float(), retain_graph=retain_graph)
File "/homes/dcaffagni/.conda/envs/glpn_hf/lib/python3.9/site-packages/deepspeed/runtime/fp16/loss_scaler.py", line 62, in backward
scaled_loss.backward(retain_graph=retain_graph)
File "/homes/dcaffagni/.conda/envs/glpn_hf/lib/python3.9/site-packages/torch/_tensor.py", line 487, in backward
torch.autograd.backward(
File "/homes/dcaffagni/.conda/envs/glpn_hf/lib/python3.9/site-packages/torch/autograd/__init__.py", line 200, in backward
Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
File "/homes/dcaffagni/.conda/envs/glpn_hf/lib/python3.9/site-packages/torch/autograd/function.py", line 274, in apply
return user_fn(self, *args)
File "/homes/dcaffagni/.conda/envs/glpn_hf/lib/python3.9/site-packages/deepspeed/runtime/zero/parameter_offload.py", line 169, in backward
ctx.pre_backward_function(ctx.module)
File "/homes/dcaffagni/.conda/envs/glpn_hf/lib/python3.9/site-packages/deepspeed/utils/nvtx.py", line 15, in wrapped_fn
ret_val = func(*args, **kwargs)
File "/homes/dcaffagni/.conda/envs/glpn_hf/lib/python3.9/site-packages/deepspeed/runtime/zero/parameter_offload.py", line 419, in _run_before_backward_function
self.pre_sub_module_backward_function(sub_module)
File "/homes/dcaffagni/.conda/envs/glpn_hf/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/homes/dcaffagni/.conda/envs/glpn_hf/lib/python3.9/site-packages/deepspeed/runtime/zero/parameter_offload.py", line 500, in pre_sub_module_backward_function
param_coordinator.fetch_sub_module(sub_module)
File "/homes/dcaffagni/.conda/envs/glpn_hf/lib/python3.9/site-packages/deepspeed/utils/nvtx.py", line 15, in wrapped_fn
ret_val = func(*args, **kwargs)
File "/homes/dcaffagni/.conda/envs/glpn_hf/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
Traceback (most recent call last):
File "/mnt/beegfs/scratch/dcaffagni/runs/clpt_gpu_2_lr_154_cos_10k_wu/maticad_side/train.py", line 96, in <module>
train_out = trainer.train(resume_from_checkpoint=training_args.resume_from_checkpoint)
File "/homes/dcaffagni/.conda/envs/glpn_hf/lib/python3.9/site-packages/transformers/trainer.py", line 1633, in train
return inner_training_loop(
File "/homes/dcaffagni/.conda/envs/glpn_hf/lib/python3.9/site-packages/transformers/trainer.py", line 1902, in _inner_training_loop
tr_loss_step = self.training_step(model, inputs)
File "/homes/dcaffagni/.conda/envs/glpn_hf/lib/python3.9/site-packages/transformers/trainer.py", line 2661, in training_step
loss = self.deepspeed.backward(loss)
File "/homes/dcaffagni/.conda/envs/glpn_hf/lib/python3.9/site-packages/deepspeed/utils/nvtx.py", line 15, in wrapped_fn
ret_val = func(*args, **kwargs)
File "/homes/dcaffagni/.conda/envs/glpn_hf/lib/python3.9/site-packages/deepspeed/runtime/engine.py", line 1796, in backward
self.optimizer.backward(loss, retain_graph=retain_graph)
File "/homes/dcaffagni/.conda/envs/glpn_hf/lib/python3.9/site-packages/deepspeed/utils/nvtx.py", line 15, in wrapped_fn
ret_val = func(*args, **kwargs)
File "/homes/dcaffagni/.conda/envs/glpn_hf/lib/python3.9/site-packages/deepspeed/runtime/zero/stage3.py", line 1923, in backward
self.loss_scaler.backward(loss.float(), retain_graph=retain_graph)
File "/homes/dcaffagni/.conda/envs/glpn_hf/lib/python3.9/site-packages/deepspeed/runtime/fp16/loss_scaler.py", line 62, in backward
scaled_loss.backward(retain_graph=retain_graph)
File "/homes/dcaffagni/.conda/envs/glpn_hf/lib/python3.9/site-packages/torch/_tensor.py", line 487, in backward
torch.autograd.backward(
File "/homes/dcaffagni/.conda/envs/glpn_hf/lib/python3.9/site-packages/torch/autograd/__init__.py", line 200, in backward
Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
File "/homes/dcaffagni/.conda/envs/glpn_hf/lib/python3.9/site-packages/torch/autograd/function.py", line 274, in apply
return user_fn(self, *args)
File "/homes/dcaffagni/.conda/envs/glpn_hf/lib/python3.9/site-packages/deepspeed/runtime/zero/parameter_offload.py", line 169, in backward
ctx.pre_backward_function(ctx.module)
File "/homes/dcaffagni/.conda/envs/glpn_hf/lib/python3.9/site-packages/deepspeed/utils/nvtx.py", line 15, in wrapped_fn
ret_val = func(*args, **kwargs)
[modeling_custom_apr.txt](https://github.com/huggingface/transformers/files/11545331/modeling_custom_apr.txt)
File "/homes/dcaffagni/.conda/envs/glpn_hf/lib/python3.9/site-packages/deepspeed/runtime/zero/parameter_offload.py", line 419, in _run_before_backward_function
self.pre_sub_module_backward_function(sub_module)
File "/homes/dcaffagni/.conda/envs/glpn_hf/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/homes/dcaffagni/.conda/envs/glpn_hf/lib/python3.9/site-packages/deepspeed/runtime/zero/parameter_offload.py", line 500, in pre_sub_module_backward_function
param_coordinator.fetch_sub_module(sub_module)
File "/homes/dcaffagni/.conda/envs/glpn_hf/lib/python3.9/site-packages/deepspeed/utils/nvtx.py", line 15, in wrapped_fn
ret_val = func(*args, **kwargs)
File "/homes/dcaffagni/.conda/envs/glpn_hf/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/homes/dcaffagni/.conda/envs/glpn_hf/lib/python3.9/site-packages/deepspeed/runtime/zero/partitioned_param_coordinator.py", line 288, in fetch_sub_module
raise RuntimeError(
RuntimeError: tracing error at step 999:
module id: 921, training: True
expected the next 4 parameters in the parameter fetch queue to be ({'id': 'name=attn_pool.k_proj.bias id=915', 'status': 'AVAILABLE', 'numel': 512, 'ds_numel': 512, 'shape': (512,), 'ds_shape': (512,), 'requires_grad': True, 'grad_shape': None, 'persist': True, 'active_sub_modules': {921}}, {'id': 'name=attn_pool.v_proj.bias id=919', 'status': 'AVAILABLE', 'numel': 512, 'ds_numel': 512, 'shape': (512,), 'ds_shape': (512,), 'requires_grad': True, 'grad_shape': None, 'persist': True, 'active_sub_modules': {921}}, {'id': 'name=attn_pool.c_proj.bias id=921', 'status': 'AVAILABLE', 'numel': 512, 'ds_numel': 512, 'shape': (512,), 'ds_shape': (512,), 'requires_grad': True, 'grad_shape': None, 'persist': True, 'active_sub_modules': {921}}, {'id': 'name=attn_pool.q_proj.bias id=917', 'status': 'AVAILABLE', 'numel': 512, 'ds_numel': 512, 'shape': (512,), 'ds_shape': (512,), 'requires_grad': True, 'grad_shape': None, 'persist': True, 'active_sub_modules': {921}})
but got
().
```
### Expected behavior
After resuming from a checkpoint, the training should proceed fine, as it happens when training with the same setup from scratch.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23693/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23693/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23692
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23692/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23692/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23692/events
|
https://github.com/huggingface/transformers/issues/23692
| 1,722,160,679 |
I_kwDOCUB6oc5mphYn
| 23,692 |
Token Alignment
|
{
"login": "akesh1235",
"id": 125154243,
"node_id": "U_kgDOB3Wzww",
"avatar_url": "https://avatars.githubusercontent.com/u/125154243?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/akesh1235",
"html_url": "https://github.com/akesh1235",
"followers_url": "https://api.github.com/users/akesh1235/followers",
"following_url": "https://api.github.com/users/akesh1235/following{/other_user}",
"gists_url": "https://api.github.com/users/akesh1235/gists{/gist_id}",
"starred_url": "https://api.github.com/users/akesh1235/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/akesh1235/subscriptions",
"organizations_url": "https://api.github.com/users/akesh1235/orgs",
"repos_url": "https://api.github.com/users/akesh1235/repos",
"events_url": "https://api.github.com/users/akesh1235/events{/privacy}",
"received_events_url": "https://api.github.com/users/akesh1235/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Looks like this should go on the Datasets repo, t doesn't seem linked to Transformers :-)",
"> Looks like this should go on the Datasets repo, t doesn't seem linked to Transformers :-)\r\n\r\nCould you help me in anyway to fix this?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,684 | 1,688 | 1,688 |
NONE
| null |
`data`
```
DatasetDict({
    train: Dataset({
        features: ['input', 'output'],
        num_rows: 4500
    })
    test: Dataset({
        features: ['input', 'output'],
        num_rows: 500
    })
})
```
**# input (incorrect sentence)**
`data['train'][0]['input']`
**>>** 'We are meet sunday 10am12pmET in Crown Heights Brooklyn New York'
**# output (correct sentence)**
`data['train'][0]['output']`
**>>** 'We meet Sundays 10am-12pmET in Crown Heights, Brooklyn, New York.'
**I want to align the output tokens with the input**
```python
# tokenize both inputs and targets
def tokenize_fn(batch):
    # tokenize the input sequence first
    # this populates input_ids, attention_mask, etc.
    tokenized_inputs = tokenizer(
        batch['input']
    )
    labels_batch = tokenizer.tokenize(batch['output'])  # original targets
    aligned_labels_batch = []
    for i, labels in enumerate(labels_batch):
        word_ids = tokenized_inputs[i].word_ids()
        aligned_labels_batch.append(align_targets(labels, word_ids))  # align_targets is another user-defined function called here
    # recall: the 'target' must be stored in a key called 'labels'
    tokenized_inputs['labels'] = aligned_labels_batch
    return tokenized_inputs
```
```
data.map(
tokenize_fn,
batched=True,
remove_columns=data['train'].column_names,
)
```
When this user-defined function is mapped over every record of the train and test splits, I am getting the following errors:
**1.** **raise DatasetTransformationNotAllowedError(
3457 "Using `.map` in batched mode on a dataset with attached indexes is allowed only if it doesn't create or remove existing examples. You can first run `.drop_index() to remove your index and then re-add it."**
**2.** **TypeError: TextEncodeInput must be Union[TextInputSequence, Tuple[InputSequence, InputSequence]]**
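The `TextEncodeInput` error comes from passing the whole list of target sentences to `tokenizer.tokenize`, which expects a single string. A minimal sketch of a batched `tokenize_fn` that avoids it — assuming a fast tokenizer and keeping `align_targets` as the user-defined helper referenced above — could look like this:

```python
def tokenize_fn(batch):
    # tokenize the (incorrect) input sentences as a batch
    tokenized_inputs = tokenizer(batch['input'])
    # tokenize each target sentence separately: tokenize() takes one string at a time
    labels_batch = [tokenizer.tokenize(text) for text in batch['output']]
    aligned_labels_batch = []
    for i, labels in enumerate(labels_batch):
        # word_ids maps each input token back to its word index
        word_ids = tokenized_inputs.word_ids(batch_index=i)
        aligned_labels_batch.append(align_targets(labels, word_ids))
    # the targets must be stored under the key 'labels'
    tokenized_inputs['labels'] = aligned_labels_batch
    return tokenized_inputs
```

The first error usually means a search index is still attached to the dataset; as the message itself suggests, calling `.drop_index()` before `.map` (and re-adding the index afterwards) typically resolves it.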
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23692/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23692/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23691
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23691/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23691/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23691/events
|
https://github.com/huggingface/transformers/pull/23691
| 1,722,141,428 |
PR_kwDOCUB6oc5RJnaX
| 23,691 |
Fix some docs about what layerdrop does
|
{
"login": "zspo",
"id": 26846598,
"node_id": "MDQ6VXNlcjI2ODQ2NTk4",
"avatar_url": "https://avatars.githubusercontent.com/u/26846598?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zspo",
"html_url": "https://github.com/zspo",
"followers_url": "https://api.github.com/users/zspo/followers",
"following_url": "https://api.github.com/users/zspo/following{/other_user}",
"gists_url": "https://api.github.com/users/zspo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zspo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zspo/subscriptions",
"organizations_url": "https://api.github.com/users/zspo/orgs",
"repos_url": "https://api.github.com/users/zspo/repos",
"events_url": "https://api.github.com/users/zspo/events{/privacy}",
"received_events_url": "https://api.github.com/users/zspo/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"> Thanks for your fix! Just have one syntax comment to propagate on all docs.\r\n\r\nThanks for your suggestion! I will add more commits later."
] | 1,684 | 1,684 | 1,684 |
CONTRIBUTOR
| null |
# What does this PR do?
Fix some configuration docs about what layerdrop does. Copied from `configuration_opt.py`, with the defaults reset to match the init values.
Fixes # (issue)
https://github.com/huggingface/transformers/issues/23351
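For readers wondering what `layerdrop` controls: during training, each layer is skipped with probability `layerdrop` (LayerDrop, Fan et al.), so higher values randomly drop more layers per forward pass. A self-contained toy sketch of the mechanism (illustrative, not the library's actual implementation):

```python
import random

import torch
from torch import nn

class ToyDecoder(nn.Module):
    def __init__(self, num_layers=4, hidden_size=16, layerdrop=0.1):
        super().__init__()
        self.layers = nn.ModuleList(nn.Linear(hidden_size, hidden_size) for _ in range(num_layers))
        self.layerdrop = layerdrop

    def forward(self, hidden_states):
        for layer in self.layers:
            # LayerDrop: during training, skip each layer with probability `layerdrop`
            if self.training and random.uniform(0, 1) < self.layerdrop:
                continue
            hidden_states = layer(hidden_states)
        return hidden_states

model = ToyDecoder()
model.train()
out = model(torch.randn(2, 16))  # some layers are randomly skipped on each call
```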
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger, @stevhliu, @MKhalusova and @sanchit-gandhi
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23691/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23691/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23691",
"html_url": "https://github.com/huggingface/transformers/pull/23691",
"diff_url": "https://github.com/huggingface/transformers/pull/23691.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23691.patch",
"merged_at": 1684867840000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23690
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23690/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23690/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23690/events
|
https://github.com/huggingface/transformers/pull/23690
| 1,722,130,394 |
PR_kwDOCUB6oc5RJlA6
| 23,690 |
feat: add warning for using use_pretrained_backbone with from_pretrained
|
{
"login": "CreatlV",
"id": 6471651,
"node_id": "MDQ6VXNlcjY0NzE2NTE=",
"avatar_url": "https://avatars.githubusercontent.com/u/6471651?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/CreatlV",
"html_url": "https://github.com/CreatlV",
"followers_url": "https://api.github.com/users/CreatlV/followers",
"following_url": "https://api.github.com/users/CreatlV/following{/other_user}",
"gists_url": "https://api.github.com/users/CreatlV/gists{/gist_id}",
"starred_url": "https://api.github.com/users/CreatlV/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CreatlV/subscriptions",
"organizations_url": "https://api.github.com/users/CreatlV/orgs",
"repos_url": "https://api.github.com/users/CreatlV/repos",
"events_url": "https://api.github.com/users/CreatlV/events{/privacy}",
"received_events_url": "https://api.github.com/users/CreatlV/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23690). All of your documentation changes will be reflected on that endpoint.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,684 | 1,688 | 1,688 |
CONTRIBUTOR
| null |
# What does this PR do?
This PR adds a warning when `use_pretrained_backbone` is enabled while also using `from_pretrained`. `from_pretrained` loads its weights after the pretrained backbone is initialized, meaning some weights may be unexpectedly overridden. I am not sure if this case is general enough to warrant a warning. The more general case would be if any weights are loaded before the `from_pretrained` weights. An additional feature would be to allow selectively loading weights using `from_pretrained`, but I understand if that use case is too esoteric.
```
model = DetrForObjectDetection.from_pretrained(
"facebook/detr-resnet-50",
num_labels=len(categories.keys()),
id2label=id2label,
label2id=label2id,
ignore_mismatched_sizes=True,
num_queries=20,
backbone="resnet50",
use_pretrained_backbone=True,
use_timm_backbone=True,
)
```
For the scenario above, `use_pretrained_backbone` will not have any effect, as the `facebook/detr-resnet-50` weights take precedence.
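A rough sketch of the kind of warning being proposed — hypothetical names, not the actual diff:

```python
import logging

logger = logging.getLogger(__name__)

def maybe_warn_pretrained_backbone(config):
    # hypothetical helper called from the from_pretrained loading path
    if getattr(config, "use_pretrained_backbone", False):
        logger.warning(
            "`use_pretrained_backbone=True` is set, but the checkpoint loaded via "
            "`from_pretrained` is applied after the backbone is initialized, so the "
            "pretrained backbone weights may be overridden and have no effect."
        )
```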
## Who can review?
Perhaps this is the area of @sgugger or @amyeroberts
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23690/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23690/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23690",
"html_url": "https://github.com/huggingface/transformers/pull/23690",
"diff_url": "https://github.com/huggingface/transformers/pull/23690.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23690.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/23689
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23689/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23689/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23689/events
|
https://github.com/huggingface/transformers/pull/23689
| 1,722,097,192 |
PR_kwDOCUB6oc5RJd0o
| 23,689 |
#23675 Registering Malay language
|
{
"login": "soongbren",
"id": 58584180,
"node_id": "MDQ6VXNlcjU4NTg0MTgw",
"avatar_url": "https://avatars.githubusercontent.com/u/58584180?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/soongbren",
"html_url": "https://github.com/soongbren",
"followers_url": "https://api.github.com/users/soongbren/followers",
"following_url": "https://api.github.com/users/soongbren/following{/other_user}",
"gists_url": "https://api.github.com/users/soongbren/gists{/gist_id}",
"starred_url": "https://api.github.com/users/soongbren/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/soongbren/subscriptions",
"organizations_url": "https://api.github.com/users/soongbren/orgs",
"repos_url": "https://api.github.com/users/soongbren/repos",
"events_url": "https://api.github.com/users/soongbren/events{/privacy}",
"received_events_url": "https://api.github.com/users/soongbren/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Please only add translated files in that folder. There is no need to copy the whole Englih documentation.",
"translated some sections of the _toctree.yml file into Malay language"
] | 1,684 | 1,685 | 1,685 |
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23689/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23689/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23689",
"html_url": "https://github.com/huggingface/transformers/pull/23689",
"diff_url": "https://github.com/huggingface/transformers/pull/23689.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23689.patch",
"merged_at": 1685639847000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23688
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23688/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23688/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23688/events
|
https://github.com/huggingface/transformers/issues/23688
| 1,722,003,391 |
I_kwDOCUB6oc5mo6-_
| 23,688 |
LlamaForCausalLM generate() runtime error when top_p=0
|
{
"login": "tranhungnghiep",
"id": 4527536,
"node_id": "MDQ6VXNlcjQ1Mjc1MzY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4527536?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tranhungnghiep",
"html_url": "https://github.com/tranhungnghiep",
"followers_url": "https://api.github.com/users/tranhungnghiep/followers",
"following_url": "https://api.github.com/users/tranhungnghiep/following{/other_user}",
"gists_url": "https://api.github.com/users/tranhungnghiep/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tranhungnghiep/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tranhungnghiep/subscriptions",
"organizations_url": "https://api.github.com/users/tranhungnghiep/orgs",
"repos_url": "https://api.github.com/users/tranhungnghiep/repos",
"events_url": "https://api.github.com/users/tranhungnghiep/events{/privacy}",
"received_events_url": "https://api.github.com/users/tranhungnghiep/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @gante ",
"Hi @tranhungnghiep -- there is indeed an issue, but not on allowing p=0 :) I will open a PR to fix it (feel free to check the solution there, after it gets linked on this issue)\r\n\r\nPlease note that setting `top_p=0` is the effectively the same as doing `do_sample=False` ⚠️ "
] | 1,684 | 1,686 | 1,686 |
NONE
| null |
### System Info
- `transformers` version: 4.29.2
- Platform: Linux-4.4.0-210-generic-x86_64-with-debian-stretch-sid
- Python version: 3.7.3
- Huggingface_hub version: 0.14.1
- Safetensors version: not installed
- PyTorch version (GPU?): 1.13.1+cu117 (True)
- Tensorflow version (GPU?): 2.5.0 (True)
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
This example follows and modifies the LLaMA documentation at: https://huggingface.co/docs/transformers/v4.29.1/model_doc/llama.
```python
from transformers import AutoTokenizer, LlamaForCausalLM
model = LlamaForCausalLM.from_pretrained(PATH_TO_CONVERTED_WEIGHTS).cuda()
tokenizer = AutoTokenizer.from_pretrained(PATH_TO_CONVERTED_TOKENIZER)
prompt = "Hey, are you conscious? Can you talk to me?"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
# Generate
generate_ids = model.generate(
inputs.input_ids, max_length=1024,
do_sample=True,
temperature=0.6,
top_k=1000,
top_p=0.0, # cause error
repetition_penalty=(1.0 / 0.85),
)
tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
```
>RuntimeError: probability tensor contains either `inf`, `nan` or element < 0
### Expected behavior
`generate()` should check if `top_p==0` and disable it to avoid the numerical error. This behavior would then be the same as when `top_p==1` and consistent with the documentation.
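In the meantime, a workaround sketch (reusing `model` and `inputs` from the reproduction above): either switch to greedy decoding — which is effectively what `top_p=0` asks for — or keep sampling with a small positive `top_p`:

```python
# Option 1: greedy decoding (top_p=0 effectively keeps only the most probable token)
generate_ids = model.generate(inputs.input_ids, max_length=1024, do_sample=False)

# Option 2: keep sampling, but use a small positive top_p instead of exactly 0
generate_ids = model.generate(
    inputs.input_ids,
    max_length=1024,
    do_sample=True,
    temperature=0.6,
    top_k=1000,
    top_p=0.01,
    repetition_penalty=(1.0 / 0.85),
)
```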
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23688/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23688/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23687
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23687/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23687/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23687/events
|
https://github.com/huggingface/transformers/issues/23687
| 1,721,969,755 |
I_kwDOCUB6oc5moyxb
| 23,687 |
[RWKV] Inference memory leak unless use_cache=False is specified
|
{
"login": "rsbf",
"id": 6561598,
"node_id": "MDQ6VXNlcjY1NjE1OTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/6561598?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rsbf",
"html_url": "https://github.com/rsbf",
"followers_url": "https://api.github.com/users/rsbf/followers",
"following_url": "https://api.github.com/users/rsbf/following{/other_user}",
"gists_url": "https://api.github.com/users/rsbf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rsbf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rsbf/subscriptions",
"organizations_url": "https://api.github.com/users/rsbf/orgs",
"repos_url": "https://api.github.com/users/rsbf/repos",
"events_url": "https://api.github.com/users/rsbf/events{/privacy}",
"received_events_url": "https://api.github.com/users/rsbf/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"This is because you are not executing your forward pass under a `torch.no_grad`, so the gradient history blows up via the state of the outputs. Either do this or manually detach the states to avoid this memory use. ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,684 | 1,688 | 1,688 |
NONE
| null |
### System Info
- `transformers` version: 4.29.2
- Platform: macOS-13.3.1-arm64-arm-64bit
- Python version: 3.8.16
- Huggingface_hub version: 0.14.1
- Safetensors version: not installed
- PyTorch version (GPU?): 2.0.1 (False)
- Tensorflow version (GPU?): 2.13.0-rc0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@sgugger
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Memory usage rapidly climbs when running a stripped-down version of the RWKV overview example [here](https://huggingface.co/docs/transformers/model_doc/rwkv#overview), in a loop:
```python
from transformers import AutoTokenizer, RwkvModel
model = RwkvModel.from_pretrained("sgugger/rwkv-430M-pile")
tokenizer = AutoTokenizer.from_pretrained("sgugger/rwkv-430M-pile")
for _ in range(1000):
    inputs = tokenizer("This is an example.", return_tensors="pt")
    # Feed everything to the model
    model(inputs["input_ids"])  # <--- memory leak
```
Passing `use_cache=False` to the forward step solves this, though it's not clear why, since the cached state 'should' be bounded to 5 entries.
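Another way to keep memory stable is to run the loop under `torch.no_grad()`, so no autograd history is retained through the returned state (a sketch of the same loop):

```python
import torch

with torch.no_grad():
    for _ in range(1000):
        inputs = tokenizer("This is an example.", return_tensors="pt")
        model(inputs["input_ids"])  # no gradient graph is kept, so memory stays flat
```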
### Expected behavior
Stable memory usage
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23687/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23687/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23686
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23686/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23686/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23686/events
|
https://github.com/huggingface/transformers/issues/23686
| 1,721,958,186 |
I_kwDOCUB6oc5mov8q
| 23,686 |
support for model.generate with assistant_model / model being load_in_8bit and PeftModel (LoRA)
|
{
"login": "achibb",
"id": 42097962,
"node_id": "MDQ6VXNlcjQyMDk3OTYy",
"avatar_url": "https://avatars.githubusercontent.com/u/42097962?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/achibb",
"html_url": "https://github.com/achibb",
"followers_url": "https://api.github.com/users/achibb/followers",
"following_url": "https://api.github.com/users/achibb/following{/other_user}",
"gists_url": "https://api.github.com/users/achibb/gists{/gist_id}",
"starred_url": "https://api.github.com/users/achibb/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/achibb/subscriptions",
"organizations_url": "https://api.github.com/users/achibb/orgs",
"repos_url": "https://api.github.com/users/achibb/repos",
"events_url": "https://api.github.com/users/achibb/events{/privacy}",
"received_events_url": "https://api.github.com/users/achibb/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @younesbelkada since @gante is on vacation.",
"Same thing here, did not realize this was because of 8-bit.\r\n\r\nFollowing the instructions in the blog post for assisted generation, I run into some issues. (FYI, both the longform_model and assistant_model are 8-bit finetuned versions of OPT, which is the exact same model used in the blog post.)\r\n\r\nFirst, when I do exactly what's in the post:\r\n```\r\n prompt = prompt + \"\\nAnswer:\"\r\n inputs = tokenizer([prompt], return_tensors=\"pt\").to(\"cuda\")\r\n outputs = longform_model.generate(**inputs, assistant_model=assistant_model)\r\n print(tokenizer.batch_decode(outputs, skip_special_tokens=True))\r\n```\r\n\r\nI get an error telling me that assisted generation requires `use_cache=True`. Hmm... weird, and the blog post didn't seem to need to use that argument, but okay, let's try it!\r\n\r\n```\r\n prompt = prompt + \"\\nAnswer:\"\r\n inputs = tokenizer([prompt], return_tensors=\"pt\").to(\"cuda\")\r\n outputs = longform_model.generate(**inputs, assistant_model=assistant_model, use_cache=True)\r\n print(tokenizer.batch_decode(outputs, skip_special_tokens=True))\r\n ```\r\n\r\nThen this happens:\r\n```\r\n---------------------------------------------------------------------------\r\nTypeError Traceback (most recent call last)\r\n<ipython-input-10-e9645bbc79d4> in <module>\r\n----> 1 generate_from_prompt(\"Which is a species of fish? Tope or rope?\")\r\n\r\n<ipython-input-9-14fc80d284ea> in generate_from_prompt(prompt)\r\n 2 prompt = prompt + \"\\nAnswer:\"\r\n 3 inputs = tokenizer([prompt], return_tensors=\"pt\").to(\"cuda\")\r\n----> 4 outputs = longform_model.generate(**inputs, assistant_model=assistant_model, use_cache=True)\r\n 5 print(tokenizer.batch_decode(outputs, skip_special_tokens=True))\r\n\r\n/usr/lib/python3/dist-packages/torch/autograd/grad_mode.py in decorate_context(*args, **kwargs)\r\n 25 def decorate_context(*args, **kwargs):\r\n 26 with self.clone():\r\n---> 27 return func(*args, **kwargs)\r\n 28 return cast(F, decorate_context)\r\n 29 \r\n\r\n~/.local/lib/python3.8/site-packages/transformers/generation/utils.py in generate(self, inputs, generation_config, logits_processor, stopping_criteria, prefix_allowed_tokens_fn, synced_gpus, assistant_model, streamer, **kwargs)\r\n 1493 \r\n 1494 # 12. run assisted generate\r\n-> 1495 return self.assisted_decoding(\r\n 1496 input_ids,\r\n 1497 assistant_model=assistant_model,\r\n\r\n~/.local/lib/python3.8/site-packages/transformers/generation/utils.py in assisted_decoding(self, input_ids, assistant_model, do_sample, logits_processor, logits_warper, stopping_criteria, pad_token_id, eos_token_id, output_attentions, output_hidden_states, output_scores, return_dict_in_generate, synced_gpus, streamer, **model_kwargs)\r\n 4253 # 1.1. use the assistant model to obtain the next candidate logits\r\n 4254 if \"assistant_past_key_values\" in model_kwargs:\r\n-> 4255 prev_seq_len = model_kwargs[\"assistant_past_key_values\"][0][assistant_kv_indexing].shape[-2]\r\n 4256 # `new_token_len` can be 1 or 2 (next token in assistant + last token picked by the larger model)\r\n 4257 new_token_len = candidate_input_ids.shape[1] - prev_seq_len\r\n\r\nTypeError: 'NoneType' object is not subscriptable\r\n```\r\n\r\nI'm using bleeding edge version of Transformers, so I'm curious what I'm doing wrong here, or else maybe this is just a bug.",
"Hey @achibb @andersonbcdefg 👋 First of all, apologies for the delay :)\r\n\r\nI looked at your script, and the root cause for the exception on my end (with transformers v4.30, peft 0.3.0, and torch 2.0.0) was in the execution of the PEFT model -- it had caching set to False. Assisted generation needs caching on both models, so manually setting the config fixed it. This means the `use_cache` argument in `generate()` is not being piped correctly, for which I'll open a PR 🤗 \r\n\r\n______________________________________\r\n(temporary fix until the PR gets merged)\r\n```py\r\nimport torch\r\nfrom peft import PeftModel, PeftConfig\r\nfrom transformers import AutoModelForCausalLM, AutoTokenizer\r\n\r\npeft_model_id = \"aari1995/GermanGPT_dolly_lora_1b5\"\r\nconfig = PeftConfig.from_pretrained(peft_model_id)\r\nassistant_model = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path).to(\"cuda:0\")#, return_dict=True, load_in_8bit=True, device_map='auto')\r\n# Load the Lora model\r\nassistant_model = PeftModel.from_pretrained(assistant_model, peft_model_id).to(\"cuda:0\")\r\nassistant_model.config.use_cache = True\r\n\r\nmodel = AutoModelForCausalLM.from_pretrained(\"malteos/bloom-6b4-clp-german-oasst-v0.1\",load_in_8bit=True, device_map='auto')\r\ntokenizer = AutoTokenizer.from_pretrained(\"malteos/bloom-6b4-clp-german-oasst-v0.1\")\r\n\r\nprompt = \"<|prompter|>Hallo<|endoftext|><|assistant|>\"\r\ninputs = tokenizer(prompt, return_tensors=\"pt\").to(\"cuda:0\")\r\noutputs = assistant_model.generate(**inputs, assistant_model=model, use_cache=True)\r\nprint(tokenizer.batch_decode(outputs, skip_special_tokens=False))\r\n```",
"@gante thanks, however I see a mistake on my side, I accidently switched the models. My generation model is a regular model and my assistant is a peft, so:\r\n\r\n```python\r\nimport torch\r\nfrom peft import PeftModel, PeftConfig\r\nfrom transformers import AutoModelForCausalLM, AutoTokenizer\r\n\r\npeft_model_id = \"aari1995/GermanGPT_dolly_lora_1b5\"\r\nconfig = PeftConfig.from_pretrained(peft_model_id)\r\nassistant_model = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path).to(\"cuda:0\")#, return_dict=True, load_in_8bit=True, device_map='auto')\r\n# Load the Lora model\r\nassistant_model = PeftModel.from_pretrained(assistant_model, peft_model_id).to(\"cuda:0\")\r\nassistant_model.config.use_cache = True\r\n\r\nmodel = AutoModelForCausalLM.from_pretrained(\"malteos/bloom-6b4-clp-german-oasst-v0.1\",load_in_8bit=True, device_map='auto')\r\ntokenizer = AutoTokenizer.from_pretrained(\"malteos/bloom-6b4-clp-german-oasst-v0.1\")\r\n\r\nprompt = \"<|prompter|>Hallo<|endoftext|><|assistant|>\"\r\ninputs = tokenizer(prompt, return_tensors=\"pt\").to(\"cuda:0\")\r\noutputs = model.generate(**inputs, assistant_model=assistant_model, use_cache=True)\r\nprint(tokenizer.batch_decode(outputs, skip_special_tokens=False))\r\n```\r\nHere i get the error:\r\n\r\nRuntimeError: The expanded size of the tensor (11) must match the existing size (12) at non-singleton dimension 2. \r\nTarget sizes: [16, 0, 11]. Tensor sizes: [16, 1, 12]\r\n\r\n\r\n\r\nI hope your vacation was nice!",
"(@achibb looking at your newest comment to determine whether the issue needs to be reopened :) )",
"@achibb the root cause was the different class name, when the model gets loaded with PEFT. See the PR description in #24198 to see how it impacted the script you were trying to run :p \r\n\r\nAfter the PR gets merged, you will be able to run the script you pasted above!",
"It should be sorted now -- try running your script from `main` :)",
"perfect, works now! Thanks :)"
] | 1,684 | 1,686 | 1,686 |
NONE
| null |
### System Info
Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points.
- `transformers` version: 4.29.2
- Platform: Windows-10-10.0.22621-SP0
- Python version: 3.9.16
- Huggingface_hub version: 0.14.1
- Safetensors version: not installed
- PyTorch version (GPU?): 1.13.1+cu116 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes, RTX3090ti
- Using distributed or parallel set-up in script?: idk, I use accelerate via: device_map="auto", and load_in_8bit=True
### Who can help?
@gante
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Reproduce:
1: Set up and load the PEFT model as assistant_model (Bloom with LoRA, load_in_8bit):
```python
import torch
from peft import PeftModel, PeftConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
peft_model_id = "aari1995/GermanGPT_dolly_lora_1b5"
config = PeftConfig.from_pretrained(peft_model_id)
assistant_model = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path).to("cuda:0")#, return_dict=True, load_in_8bit=True, device_map='auto')
# Load the Lora model
assistant_model = PeftModel.from_pretrained(assistant_model, peft_model_id)
```
2: Load Bloom model (load_in_8bit, with or without lora):
```python
model = AutoModelForCausalLM.from_pretrained("malteos/bloom-6b4-clp-german-oasst-v0.1",load_in_8bit=True, device_map='auto')
tokenizer = AutoTokenizer.from_pretrained("malteos/bloom-6b4-clp-german-oasst-v0.1")
```
3. Generate using the PeftModel:
```python
prompt = "<|prompter|>Hallo<|endoftext|><|assistant|>"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda:0")
outputs = assistant_model.generate(**inputs, assistant_model=model,use_cache=True)
print(tokenizer.batch_decode(outputs, skip_special_tokens=False))
```
### Expected behavior
I expected to get assisted generation; however, it gets stuck, for example here:
generation/utils.py
-> 4253 prev_seq_len = model_kwargs["assistant_past_key_values"][0][assistant_kv_indexing].shape[-2]
'NoneType' object is not subscriptable
I suspect it may be due to the fact that I use load_in_8bit=True and also use PeftModel.
Thanks!
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23686/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23686/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23685
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23685/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23685/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23685/events
|
https://github.com/huggingface/transformers/pull/23685
| 1,721,820,324 |
PR_kwDOCUB6oc5RIh05
| 23,685 |
Add albert resources
|
{
"login": "elabongaatuo",
"id": 32382363,
"node_id": "MDQ6VXNlcjMyMzgyMzYz",
"avatar_url": "https://avatars.githubusercontent.com/u/32382363?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/elabongaatuo",
"html_url": "https://github.com/elabongaatuo",
"followers_url": "https://api.github.com/users/elabongaatuo/followers",
"following_url": "https://api.github.com/users/elabongaatuo/following{/other_user}",
"gists_url": "https://api.github.com/users/elabongaatuo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/elabongaatuo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/elabongaatuo/subscriptions",
"organizations_url": "https://api.github.com/users/elabongaatuo/orgs",
"repos_url": "https://api.github.com/users/elabongaatuo/repos",
"events_url": "https://api.github.com/users/elabongaatuo/events{/privacy}",
"received_events_url": "https://api.github.com/users/elabongaatuo/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,684 | 1,687 | 1,687 |
CONTRIBUTOR
| null |
# What does this PR do?
Fixes #20055
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@stevhliu
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23685/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23685/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23685",
"html_url": "https://github.com/huggingface/transformers/pull/23685",
"diff_url": "https://github.com/huggingface/transformers/pull/23685.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23685.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/23684
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23684/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23684/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23684/events
|
https://github.com/huggingface/transformers/pull/23684
| 1,721,707,221 |
PR_kwDOCUB6oc5RII9C
| 23,684 |
[`SAM`] Fixes pipeline and adds a dummy pipeline test
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,684 | 1,684 | 1,684 |
CONTRIBUTOR
| null |
# What does this PR do?
1- Fixes the automatic mask generation pipeline. Currently, on the main branch, the script below
```python
from transformers import pipeline
from PIL import Image
import requests
generator = pipeline("mask-generation", model="facebook/sam-vit-base", device=0)
img_url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg"
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert("RGB")
outputs = generator(raw_image, points_per_batch=64)
```
is broken
2- Adds a dummy pipeline test. I know the pipelines are already tested in tests/pipeline, but those tests are quite slow. Adding a small dummy pipeline test makes it easier for future contributors to make sure they do not break the pipeline, without having to run the entire pipeline testing suite for SAM.
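For illustration, a minimal sketch of what such a lightweight smoke check could look like (the checkpoint, image URL, and assertion below are assumptions for the sketch, not the exact test added in this PR):
```python
# Hypothetical smoke-test sketch -- checkpoint and assertion are assumptions,
# not the actual test added in this PR.
import requests
from PIL import Image

from transformers import pipeline


def test_sam_mask_generation_smoke():
    generator = pipeline("mask-generation", model="facebook/sam-vit-base")
    img_url = (
        "https://huggingface.co/datasets/huggingface/documentation-images"
        "/resolve/main/transformers/tasks/car.jpg"
    )
    raw_image = Image.open(requests.get(img_url, stream=True).raw).convert("RGB")
    outputs = generator(raw_image, points_per_batch=64)
    # Only check that the pipeline runs end-to-end and returns at least one mask.
    assert len(outputs["masks"]) > 0
```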
cc @ArthurZucker @ydshieh @sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23684/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23684/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23684",
"html_url": "https://github.com/huggingface/transformers/pull/23684",
"diff_url": "https://github.com/huggingface/transformers/pull/23684.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23684.patch",
"merged_at": 1684856210000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23683
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23683/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23683/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23683/events
|
https://github.com/huggingface/transformers/pull/23683
| 1,721,706,347 |
PR_kwDOCUB6oc5RIIwJ
| 23,683 |
fix: use bool instead of uint8/byte in Deberta/DebertaV2/SEW-D to make it compatible with TensorRT
|
{
"login": "uchuhimo",
"id": 7040313,
"node_id": "MDQ6VXNlcjcwNDAzMTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7040313?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/uchuhimo",
"html_url": "https://github.com/uchuhimo",
"followers_url": "https://api.github.com/users/uchuhimo/followers",
"following_url": "https://api.github.com/users/uchuhimo/following{/other_user}",
"gists_url": "https://api.github.com/users/uchuhimo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/uchuhimo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/uchuhimo/subscriptions",
"organizations_url": "https://api.github.com/users/uchuhimo/orgs",
"repos_url": "https://api.github.com/users/uchuhimo/repos",
"events_url": "https://api.github.com/users/uchuhimo/events{/privacy}",
"received_events_url": "https://api.github.com/users/uchuhimo/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"I also fix Deberta/SEW-D since they have the similar compatible issue.",
"> Could you add a very small snippet to reproduce this just to show the previous error make sure this actually works?\r\n\r\n@ArthurZucker Use the following code to export DebertaV2 to ONNX file:\r\n\r\n```python\r\n# This script exports DebertaV2 models from HuggingFace directly.\r\nimport gc\r\nimport torch\r\nfrom transformers import DebertaV2Model\r\nfrom transformers.activations import FastGELUActivation\r\nimport mock\r\n\r\n\r\ndef make_log_bucket_position(relative_pos, bucket_size, max_position):\r\n sign = torch.sign(relative_pos.float())\r\n mid = bucket_size // 2\r\n abs_pos = torch.where(\r\n (relative_pos < mid) & (relative_pos > -mid),\r\n torch.tensor(mid - 1).type_as(relative_pos),\r\n torch.abs(relative_pos),\r\n )\r\n log_pos = (\r\n torch.ceil(\r\n torch.log(abs_pos / mid)\r\n / torch.log(torch.tensor((max_position - 1) / mid))\r\n * (mid - 1)\r\n )\r\n + mid\r\n )\r\n bucket_pos = torch.where(\r\n abs_pos <= mid, relative_pos.type_as(log_pos), log_pos * sign\r\n )\r\n return bucket_pos\r\n\r\n\r\n# The following patch convert the input tensor of sign to float32 dtype,\r\n# since sign op in TensorRT does not support int dtype.\r\[email protected](\r\n \"transformers.models.deberta_v2.modeling_deberta_v2.make_log_bucket_position\",\r\n make_log_bucket_position,\r\n)\r\ndef export_deberta(model_name, max_seq_len, fast_gelu=False):\r\n model = DebertaV2Model.from_pretrained(model_name)\r\n\r\n gelu_tag = \"\"\r\n if fast_gelu:\r\n for layer in model.encoder.layer:\r\n layer.intermediate.intermediate_act_fn = FastGELUActivation()\r\n gelu_tag = \"-gelu-tanh\"\r\n\r\n input_ids = torch.zeros((1, max_seq_len // 2), dtype=torch.int)\r\n attention_mask = torch.zeros((1, max_seq_len // 2), dtype=torch.int)\r\n\r\n args = (\r\n input_ids,\r\n {\"attention_mask\": attention_mask},\r\n )\r\n\r\n base_model_name = model_name[model_name.rfind(\"/\") + 1 :]\r\n torch.onnx.export(\r\n model,\r\n args,\r\n f\"{base_model_name}{gelu_tag}.onnx\",\r\n input_names=[\"input_ids\", \"attention_mask\"],\r\n output_names=[\"last_hidden_state\"],\r\n opset_version=13,\r\n dynamic_axes={\r\n \"input_ids\": {0: \"batch\", 1: \"sequence\"},\r\n \"attention_mask\": {0: \"batch\", 1: \"sequence\"},\r\n \"last_hidden_state\": {0: \"batch\", 1: \"sequence\"},\r\n },\r\n )\r\n\r\n\r\nif __name__ == \"__main__\":\r\n export_deberta(\"microsoft/deberta-v3-large\", 4096, True)\r\n```\r\n\r\nUse TensorRT to convert ONNX file to engine file:\r\n\r\n```bash\r\ntrtexec --onnx=deberta-v3-large-gelu-tanh.onnx --explicitBatch --fp16 --shapes=input_ids:1x2048,attention_mask:1x2048 --memPoolSize=workspace:4096 --timingCacheFile=./deberta-v3-large-bs1-seq2048.cache --saveEngine=deberta-v3-large-bs1-seq2048 --buildOnly\r\n```\r\n\r\nYou will see the following error log:\r\n\r\n```\r\n[05/24/2023-03:42:37] [E] [TRT] ModelImporter.cpp:800: While parsing node number 73 [Cast -> \"/encoder/Cast_output_0\"]:\r\n[05/24/2023-03:42:37] [E] [TRT] ModelImporter.cpp:801: --- Begin node ---\r\ninput: \"/encoder/Mul_output_0\"\r\noutput: \"/encoder/Cast_output_0\"\r\nname: \"/encoder/Cast\"\r\nop_type: \"Cast\"\r\nattribute {\r\n name: \"to\"\r\n i: 2\r\n type: INT\r\n}\r\n\r\n[05/24/2023-03:42:37] [E] [TRT] ModelImporter.cpp:802: --- End node ---\r\n[05/24/2023-03:42:37] [E] [TRT] ModelImporter.cpp:804: ERROR: ModelImporter.cpp:239 In function parseNode:\r\n[8] Assertion failed: legalUINT8 && \"TensorRT does not support UINT8 types for intermediate tensors!\"\r\n[05/24/2023-03:42:37] [E] Failed to parse onnx 
file\r\n[05/24/2023-03:42:38] [I] Finished parsing network model. Parse time: 15.5136\r\n[05/24/2023-03:42:38] [E] Parsing model failed\r\n[05/24/2023-03:42:38] [E] Failed to create engine from model or file.\r\n[05/24/2023-03:42:38] [E] Engine set up failed\r\n```\r\n\r\nAfter used this MR, the error above will be gone."
] | 1,684 | 1,684 | 1,684 |
CONTRIBUTOR
| null |
# What does this PR do?
TensorRT cannot accept an ONNX graph with uint8/byte intermediate tensors. This PR uses bool tensors instead of uint8/byte tensors to make the exported ONNX file compatible with TensorRT.
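As a rough, hypothetical illustration of the kind of change involved (the tensors below are made up for the sketch and are not the actual Deberta/SEW-D code):
```python
import torch

# Made-up attention mask, purely for illustration.
attention_mask = torch.tensor([[1, 1, 1, 0]])

# Before: a uint8/byte intermediate tensor. ONNX graphs containing such
# intermediates are rejected by TensorRT ("TensorRT does not support UINT8
# types for intermediate tensors!").
extended_mask_uint8 = (attention_mask.unsqueeze(1) * attention_mask.unsqueeze(2)).byte()

# After: keep the intermediate mask as bool, which TensorRT accepts.
extended_mask_bool = (attention_mask.unsqueeze(1) * attention_mask.unsqueeze(2)).bool()

# Downstream masking works the same with a bool mask, e.g. via masked_fill.
scores = torch.zeros(1, 4, 4)
scores = scores.masked_fill(~extended_mask_bool, torch.finfo(scores.dtype).min)
```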
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23683/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23683/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23683",
"html_url": "https://github.com/huggingface/transformers/pull/23683",
"diff_url": "https://github.com/huggingface/transformers/pull/23683.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23683.patch",
"merged_at": 1684932464000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23682
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23682/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23682/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23682/events
|
https://github.com/huggingface/transformers/pull/23682
| 1,721,706,082 |
PR_kwDOCUB6oc5RIIsM
| 23,682 |
Fix SAM
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Forgot to mention: kudo to @younesbelkada for spotting this ",
"_The documentation is not available anymore as the PR was closed or merged._",
"The tol used in the tests changed in this PR are not changed in #23656, so nothing to be reverted back."
] | 1,684 | 1,684 | 1,684 |
COLLABORATOR
| null |
# What does this PR do?
Some failures were introduced in #23656. I only checked the TF tests, not the PT tests. Sorry.
cc @Rocketknight1 : it's likely a hardware difference. Always nice if we can get the results from GCP VM, but I understand it makes your workflow a bit difficult.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23682/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23682/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23682",
"html_url": "https://github.com/huggingface/transformers/pull/23682",
"diff_url": "https://github.com/huggingface/transformers/pull/23682.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23682.patch",
"merged_at": 1684846118000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23681
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23681/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23681/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23681/events
|
https://github.com/huggingface/transformers/pull/23681
| 1,721,665,294 |
PR_kwDOCUB6oc5RH_g6
| 23,681 |
Fix sagemaker DP/MP
|
{
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Hello, devs. I encountered this error on Google Colab (Python 3.10) and transformers 4.30.1.\r\n\r\n```\r\n❱ 3386 │ │ │ self.args.distributed_state is None and self.local_rank ! \r\n\r\nAttributeError: 'CLTrainer' object has no attribute 'local_rank'\r\n```\r\n\r\nIt looks like the line was added by this change.\r\n\r\nBy looking through the code, I suspect `self.local_rank` should be `self.args.local_rank`. I'm fairly new to this library, so apologies if my guess is wrong.",
"Indeed. Would you like to open a PR with the fix?",
"Sure, I can open a PR in a few days. But I'm actually pretty new to this repo, so please feel free to make a quick fix for that.",
"cc @muellerzr might be worth a quick fix.",
"Hi. I just opened #24297. "
] | 1,684 | 1,686 | 1,684 |
CONTRIBUTOR
| null |
# What does this PR do?
Fixes the broken SageMaker tests; verified that it works.
Solves https://github.com/huggingface/transformers/issues/23390
Needs to be coordinated with the Accelerate PR as well.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23681/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23681/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23681",
"html_url": "https://github.com/huggingface/transformers/pull/23681",
"diff_url": "https://github.com/huggingface/transformers/pull/23681.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23681.patch",
"merged_at": 1684957869000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23679
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23679/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23679/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23679/events
|
https://github.com/huggingface/transformers/issues/23679
| 1,721,537,786 |
I_kwDOCUB6oc5mnJT6
| 23,679 |
How to check word ids for BartTokenizer?
|
{
"login": "akesh1235",
"id": 125154243,
"node_id": "U_kgDOB3Wzww",
"avatar_url": "https://avatars.githubusercontent.com/u/125154243?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/akesh1235",
"html_url": "https://github.com/akesh1235",
"followers_url": "https://api.github.com/users/akesh1235/followers",
"following_url": "https://api.github.com/users/akesh1235/following{/other_user}",
"gists_url": "https://api.github.com/users/akesh1235/gists{/gist_id}",
"starred_url": "https://api.github.com/users/akesh1235/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/akesh1235/subscriptions",
"organizations_url": "https://api.github.com/users/akesh1235/orgs",
"repos_url": "https://api.github.com/users/akesh1235/repos",
"events_url": "https://api.github.com/users/akesh1235/events{/privacy}",
"received_events_url": "https://api.github.com/users/akesh1235/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"As the error indicates, you need to use `BartTokenizerFast` to use this method.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,684 | 1,688 | 1,688 |
NONE
| null |
A single word is getting split into multiple tokens, and I tried to check the word ids by using **t.word_ids()**.
I got an error:
**ValueError: word_ids() is not available when using non-fast tokenizers (e.g. instance of a `XxxTokenizerFast` class).**
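A minimal sketch of the fast-tokenizer alternative (assuming the standard `facebook/bart-large` checkpoint):
```python
from transformers import BartTokenizerFast

# word_ids() is only available on the fast (Rust-backed) tokenizers.
tokenizer = BartTokenizerFast.from_pretrained("facebook/bart-large")
encoding = tokenizer("unbelievably long example sentence")

# Maps each token position back to the index of the word it came from
# (None for special tokens such as <s> and </s>).
print(encoding.word_ids())
```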
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23679/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23679/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23676
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23676/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23676/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23676/events
|
https://github.com/huggingface/transformers/issues/23676
| 1,721,454,062 |
I_kwDOCUB6oc5mm03u
| 23,676 |
About Tokenizer
|
{
"login": "Yu-xm",
"id": 72803279,
"node_id": "MDQ6VXNlcjcyODAzMjc5",
"avatar_url": "https://avatars.githubusercontent.com/u/72803279?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Yu-xm",
"html_url": "https://github.com/Yu-xm",
"followers_url": "https://api.github.com/users/Yu-xm/followers",
"following_url": "https://api.github.com/users/Yu-xm/following{/other_user}",
"gists_url": "https://api.github.com/users/Yu-xm/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Yu-xm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Yu-xm/subscriptions",
"organizations_url": "https://api.github.com/users/Yu-xm/orgs",
"repos_url": "https://api.github.com/users/Yu-xm/repos",
"events_url": "https://api.github.com/users/Yu-xm/events{/privacy}",
"received_events_url": "https://api.github.com/users/Yu-xm/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Please use the [forums](https://discuss.huggingface.co/) for questions around the library as we keep issues for bugs and feature requests only.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,684 | 1,688 | 1,688 |
NONE
| null |
### System Info
### Who can help?
@ArthurZucker @younesbelkada @ArthurZucker
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
tokenizer = BertTokenizer.from_pretrained(gpt2_path)
### Expected behavior
I see the following code:
tokenizer = BertTokenizer.from_pretrained(gpt2_path)
This code uses BertTokenizer to read GPT2-related files. What is the difference between the above code and the following two lines?
tokenizer = BertTokenizer.from_pretrained(bert_path)
tokenizer = AutoTokenizer.from_pretrained(gpt2_path)
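For illustration, a small sketch of the class-resolution difference (checkpoints chosen only for the example):
```python
from transformers import AutoTokenizer, BertTokenizer

# AutoTokenizer inspects the checkpoint's config/tokenizer files and returns
# the tokenizer class the model was trained with (a GPT-2 tokenizer here).
auto_tok = AutoTokenizer.from_pretrained("gpt2")
print(type(auto_tok).__name__)  # GPT2TokenizerFast

# An explicit class such as BertTokenizer always applies BERT's WordPiece
# logic to whatever files it is pointed at, so pointing it at a GPT-2
# directory mixes a GPT-2 vocabulary with BERT's tokenization algorithm
# (or simply fails if the expected vocab.txt is missing).
bert_tok = BertTokenizer.from_pretrained("bert-base-uncased")
print(type(bert_tok).__name__)  # BertTokenizer
```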
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23676/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23676/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23675
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23675/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23675/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23675/events
|
https://github.com/huggingface/transformers/issues/23675
| 1,721,345,103 |
I_kwDOCUB6oc5mmaRP
| 23,675 |
[i18n-ms, ISO 639-1] Translating docs to Malay
|
{
"login": "soongbren",
"id": 58584180,
"node_id": "MDQ6VXNlcjU4NTg0MTgw",
"avatar_url": "https://avatars.githubusercontent.com/u/58584180?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/soongbren",
"html_url": "https://github.com/soongbren",
"followers_url": "https://api.github.com/users/soongbren/followers",
"following_url": "https://api.github.com/users/soongbren/following{/other_user}",
"gists_url": "https://api.github.com/users/soongbren/gists{/gist_id}",
"starred_url": "https://api.github.com/users/soongbren/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/soongbren/subscriptions",
"organizations_url": "https://api.github.com/users/soongbren/orgs",
"repos_url": "https://api.github.com/users/soongbren/repos",
"events_url": "https://api.github.com/users/soongbren/events{/privacy}",
"received_events_url": "https://api.github.com/users/soongbren/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
}
] |
open
| false | null |
[] |
[
"I would like to work on the 'Get Started' and 'Tutorial' sections",
"Could you finish editing the template in the first comment on your issue?",
"hi, i have edited the template. Can I clarify, must I translate a few documents first before submitting a pull request? or do I submit a pull request when registering the new language?",
"No you can submit a pull request with just one new translated doc, thanks!"
] | 1,684 | 1,684 | null |
CONTRIBUTOR
| null |
<!--
Note: Please search to see if an issue already exists for the language you are trying to translate.
-->
Hi!
Let's bring the documentation to all the Malay-speaking community 🌐 (currently 0 out of 267 complete)
Who would want to translate? Please follow the 🤗 [TRANSLATING guide](https://github.com/huggingface/transformers/blob/main/docs/TRANSLATING.md). Here is a list of the files ready for translation. Let us know in this issue if you'd like to translate any, and we'll add your name to the list.
Some notes:
* Please translate using an informal tone (imagine you are talking with a friend about transformers 🤗).
* Please translate in a gender-neutral way.
* Add your translations to the folder called `ms` inside the [source folder](https://github.com/huggingface/transformers/tree/main/docs/source).
* Register your translation in `ms/_toctree.yml`; please follow the order of the [English version](https://github.com/huggingface/transformers/blob/main/docs/source/en/_toctree.yml).
* Once you're finished, open a pull request and tag this issue by including #issue-number in the description, where issue-number is the number of this issue. Please ping @ArthurZucker, @sgugger for review.
* 🙋 If you'd like others to help you with the translation, you can also post in the 🤗 [forums](https://discuss.huggingface.co/).
## Get Started section
- [ ] [index.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/index.mdx) https://github.com/huggingface/transformers/pull/20180
- [ ] [quicktour.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/quicktour.mdx) (waiting for initial PR to go through)
- [ ] [installation.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/installation.mdx).
## internal
## main_classes
## model_doc
## tasks
- [ ] asr.mdx
- [ ] audio_classification.mdx
- [ ] document_question_answering.mdx
- [ ] image_captioning.mdx
- [ ] image_classification.mdx
- [ ] language_modeling.mdx
- [ ] masked_language_modeling.mdx
- [ ] monocular_depth_estimation.mdx
- [ ] multiple_choice.mdx
- [ ] object_detection.mdx
- [ ] question_answering.mdx
- [ ] semantic_segmentation.mdx
- [ ] sequence_classification.mdx
- [ ] summarization.mdx
- [ ] text-to-speech.mdx
- [ ] token_classification.mdx
- [ ] translation.mdx
- [ ] video_classification.mdx
- [ ] zero_shot_image_classification.mdx
- [ ] zero_shot_object_detection.mdx
- [ ] _config.py
- [ ] _toctree.yml
- [ ] accelerate.mdx
- [ ] add_new_model.mdx
- [ ] add_new_pipeline.mdx
- [ ] add_tensorflow_model.mdx
- [ ] attention.mdx
- [ ] autoclass_tutorial.mdx
- [ ] benchmarks.mdx
- [ ] bertology.mdx
- [ ] big_models.mdx
- [ ] community.mdx
- [ ] contributing.md
- [ ] create_a_model.mdx
- [ ] custom_models.mdx
- [ ] custom_tools.mdx
- [ ] debugging.mdx
- [ ] fast_tokenizers.mdx
- [ ] generation_strategies.mdx
- [ ] glossary.mdx
- [ ] hpo_train.mdx
- [ ] index.mdx
- [ ] installation.mdx
- [ ] model_sharing.mdx
- [ ] model_summary.mdx
- [ ] multilingual.mdx
- [ ] notebooks.md
- [ ] pad_truncation.mdx
- [ ] perf_hardware.mdx
- [ ] perf_infer_cpu.mdx
- [ ] perf_infer_gpu_many.mdx
- [ ] perf_infer_gpu_one.mdx
- [ ] perf_infer_special.mdx
- [ ] perf_train_tpu.mdx
- [ ] perf_train_tpu_tf.mdx
- [ ] performance.mdx
- [ ] perplexity.mdx
- [ ] philosophy.mdx
- [ ] pipeline_tutorial.mdx
- [ ] pipeline_webserver.mdx
- [ ] pr_checks.mdx
- [ ] preprocessing.mdx
- [ ] quicktour.mdx
- [ ] run_scripts.mdx
- [ ] sagemaker.mdx
- [ ] serialization.mdx
- [ ] task_summary.mdx
- [ ] tasks_explained.mdx
- [ ] testing.mdx
- [ ] tf_xla.mdx
- [ ] tokenizer_summary.mdx
- [ ] torchscript.mdx
- [ ] training.mdx
- [ ] transformers_agents.mdx
- [ ] troubleshooting.mdx
<!--
Keep on adding more as you go 🔥
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23675/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23675/timeline
| null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/23674
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23674/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23674/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23674/events
|
https://github.com/huggingface/transformers/issues/23674
| 1,721,151,456 |
I_kwDOCUB6oc5mlq_g
| 23,674 |
custom stopping_criteria function doesn't receive logits scores (receives None instead)
|
{
"login": "Gandalf098",
"id": 11638396,
"node_id": "MDQ6VXNlcjExNjM4Mzk2",
"avatar_url": "https://avatars.githubusercontent.com/u/11638396?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Gandalf098",
"html_url": "https://github.com/Gandalf098",
"followers_url": "https://api.github.com/users/Gandalf098/followers",
"following_url": "https://api.github.com/users/Gandalf098/following{/other_user}",
"gists_url": "https://api.github.com/users/Gandalf098/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Gandalf098/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Gandalf098/subscriptions",
"organizations_url": "https://api.github.com/users/Gandalf098/orgs",
"repos_url": "https://api.github.com/users/Gandalf098/repos",
"events_url": "https://api.github.com/users/Gandalf098/events{/privacy}",
"received_events_url": "https://api.github.com/users/Gandalf098/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
}
] |
open
| false |
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
] |
[
"cc @gante ",
"Hey @Gandalf098 (the white, I hope ;) )\r\n\r\nBy default, the scores are not initialized and are kept as `None` (see [here](https://github.com/huggingface/transformers/blob/5fa0a1b23b6a79eb646636dbf9a22cb34ff48a74/src/transformers/generation/utils.py#L2308)). To enable score-keeping, you must pass `return_dict_in_generate=True, output_scores=True` to your `.generate()` call.\r\n\r\n____________________________________________\r\n\r\n```py\r\nimport torch\r\nfrom transformers import StoppingCriteriaList, BartForConditionalGeneration, BartTokenizer\r\n\r\ndef custom_stopping_criteria(input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs) -> bool:\r\n print(\"Scores:\", scores)\r\n return False\r\n\r\nstopping_criteria = StoppingCriteriaList([custom_stopping_criteria])\r\n\r\nmodel = BartForConditionalGeneration.from_pretrained(\"facebook/bart-large\", forced_bos_token_id=0)\r\ntok = BartTokenizer.from_pretrained(\"facebook/bart-large\")\r\n\r\nexample_english_phrase = \"UN Chief Says There Is No <mask> in Syria\"\r\nbatch = tok(example_english_phrase, return_tensors=\"pt\")\r\n\r\nmodel.generate(batch[\"input_ids\"], stopping_criteria=stopping_criteria, return_dict_in_generate=True, output_scores=True)\r\n```",
"Hi @gante and @Gandalf098,\r\n\r\nAccording to the `StoppingCriteria.__call__` [signature](https://huggingface.co/docs/transformers/v4.30.0/en/internal/generation_utils#transformers.StoppingCriteria) and to its docstring, `scores` is supposed to be a `torch.FloatTensor`.\r\n> scores (torch.FloatTensor of shape (batch_size, config.vocab_size)) — Prediction scores of a language modeling head.\r\n\r\n\r\nIt makes sense to think of it as the **last** prediction scores of the language modeling head, meaning that the score-keeping here refers not to `score` (optional history of the prediction scores) but to `next_token_scores` (always available last prediction scores - at least for [greedy decoding](https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/generation/utils.py#L2400-L2401), we should verify for other decoding strategies). \r\n\r\nIn that sense, I do think we should correct this point. What do you think @gante?\r\n",
"We might want to build some stopping criteria based on a sequence of tokens/sequence of scores, so this API is more general 🤗 \r\n\r\nWe do need better docs and/or input validation, though, to detect these issues in advance. It is my priority for this month (and I'm keeping this issue open so I don't forget to address this case)"
] | 1,684 | 1,688 | null |
NONE
| null |
### System Info
- `transformers` version: 4.29.2
- Platform: Linux-5.15.107+-x86_64-with-glibc2.31
- Python version: 3.10.11
- Huggingface_hub version: 0.14.1
- Safetensors version: not installed
- PyTorch version (GPU?): 2.0.1+cu118 (True)
- Tensorflow version (GPU?): 2.12.0 (True)
- Flax version (CPU?/GPU?/TPU?): 0.6.9 (gpu)
- Jax version: 0.4.8
- JaxLib version: 0.4.7
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Reproduction Steps:
1. Initialize a BART model & its tokenizer (in my case it is facebook/bart-large)
2. Create a custom stopping_criteria function and add it to a StoppingCriteriaList object
3. Run model.generate() with your stopping criteria list as an argument
The scores argument is always None.
Example code:
```python
import torch
from transformers import StoppingCriteriaList, BartForConditionalGeneration, BartTokenizer
def custom_stopping_criteria(input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs) -> bool:
print ("Scores:", scores)
return False
stopping_criteria = StoppingCriteriaList([custom_stopping_criteria])
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large", forced_bos_token_id=0)
tok = BartTokenizer.from_pretrained("facebook/bart-large")
example_english_phrase = "UN Chief Says There Is No <mask> in Syria"
batch = tok(example_english_phrase, return_tensors="pt")
model.generate(batch["input_ids"], stopping_criteria=stopping_criteria)
```
The above code uses a stopping criterion that just prints the scores value when called (which prints None).
### Expected behavior
The expected behavior is for the scores logits to be populated with values instead of being None (values before or after softmax don't matter).
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23674/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23674/timeline
| null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/23673
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23673/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23673/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23673/events
|
https://github.com/huggingface/transformers/pull/23673
| 1,721,089,056 |
PR_kwDOCUB6oc5RGAqp
| 23,673 |
Bump requests from 2.27.1 to 2.31.0 in /examples/research_projects/decision_transformer
|
{
"login": "dependabot[bot]",
"id": 49699333,
"node_id": "MDM6Qm90NDk2OTkzMzM=",
"avatar_url": "https://avatars.githubusercontent.com/in/29110?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dependabot%5Bbot%5D",
"html_url": "https://github.com/apps/dependabot",
"followers_url": "https://api.github.com/users/dependabot%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/dependabot%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/dependabot%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dependabot%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dependabot%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/dependabot%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/dependabot%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/dependabot%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/dependabot%5Bbot%5D/received_events",
"type": "Bot",
"site_admin": false
}
|
[
{
"id": 1905493434,
"node_id": "MDU6TGFiZWwxOTA1NDkzNDM0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/dependencies",
"name": "dependencies",
"color": "0366d6",
"default": false,
"description": "Pull requests that update a dependency file"
}
] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,684 | 1,684 | 1,684 |
CONTRIBUTOR
| null |
Bumps [requests](https://github.com/psf/requests) from 2.27.1 to 2.31.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/psf/requests/releases">requests's releases</a>.</em></p>
<blockquote>
<h2>v2.31.0</h2>
<h2>2.31.0 (2023-05-22)</h2>
<p><strong>Security</strong></p>
<ul>
<li>
<p>Versions of Requests between v2.3.0 and v2.30.0 are vulnerable to potential
forwarding of <code>Proxy-Authorization</code> headers to destination servers when
following HTTPS redirects.</p>
<p>When proxies are defined with user info (<a href="https://user:pass@proxy:8080">https://user:pass@proxy:8080</a>), Requests
will construct a <code>Proxy-Authorization</code> header that is attached to the request to
authenticate with the proxy.</p>
<p>In cases where Requests receives a redirect response, it previously reattached
the <code>Proxy-Authorization</code> header incorrectly, resulting in the value being
sent through the tunneled connection to the destination server. Users who rely on
defining their proxy credentials in the URL are <em>strongly</em> encouraged to upgrade
to Requests 2.31.0+ to prevent unintentional leakage and rotate their proxy
credentials once the change has been fully deployed.</p>
<p>Users who do not use a proxy or do not supply their proxy credentials through
the user information portion of their proxy URL are not subject to this
vulnerability.</p>
<p>Full details can be read in our <a href="https://github.com/psf/requests/security/advisories/GHSA-j8r2-6x86-q33q">Github Security Advisory</a>
and <a href="https://nvd.nist.gov/vuln/detail/CVE-2023-32681">CVE-2023-32681</a>.</p>
</li>
</ul>
<h2>v2.30.0</h2>
<h2>2.30.0 (2023-05-03)</h2>
<p><strong>Dependencies</strong></p>
<ul>
<li>
<p>⚠️ Added support for urllib3 2.0. ⚠️</p>
<p>This may contain minor breaking changes so we advise careful testing and
reviewing <a href="https://urllib3.readthedocs.io/en/latest/v2-migration-guide.html">https://urllib3.readthedocs.io/en/latest/v2-migration-guide.html</a>
prior to upgrading.</p>
<p>Users who wish to stay on urllib3 1.x can pin to <code>urllib3<2</code>.</p>
</li>
</ul>
<h2>v2.29.0</h2>
<h2>2.29.0 (2023-04-26)</h2>
<p><strong>Improvements</strong></p>
<ul>
<li>Requests now defers chunked requests to the urllib3 implementation to improve
standardization. (<a href="https://redirect.github.com/psf/requests/issues/6226">#6226</a>)</li>
<li>Requests relaxes header component requirements to support bytes/str subclasses. (<a href="https://redirect.github.com/psf/requests/issues/6356">#6356</a>)</li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a href="https://github.com/psf/requests/blob/main/HISTORY.md">requests's changelog</a>.</em></p>
<blockquote>
<h2>2.31.0 (2023-05-22)</h2>
<p><strong>Security</strong></p>
<ul>
<li>
<p>Versions of Requests between v2.3.0 and v2.30.0 are vulnerable to potential
forwarding of <code>Proxy-Authorization</code> headers to destination servers when
following HTTPS redirects.</p>
<p>When proxies are defined with user info (<a href="https://user:pass@proxy:8080">https://user:pass@proxy:8080</a>), Requests
will construct a <code>Proxy-Authorization</code> header that is attached to the request to
authenticate with the proxy.</p>
<p>In cases where Requests receives a redirect response, it previously reattached
the <code>Proxy-Authorization</code> header incorrectly, resulting in the value being
sent through the tunneled connection to the destination server. Users who rely on
defining their proxy credentials in the URL are <em>strongly</em> encouraged to upgrade
to Requests 2.31.0+ to prevent unintentional leakage and rotate their proxy
credentials once the change has been fully deployed.</p>
<p>Users who do not use a proxy or do not supply their proxy credentials through
the user information portion of their proxy URL are not subject to this
vulnerability.</p>
<p>Full details can be read in our <a href="https://github.com/psf/requests/security/advisories/GHSA-j8r2-6x86-q33q">Github Security Advisory</a>
and <a href="https://nvd.nist.gov/vuln/detail/CVE-2023-32681">CVE-2023-32681</a>.</p>
</li>
</ul>
<h2>2.30.0 (2023-05-03)</h2>
<p><strong>Dependencies</strong></p>
<ul>
<li>
<p>⚠️ Added support for urllib3 2.0. ⚠️</p>
<p>This may contain minor breaking changes so we advise careful testing and
reviewing <a href="https://urllib3.readthedocs.io/en/latest/v2-migration-guide.html">https://urllib3.readthedocs.io/en/latest/v2-migration-guide.html</a>
prior to upgrading.</p>
<p>Users who wish to stay on urllib3 1.x can pin to <code>urllib3<2</code>.</p>
</li>
</ul>
<h2>2.29.0 (2023-04-26)</h2>
<p><strong>Improvements</strong></p>
<ul>
<li>Requests now defers chunked requests to the urllib3 implementation to improve
standardization. (<a href="https://redirect.github.com/psf/requests/issues/6226">#6226</a>)</li>
<li>Requests relaxes header component requirements to support bytes/str subclasses. (<a href="https://redirect.github.com/psf/requests/issues/6356">#6356</a>)</li>
</ul>
<h2>2.28.2 (2023-01-12)</h2>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/psf/requests/commit/147c8511ddbfa5e8f71bbf5c18ede0c4ceb3bba4"><code>147c851</code></a> v2.31.0</li>
<li><a href="https://github.com/psf/requests/commit/74ea7cf7a6a27a4eeb2ae24e162bcc942a6706d5"><code>74ea7cf</code></a> Merge pull request from GHSA-j8r2-6x86-q33q</li>
<li><a href="https://github.com/psf/requests/commit/302225334678490ec66b3614a9dddb8a02c5f4fe"><code>3022253</code></a> test on pypy 3.8 and pypy 3.9 on windows and macos (<a href="https://redirect.github.com/psf/requests/issues/6424">#6424</a>)</li>
<li><a href="https://github.com/psf/requests/commit/b639e66c816514e40604d46f0088fbceec1a5149"><code>b639e66</code></a> test on py3.12 (<a href="https://redirect.github.com/psf/requests/issues/6448">#6448</a>)</li>
<li><a href="https://github.com/psf/requests/commit/d3d504436ef0c2ac7ec8af13738b04dcc8c694be"><code>d3d5044</code></a> Fixed a small typo (<a href="https://redirect.github.com/psf/requests/issues/6452">#6452</a>)</li>
<li><a href="https://github.com/psf/requests/commit/2ad18e0e10e7d7ecd5384c378f25ec8821a10a29"><code>2ad18e0</code></a> v2.30.0</li>
<li><a href="https://github.com/psf/requests/commit/f2629e9e3c7ce3c3c8c025bcd8db551101cbc773"><code>f2629e9</code></a> Remove strict parameter (<a href="https://redirect.github.com/psf/requests/issues/6434">#6434</a>)</li>
<li><a href="https://github.com/psf/requests/commit/87d63de8739263bbe17034fba2285c79780da7e8"><code>87d63de</code></a> v2.29.0</li>
<li><a href="https://github.com/psf/requests/commit/51716c4ef390136b0d4b800ec7665dd5503e64fc"><code>51716c4</code></a> enable the warnings plugin (<a href="https://redirect.github.com/psf/requests/issues/6416">#6416</a>)</li>
<li><a href="https://github.com/psf/requests/commit/a7da1ab3498b10ec3a3582244c94b2845f8a8e71"><code>a7da1ab</code></a> try on ubuntu 22.04 (<a href="https://redirect.github.com/psf/requests/issues/6418">#6418</a>)</li>
<li>Additional commits viewable in <a href="https://github.com/psf/requests/compare/v2.27.1...v2.31.0">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts).
</details>
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23673/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23673/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23673",
"html_url": "https://github.com/huggingface/transformers/pull/23673",
"diff_url": "https://github.com/huggingface/transformers/pull/23673.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23673.patch",
"merged_at": 1684834089000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23672
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23672/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23672/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23672/events
|
https://github.com/huggingface/transformers/issues/23672
| 1,721,026,409 |
I_kwDOCUB6oc5mlMdp
| 23,672 |
Audio-related Transformer
|
{
"login": "chlorane",
"id": 39242468,
"node_id": "MDQ6VXNlcjM5MjQyNDY4",
"avatar_url": "https://avatars.githubusercontent.com/u/39242468?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chlorane",
"html_url": "https://github.com/chlorane",
"followers_url": "https://api.github.com/users/chlorane/followers",
"following_url": "https://api.github.com/users/chlorane/following{/other_user}",
"gists_url": "https://api.github.com/users/chlorane/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chlorane/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chlorane/subscriptions",
"organizations_url": "https://api.github.com/users/chlorane/orgs",
"repos_url": "https://api.github.com/users/chlorane/repos",
"events_url": "https://api.github.com/users/chlorane/events{/privacy}",
"received_events_url": "https://api.github.com/users/chlorane/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @sanchit-gandhi ",
"Hey @chlorane - did you get the `ark`/`scp` files using the Kaldi library? I think in this case you're better off just passing the raw `.wav` files directly to the `transformers` models, rather than first through the Kaldi pre-processing to get `ark`/`scp` and then trying to force them through the `transformers` models. The only information I can find regarding converting `ark`/`scp` to `wav` is this thread: https://groups.google.com/g/kaldi-help/c/t6Ra3uHiDJQ/m/R6e01pF5CwAJ For questions such as this one, you might have more luck posting on the Hugging Face forum, where others in the community can pitch-in based on their experiences: https://discuss.huggingface.co\r\n\r\nNote that all audio models in the `transformers` library are designed to work directly with audio inputs, as per their respective papers. The `ark`/`scp` file formats first convert the raw audio inputs to either MFCC features or some other feature extracted form, thus these aren't compatible with models that expect raw audio inputs.",
"> Hey @chlorane - did you get the `ark`/`scp` files using the Kaldi library? I think in this case you're better off just passing the raw `.wav` files directly to the `transformers` models, rather than first through the Kaldi pre-processing to get `ark`/`scp` and then trying to force them through the `transformers` models. The only information I can find regarding converting `ark`/`scp` to `wav` is this thread: https://groups.google.com/g/kaldi-help/c/t6Ra3uHiDJQ/m/R6e01pF5CwAJ For questions such as this one, you might have more luck posting on the Hugging Face forum, where others in the community can pitch-in based on their experiences: https://discuss.huggingface.co\r\n> \r\n> Note that all audio models in the `transformers` library are designed to work directly with audio inputs, as per their respective papers. The `ark`/`scp` file formats first convert the raw audio inputs to either MFCC features or some other feature extracted form, thus these aren't compatible with models that expect raw audio inputs.\r\n\r\nBecause our dataset has only (full) ark and corresponding scp files. Some wav files in the dataset are not available",
"How did you obtain the `ark`/`scp` files - is this a reversible process? I think in this case converting the files to `.wav` is your best bet",
"> How did you obtain the `ark`/`scp` files - is this a reversible process? I think in this case converting the files to `.wav` is your best bet\r\n\r\nI think this is not very reversible. They are retrieved using Kaldi, but we don't have all the original voice in the dataset",
"Then I'm afraid I don't think it's possible to use these file formats. Have you asked on the Hugging Face forum? You could also check on the Kaldi repo to see if there's any advice there.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Leaving this closed since the audio file issue is related to a file format derived from the Kaldi repository (where I still think is the best place to ask for help!)",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,684 | 1,690 | 1,690 |
NONE
| null |
I'm now trying to use audio-related Transformers, like Conformer, Audio Spectrogram Transformer, or Whisper, to process audio. However, our input consists of ark files (with certain dimensions of audio features) and scp files instead of wav files. I tried to use your library, but it seems to raise errors while processing ark/scp files. Are there any functions to process ark/scp directly, and are there any examples showing their usage? Thanks a lot
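For reference, a minimal sketch of how Kaldi-style feature files are usually read in Python (this assumes the third-party `kaldiio` package; note the arrays it yields are pre-extracted feature matrices, not raw waveforms, so they do not plug directly into Transformers audio models that expect raw audio input):
```python
# Sketch assuming the third-party `kaldiio` package (pip install kaldiio).
import kaldiio

# load_scp returns a lazy, dict-like mapping: utterance id -> numpy array.
feats = kaldiio.load_scp("feats.scp")
for utt_id in feats:
    feature_matrix = feats[utt_id]  # shape: (num_frames, feature_dim)
    print(utt_id, feature_matrix.shape)
    break
```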
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23672/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23672/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23671
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23671/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23671/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23671/events
|
https://github.com/huggingface/transformers/issues/23671
| 1,720,838,209 |
I_kwDOCUB6oc5mkehB
| 23,671 |
AutoTokenizer Encode Error
|
{
"login": "congyingxia",
"id": 26128195,
"node_id": "MDQ6VXNlcjI2MTI4MTk1",
"avatar_url": "https://avatars.githubusercontent.com/u/26128195?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/congyingxia",
"html_url": "https://github.com/congyingxia",
"followers_url": "https://api.github.com/users/congyingxia/followers",
"following_url": "https://api.github.com/users/congyingxia/following{/other_user}",
"gists_url": "https://api.github.com/users/congyingxia/gists{/gist_id}",
"starred_url": "https://api.github.com/users/congyingxia/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/congyingxia/subscriptions",
"organizations_url": "https://api.github.com/users/congyingxia/orgs",
"repos_url": "https://api.github.com/users/congyingxia/repos",
"events_url": "https://api.github.com/users/congyingxia/events{/privacy}",
"received_events_url": "https://api.github.com/users/congyingxia/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hey! You are using an old version of the tokenizer. You should be using the one available [here](https://huggingface.co/huggyllama/llama-7b). This issue was already fixed. \r\n\r\nAutoTokenizer has to convert the slow tokenizer to a fast one, which takes of course a lot of time since the model was not saved on the shared repo. ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,684 | 1,688 | 1,688 |
NONE
| null |
### System Info
For the LlamaTokenizer, I get the correct encoding result when loading directly from LlamaTokenizer, but the results are incorrect when using AutoTokenizer. Another issue is that loading via AutoTokenizer is much slower than directly loading the LlamaTokenizer: it takes around 4 minutes to load the tokenizer from the path when using AutoTokenizer, while it only takes one second when directly using the LlamaTokenizer.
### Who can help?
@ArthurZucker
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
Python version: 3.8.16
transformers version: 4.28.1
Follow the given example:
```
from transformers import LlamaTokenizer, AutoTokenizer
model_path = 'openlm-research/open_llama_7b_700bt_preview'
str = ' is embarassed, because Samantha made snide comments about the shirt Rebecca was wearing.'
tokenizer1 = LlamaTokenizer.from_pretrained(model_path)
tokenizer2 = AutoTokenizer.from_pretrained(model_path)
ret1 = tokenizer1.encode(str, add_special_tokens=False)
ret2 = tokenizer2.encode(str, add_special_tokens=False)
print(ret1)
print(ret2)
```
### Expected behavior
ret1: [322, 2661, 285, 14363, 31844, 906, 23982, 985, 3668, 483, 4309, 562, 266, 13803, 15136, 393, 7732, 31843]
ret2: [31822, 322, 2661, 285, 14363, 31844, 906, 23982, 985, 3668, 483, 4309, 562, 266, 13803, 15136, 393, 7732, 31843]
ret1 is the expected output and ret2 is the incorrect result from AutoTokenizer. AutoTokenizer adds an additional token, 31822 (which is a space token), to the encoding results.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23671/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23671/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23670
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23670/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23670/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23670/events
|
https://github.com/huggingface/transformers/pull/23670
| 1,720,783,140 |
PR_kwDOCUB6oc5RE8Hv
| 23,670 |
Bump requests from 2.22.0 to 2.31.0 in /examples/research_projects/visual_bert
|
{
"login": "dependabot[bot]",
"id": 49699333,
"node_id": "MDM6Qm90NDk2OTkzMzM=",
"avatar_url": "https://avatars.githubusercontent.com/in/29110?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dependabot%5Bbot%5D",
"html_url": "https://github.com/apps/dependabot",
"followers_url": "https://api.github.com/users/dependabot%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/dependabot%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/dependabot%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dependabot%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dependabot%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/dependabot%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/dependabot%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/dependabot%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/dependabot%5Bbot%5D/received_events",
"type": "Bot",
"site_admin": false
}
|
[
{
"id": 1905493434,
"node_id": "MDU6TGFiZWwxOTA1NDkzNDM0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/dependencies",
"name": "dependencies",
"color": "0366d6",
"default": false,
"description": "Pull requests that update a dependency file"
}
] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23670). All of your documentation changes will be reflected on that endpoint."
] | 1,684 | 1,684 | 1,684 |
CONTRIBUTOR
| null |
Bumps [requests](https://github.com/psf/requests) from 2.22.0 to 2.31.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/psf/requests/releases">requests's releases</a>.</em></p>
<blockquote>
<h2>v2.31.0</h2>
<h2>2.31.0 (2023-05-22)</h2>
<p><strong>Security</strong></p>
<ul>
<li>
<p>Versions of Requests between v2.3.0 and v2.30.0 are vulnerable to potential
forwarding of <code>Proxy-Authorization</code> headers to destination servers when
following HTTPS redirects.</p>
<p>When proxies are defined with user info (<a href="https://user:pass@proxy:8080">https://user:pass@proxy:8080</a>), Requests
will construct a <code>Proxy-Authorization</code> header that is attached to the request to
authenticate with the proxy.</p>
<p>In cases where Requests receives a redirect response, it previously reattached
the <code>Proxy-Authorization</code> header incorrectly, resulting in the value being
sent through the tunneled connection to the destination server. Users who rely on
defining their proxy credentials in the URL are <em>strongly</em> encouraged to upgrade
to Requests 2.31.0+ to prevent unintentional leakage and rotate their proxy
credentials once the change has been fully deployed.</p>
<p>Users who do not use a proxy or do not supply their proxy credentials through
the user information portion of their proxy URL are not subject to this
vulnerability.</p>
<p>Full details can be read in our <a href="https://github.com/psf/requests/security/advisories/GHSA-j8r2-6x86-q33q">Github Security Advisory</a>
and <a href="https://nvd.nist.gov/vuln/detail/CVE-2023-32681">CVE-2023-32681</a>.</p>
</li>
</ul>
<h2>v2.30.0</h2>
<h2>2.30.0 (2023-05-03)</h2>
<p><strong>Dependencies</strong></p>
<ul>
<li>
<p>⚠️ Added support for urllib3 2.0. ⚠️</p>
<p>This may contain minor breaking changes so we advise careful testing and
reviewing <a href="https://urllib3.readthedocs.io/en/latest/v2-migration-guide.html">https://urllib3.readthedocs.io/en/latest/v2-migration-guide.html</a>
prior to upgrading.</p>
<p>Users who wish to stay on urllib3 1.x can pin to <code>urllib3<2</code>.</p>
</li>
</ul>
<h2>v2.29.0</h2>
<h2>2.29.0 (2023-04-26)</h2>
<p><strong>Improvements</strong></p>
<ul>
<li>Requests now defers chunked requests to the urllib3 implementation to improve
standardization. (<a href="https://redirect.github.com/psf/requests/issues/6226">#6226</a>)</li>
<li>Requests relaxes header component requirements to support bytes/str subclasses. (<a href="https://redirect.github.com/psf/requests/issues/6356">#6356</a>)</li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a href="https://github.com/psf/requests/blob/main/HISTORY.md">requests's changelog</a>.</em></p>
<blockquote>
<h2>2.31.0 (2023-05-22)</h2>
<p><strong>Security</strong></p>
<ul>
<li>
<p>Versions of Requests between v2.3.0 and v2.30.0 are vulnerable to potential
forwarding of <code>Proxy-Authorization</code> headers to destination servers when
following HTTPS redirects.</p>
<p>When proxies are defined with user info (<a href="https://user:pass@proxy:8080">https://user:pass@proxy:8080</a>), Requests
will construct a <code>Proxy-Authorization</code> header that is attached to the request to
authenticate with the proxy.</p>
<p>In cases where Requests receives a redirect response, it previously reattached
the <code>Proxy-Authorization</code> header incorrectly, resulting in the value being
sent through the tunneled connection to the destination server. Users who rely on
defining their proxy credentials in the URL are <em>strongly</em> encouraged to upgrade
to Requests 2.31.0+ to prevent unintentional leakage and rotate their proxy
credentials once the change has been fully deployed.</p>
<p>Users who do not use a proxy or do not supply their proxy credentials through
the user information portion of their proxy URL are not subject to this
vulnerability.</p>
<p>Full details can be read in our <a href="https://github.com/psf/requests/security/advisories/GHSA-j8r2-6x86-q33q">Github Security Advisory</a>
and <a href="https://nvd.nist.gov/vuln/detail/CVE-2023-32681">CVE-2023-32681</a>.</p>
</li>
</ul>
<h2>2.30.0 (2023-05-03)</h2>
<p><strong>Dependencies</strong></p>
<ul>
<li>
<p>⚠️ Added support for urllib3 2.0. ⚠️</p>
<p>This may contain minor breaking changes so we advise careful testing and
reviewing <a href="https://urllib3.readthedocs.io/en/latest/v2-migration-guide.html">https://urllib3.readthedocs.io/en/latest/v2-migration-guide.html</a>
prior to upgrading.</p>
<p>Users who wish to stay on urllib3 1.x can pin to <code>urllib3<2</code>.</p>
</li>
</ul>
<h2>2.29.0 (2023-04-26)</h2>
<p><strong>Improvements</strong></p>
<ul>
<li>Requests now defers chunked requests to the urllib3 implementation to improve
standardization. (<a href="https://redirect.github.com/psf/requests/issues/6226">#6226</a>)</li>
<li>Requests relaxes header component requirements to support bytes/str subclasses. (<a href="https://redirect.github.com/psf/requests/issues/6356">#6356</a>)</li>
</ul>
<h2>2.28.2 (2023-01-12)</h2>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/psf/requests/commit/147c8511ddbfa5e8f71bbf5c18ede0c4ceb3bba4"><code>147c851</code></a> v2.31.0</li>
<li><a href="https://github.com/psf/requests/commit/74ea7cf7a6a27a4eeb2ae24e162bcc942a6706d5"><code>74ea7cf</code></a> Merge pull request from GHSA-j8r2-6x86-q33q</li>
<li><a href="https://github.com/psf/requests/commit/302225334678490ec66b3614a9dddb8a02c5f4fe"><code>3022253</code></a> test on pypy 3.8 and pypy 3.9 on windows and macos (<a href="https://redirect.github.com/psf/requests/issues/6424">#6424</a>)</li>
<li><a href="https://github.com/psf/requests/commit/b639e66c816514e40604d46f0088fbceec1a5149"><code>b639e66</code></a> test on py3.12 (<a href="https://redirect.github.com/psf/requests/issues/6448">#6448</a>)</li>
<li><a href="https://github.com/psf/requests/commit/d3d504436ef0c2ac7ec8af13738b04dcc8c694be"><code>d3d5044</code></a> Fixed a small typo (<a href="https://redirect.github.com/psf/requests/issues/6452">#6452</a>)</li>
<li><a href="https://github.com/psf/requests/commit/2ad18e0e10e7d7ecd5384c378f25ec8821a10a29"><code>2ad18e0</code></a> v2.30.0</li>
<li><a href="https://github.com/psf/requests/commit/f2629e9e3c7ce3c3c8c025bcd8db551101cbc773"><code>f2629e9</code></a> Remove strict parameter (<a href="https://redirect.github.com/psf/requests/issues/6434">#6434</a>)</li>
<li><a href="https://github.com/psf/requests/commit/87d63de8739263bbe17034fba2285c79780da7e8"><code>87d63de</code></a> v2.29.0</li>
<li><a href="https://github.com/psf/requests/commit/51716c4ef390136b0d4b800ec7665dd5503e64fc"><code>51716c4</code></a> enable the warnings plugin (<a href="https://redirect.github.com/psf/requests/issues/6416">#6416</a>)</li>
<li><a href="https://github.com/psf/requests/commit/a7da1ab3498b10ec3a3582244c94b2845f8a8e71"><code>a7da1ab</code></a> try on ubuntu 22.04 (<a href="https://redirect.github.com/psf/requests/issues/6418">#6418</a>)</li>
<li>Additional commits viewable in <a href="https://github.com/psf/requests/compare/v2.22.0...v2.31.0">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts).
</details>
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23670/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23670/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23670",
"html_url": "https://github.com/huggingface/transformers/pull/23670",
"diff_url": "https://github.com/huggingface/transformers/pull/23670.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23670.patch",
"merged_at": 1684834290000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23669
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23669/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23669/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23669/events
|
https://github.com/huggingface/transformers/issues/23669
| 1,720,770,660 |
I_kwDOCUB6oc5mkOBk
| 23,669 |
Expose default_to_square parameter in CLIPImageProcessor
|
{
"login": "shubhamgoel27",
"id": 6277335,
"node_id": "MDQ6VXNlcjYyNzczMzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6277335?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shubhamgoel27",
"html_url": "https://github.com/shubhamgoel27",
"followers_url": "https://api.github.com/users/shubhamgoel27/followers",
"following_url": "https://api.github.com/users/shubhamgoel27/following{/other_user}",
"gists_url": "https://api.github.com/users/shubhamgoel27/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shubhamgoel27/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shubhamgoel27/subscriptions",
"organizations_url": "https://api.github.com/users/shubhamgoel27/orgs",
"repos_url": "https://api.github.com/users/shubhamgoel27/repos",
"events_url": "https://api.github.com/users/shubhamgoel27/events{/privacy}",
"received_events_url": "https://api.github.com/users/shubhamgoel27/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @amyeroberts ",
"Hi @shubhamgoel27, \r\n\r\n`default_to_square` is used in `get_size_dict` in order to control the behaviour when converting old configuration values (int, tuples or lists) to the expected dictionary format for the `size` parameter. As such, it's tied to the image processor class and isn't meant to be modified. \r\n\r\nIf I've understood correctly, you'd like to use the CLIPImageProcessor, but not perform resizing or cropping of the images. For all image processors, all transformations can be turned on / off with the `do_xxx` flags either during instantiation or calling. To not resize or crop the input images: \r\n\r\n```python\r\nfrom transformers import CLIPImageProcessor\r\n\r\nimage_processor = CLIPImageProcessor(\"openai/clip-vit-base-patch32\")\r\ninputs = image_processor(images=images, do_resize=False, do_center_crop=False)\r\n```\r\n\r\nNote: if `do_resize=False` and `do_center_crop=False`, then all the input images but be of the same (height, width) dimensions in order to create a batch. ",
"Hey @amyeroberts ,\r\n\r\nThanks for the swift response. \r\n\r\nMy use-case is to not crop the image during the resize step, but still resize it to a smaller size (e.g. 224x244). So if the original image is 576x1024, the resize method would stretch/squeeze whichever dimension necessary and return a 224x224 image. But since the `default_to_square` parameter is hard-coded to `False`, I couldn't find a way to do so using the CLIPImageProcessor.\r\n\r\nP.S. The context around this is that I don't want to crop useful information out from either sides (horizontal or vertical) during the pre-processing stage, as it might have a lot of value for the domain I'm interested in. ",
"@shubhamgoel27 Is there a reason that you specifically want to use CLIP's image processor? All of the image processors are implemented to be aligned with the processing in the model paper, so it's not always possible to adapt it to every need. For your use case, the simplest approach would be to use another model's image processor, specifically ViT's. This image processor does three simple transformations: \r\n* Resizes the images to 224x224\r\n* Rescales the pixel values to be between 0-1\r\n* Normalizes the pixel values with a given image mean and std\r\n\r\nIf it's necessary to have the same normalization constants as those used in CLIP, these ca be passed in when instantiating the class e.g.:\r\n\r\n```python\r\nfrom transformers import ViTImageProcessor\r\nfrom transformers.utils.constants import OPENAI_CLIP_MEAN, OPENAI_CLIP_STD\r\n\r\nimage_processor = ViTImageProcessor(image_mean=OPENAI_CLIP_MEAN, image_std=OPENAI_CLIP_STD)\r\n```",
"@amyeroberts I'm finetuning the VIT component of a CLIP model, so was trying to use `CLIPImageProcessor`. But it looks like the `ViTImageProcessor` is allowing for both height and width in the resize method without using the `default_to_square=False`. So that should most likely be enough for my use-case. Thanks for pointing it out :) \r\n"
] | 1,684 | 1,685 | 1,685 |
NONE
| null |
I'm looking to train an image model without cropping the input's sides (either horizontally or vertically). But I noticed that in this [`CLIPImageProcessor` class](https://github.com/huggingface/transformers/blob/04ab5605fbb4ef207b10bf2772d88c53fc242e83/src/transformers/models/clip/image_processing_clip.py#LL51C9-L51C9), the `default_to_square` parameter is hard-coded to `False`. Is there any way I can still modify this so that my input is not cropped as a result of the resize and center_crop combination of transforms?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23669/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23669/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23668
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23668/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23668/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23668/events
|
https://github.com/huggingface/transformers/pull/23668
| 1,720,750,851 |
PR_kwDOCUB6oc5RE0oj
| 23,668 |
Bump requests from 2.22.0 to 2.31.0 in /examples/research_projects/lxmert
|
{
"login": "dependabot[bot]",
"id": 49699333,
"node_id": "MDM6Qm90NDk2OTkzMzM=",
"avatar_url": "https://avatars.githubusercontent.com/in/29110?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dependabot%5Bbot%5D",
"html_url": "https://github.com/apps/dependabot",
"followers_url": "https://api.github.com/users/dependabot%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/dependabot%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/dependabot%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dependabot%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dependabot%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/dependabot%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/dependabot%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/dependabot%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/dependabot%5Bbot%5D/received_events",
"type": "Bot",
"site_admin": false
}
|
[
{
"id": 1905493434,
"node_id": "MDU6TGFiZWwxOTA1NDkzNDM0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/dependencies",
"name": "dependencies",
"color": "0366d6",
"default": false,
"description": "Pull requests that update a dependency file"
}
] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23668). All of your documentation changes will be reflected on that endpoint."
] | 1,684 | 1,684 | 1,684 |
CONTRIBUTOR
| null |
[//]: # (dependabot-start)
⚠️ **Dependabot is rebasing this PR** ⚠️
Rebasing might not happen immediately, so don't worry if this takes some time.
Note: if you make any changes to this PR yourself, they will take precedence over the rebase.
---
[//]: # (dependabot-end)
Bumps [requests](https://github.com/psf/requests) from 2.22.0 to 2.31.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/psf/requests/releases">requests's releases</a>.</em></p>
<blockquote>
<h2>v2.31.0</h2>
<h2>2.31.0 (2023-05-22)</h2>
<p><strong>Security</strong></p>
<ul>
<li>
<p>Versions of Requests between v2.3.0 and v2.30.0 are vulnerable to potential
forwarding of <code>Proxy-Authorization</code> headers to destination servers when
following HTTPS redirects.</p>
<p>When proxies are defined with user info (<a href="https://user:pass@proxy:8080">https://user:pass@proxy:8080</a>), Requests
will construct a <code>Proxy-Authorization</code> header that is attached to the request to
authenticate with the proxy.</p>
<p>In cases where Requests receives a redirect response, it previously reattached
the <code>Proxy-Authorization</code> header incorrectly, resulting in the value being
sent through the tunneled connection to the destination server. Users who rely on
defining their proxy credentials in the URL are <em>strongly</em> encouraged to upgrade
to Requests 2.31.0+ to prevent unintentional leakage and rotate their proxy
credentials once the change has been fully deployed.</p>
<p>Users who do not use a proxy or do not supply their proxy credentials through
the user information portion of their proxy URL are not subject to this
vulnerability.</p>
<p>Full details can be read in our <a href="https://github.com/psf/requests/security/advisories/GHSA-j8r2-6x86-q33q">Github Security Advisory</a>
and <a href="https://nvd.nist.gov/vuln/detail/CVE-2023-32681">CVE-2023-32681</a>.</p>
</li>
</ul>
<h2>v2.30.0</h2>
<h2>2.30.0 (2023-05-03)</h2>
<p><strong>Dependencies</strong></p>
<ul>
<li>
<p>⚠️ Added support for urllib3 2.0. ⚠️</p>
<p>This may contain minor breaking changes so we advise careful testing and
reviewing <a href="https://urllib3.readthedocs.io/en/latest/v2-migration-guide.html">https://urllib3.readthedocs.io/en/latest/v2-migration-guide.html</a>
prior to upgrading.</p>
<p>Users who wish to stay on urllib3 1.x can pin to <code>urllib3<2</code>.</p>
</li>
</ul>
<h2>v2.29.0</h2>
<h2>2.29.0 (2023-04-26)</h2>
<p><strong>Improvements</strong></p>
<ul>
<li>Requests now defers chunked requests to the urllib3 implementation to improve
standardization. (<a href="https://redirect.github.com/psf/requests/issues/6226">#6226</a>)</li>
<li>Requests relaxes header component requirements to support bytes/str subclasses. (<a href="https://redirect.github.com/psf/requests/issues/6356">#6356</a>)</li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a href="https://github.com/psf/requests/blob/main/HISTORY.md">requests's changelog</a>.</em></p>
<blockquote>
<h2>2.31.0 (2023-05-22)</h2>
<p><strong>Security</strong></p>
<ul>
<li>
<p>Versions of Requests between v2.3.0 and v2.30.0 are vulnerable to potential
forwarding of <code>Proxy-Authorization</code> headers to destination servers when
following HTTPS redirects.</p>
<p>When proxies are defined with user info (<a href="https://user:pass@proxy:8080">https://user:pass@proxy:8080</a>), Requests
will construct a <code>Proxy-Authorization</code> header that is attached to the request to
authenticate with the proxy.</p>
<p>In cases where Requests receives a redirect response, it previously reattached
the <code>Proxy-Authorization</code> header incorrectly, resulting in the value being
sent through the tunneled connection to the destination server. Users who rely on
defining their proxy credentials in the URL are <em>strongly</em> encouraged to upgrade
to Requests 2.31.0+ to prevent unintentional leakage and rotate their proxy
credentials once the change has been fully deployed.</p>
<p>Users who do not use a proxy or do not supply their proxy credentials through
the user information portion of their proxy URL are not subject to this
vulnerability.</p>
<p>Full details can be read in our <a href="https://github.com/psf/requests/security/advisories/GHSA-j8r2-6x86-q33q">Github Security Advisory</a>
and <a href="https://nvd.nist.gov/vuln/detail/CVE-2023-32681">CVE-2023-32681</a>.</p>
</li>
</ul>
<h2>2.30.0 (2023-05-03)</h2>
<p><strong>Dependencies</strong></p>
<ul>
<li>
<p>⚠️ Added support for urllib3 2.0. ⚠️</p>
<p>This may contain minor breaking changes so we advise careful testing and
reviewing <a href="https://urllib3.readthedocs.io/en/latest/v2-migration-guide.html">https://urllib3.readthedocs.io/en/latest/v2-migration-guide.html</a>
prior to upgrading.</p>
<p>Users who wish to stay on urllib3 1.x can pin to <code>urllib3<2</code>.</p>
</li>
</ul>
<h2>2.29.0 (2023-04-26)</h2>
<p><strong>Improvements</strong></p>
<ul>
<li>Requests now defers chunked requests to the urllib3 implementation to improve
standardization. (<a href="https://redirect.github.com/psf/requests/issues/6226">#6226</a>)</li>
<li>Requests relaxes header component requirements to support bytes/str subclasses. (<a href="https://redirect.github.com/psf/requests/issues/6356">#6356</a>)</li>
</ul>
<h2>2.28.2 (2023-01-12)</h2>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/psf/requests/commit/147c8511ddbfa5e8f71bbf5c18ede0c4ceb3bba4"><code>147c851</code></a> v2.31.0</li>
<li><a href="https://github.com/psf/requests/commit/74ea7cf7a6a27a4eeb2ae24e162bcc942a6706d5"><code>74ea7cf</code></a> Merge pull request from GHSA-j8r2-6x86-q33q</li>
<li><a href="https://github.com/psf/requests/commit/302225334678490ec66b3614a9dddb8a02c5f4fe"><code>3022253</code></a> test on pypy 3.8 and pypy 3.9 on windows and macos (<a href="https://redirect.github.com/psf/requests/issues/6424">#6424</a>)</li>
<li><a href="https://github.com/psf/requests/commit/b639e66c816514e40604d46f0088fbceec1a5149"><code>b639e66</code></a> test on py3.12 (<a href="https://redirect.github.com/psf/requests/issues/6448">#6448</a>)</li>
<li><a href="https://github.com/psf/requests/commit/d3d504436ef0c2ac7ec8af13738b04dcc8c694be"><code>d3d5044</code></a> Fixed a small typo (<a href="https://redirect.github.com/psf/requests/issues/6452">#6452</a>)</li>
<li><a href="https://github.com/psf/requests/commit/2ad18e0e10e7d7ecd5384c378f25ec8821a10a29"><code>2ad18e0</code></a> v2.30.0</li>
<li><a href="https://github.com/psf/requests/commit/f2629e9e3c7ce3c3c8c025bcd8db551101cbc773"><code>f2629e9</code></a> Remove strict parameter (<a href="https://redirect.github.com/psf/requests/issues/6434">#6434</a>)</li>
<li><a href="https://github.com/psf/requests/commit/87d63de8739263bbe17034fba2285c79780da7e8"><code>87d63de</code></a> v2.29.0</li>
<li><a href="https://github.com/psf/requests/commit/51716c4ef390136b0d4b800ec7665dd5503e64fc"><code>51716c4</code></a> enable the warnings plugin (<a href="https://redirect.github.com/psf/requests/issues/6416">#6416</a>)</li>
<li><a href="https://github.com/psf/requests/commit/a7da1ab3498b10ec3a3582244c94b2845f8a8e71"><code>a7da1ab</code></a> try on ubuntu 22.04 (<a href="https://redirect.github.com/psf/requests/issues/6418">#6418</a>)</li>
<li>Additional commits viewable in <a href="https://github.com/psf/requests/compare/v2.22.0...v2.31.0">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts).
</details>
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23668/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23668/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23668",
"html_url": "https://github.com/huggingface/transformers/pull/23668",
"diff_url": "https://github.com/huggingface/transformers/pull/23668.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23668.patch",
"merged_at": 1684834315000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23667
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23667/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23667/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23667/events
|
https://github.com/huggingface/transformers/issues/23667
| 1,720,667,952 |
I_kwDOCUB6oc5mj08w
| 23,667 |
Imports in multiline try blocks are not properly ignored when determining the necessary packages for a modeling file
|
{
"login": "dakinggg",
"id": 43149077,
"node_id": "MDQ6VXNlcjQzMTQ5MDc3",
"avatar_url": "https://avatars.githubusercontent.com/u/43149077?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dakinggg",
"html_url": "https://github.com/dakinggg",
"followers_url": "https://api.github.com/users/dakinggg/followers",
"following_url": "https://api.github.com/users/dakinggg/following{/other_user}",
"gists_url": "https://api.github.com/users/dakinggg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dakinggg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dakinggg/subscriptions",
"organizations_url": "https://api.github.com/users/dakinggg/orgs",
"repos_url": "https://api.github.com/users/dakinggg/repos",
"events_url": "https://api.github.com/users/dakinggg/events{/privacy}",
"received_events_url": "https://api.github.com/users/dakinggg/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"I'd make the change myself, but I can't tell where this codepath is tested (implicitly everywhere?), so will leave to a dev more familiar with `transformers` dev process and tests.",
"This is the right fix indeed. I don't think that function is tested yet, but you can add a new test file in `tests/utils/` named `test_dynamic_module_utils` with your failing test if you want to go the extra mile."
] | 1,684 | 1,684 | 1,684 |
CONTRIBUTOR
| null |
### System Info
- `transformers` version: 4.29.2
- Platform: macOS-13.3.1-arm64-arm-64bit
- Python version: 3.10.11
- Huggingface_hub version: 0.14.1
- Safetensors version: not installed
- PyTorch version (GPU?): 2.0.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: N/A
- Using distributed or parallel set-up in script?: N/A
### Who can help?
@sgugger
### Reproduction
The minimal repro is to just call https://github.com/huggingface/transformers/blob/v4.29.2/src/transformers/dynamic_module_utils.py#L118-L134 on a file that has a multiline try import, e.g.
```
try:
from package import function
from package2 import function
except:
pass
```
and run the `get_imports` function on it. The output will be `['package', 'package2']`, when it should be `[]`
### Expected behavior
Imports in multiline try blocks should be ignored when determining what packages a modeling file requires. I believe https://github.com/huggingface/transformers/blob/ba7054533fa455e8b2dd35feb077e0c7aae646b3/src/transformers/dynamic_module_utils.py#L126 just needs to be modified to include `flags=re.MULTILINE | re.DOTALL`.
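A self-contained sketch of the proposed fix (simplified from the actual `get_imports` implementation, so the regexes here are only illustrative):
```python
import re

content = """
try:
    from package import function
    from package2 import function
except:
    pass
"""

# Without re.DOTALL, "." cannot cross newlines, so the multi-line try block is
# never stripped and both imports leak through (['package', 'package2']).
# With MULTILINE | DOTALL the whole block is removed and the result is [].
stripped = re.sub(r"\s*try\s*:.*?except.*?:", "", content, flags=re.MULTILINE | re.DOTALL)
imports = re.findall(r"^\s*import\s+(\S+)\s*$", stripped, flags=re.MULTILINE)
imports += re.findall(r"^\s*from\s+(\S+)\s+import", stripped, flags=re.MULTILINE)
print(sorted({imp.split(".")[0] for imp in imports}))  # -> []
```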
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23667/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23667/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23666
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23666/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23666/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23666/events
|
https://github.com/huggingface/transformers/issues/23666
| 1,720,633,681 |
I_kwDOCUB6oc5mjslR
| 23,666 |
Splitting the transformers dependencies
|
{
"login": "gokceneraslan",
"id": 1140359,
"node_id": "MDQ6VXNlcjExNDAzNTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/1140359?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gokceneraslan",
"html_url": "https://github.com/gokceneraslan",
"followers_url": "https://api.github.com/users/gokceneraslan/followers",
"following_url": "https://api.github.com/users/gokceneraslan/following{/other_user}",
"gists_url": "https://api.github.com/users/gokceneraslan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gokceneraslan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gokceneraslan/subscriptions",
"organizations_url": "https://api.github.com/users/gokceneraslan/orgs",
"repos_url": "https://api.github.com/users/gokceneraslan/repos",
"events_url": "https://api.github.com/users/gokceneraslan/events{/privacy}",
"received_events_url": "https://api.github.com/users/gokceneraslan/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"This is already the case.",
"Oh I didn't know! So `'pip install transformers[torch]'` doesn't install jax or tensorflow?",
"No. You may have it in your environment from other installs, but `pip install transformers[torch]` will only install Transformers and its core dependencies (very light) and torch.",
"Thanks so much!"
] | 1,684 | 1,684 | 1,684 |
NONE
| null |
### Feature request
Right now this Python package has a lot of dependencies, including major DL frameworks like PyTorch, TensorFlow and Jax. This causes some complexity in downstream packages that use `transformers`, e.g. CI/CD environments get gigantic and dependency issues become more likely, e.g.

and

This is a CI exception from PyTorch code using enformer-pytorch, which depends on transformers. Although nothing uses Jax, either in the PyTorch code or in enformer-pytorch, we now have to solve this Jax-related issue.
I was wondering if you can somehow split the dependencies into groups e.g. `'pip install transformers[jax]'` or `'pip install transformers[pytorch]'`. Let me know what you think.
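As noted in the comments above, this split already exists through pip extras; for example (extra names as defined in the project's `setup.py`):
```
pip install transformers[torch]   # core dependencies + PyTorch only
pip install transformers[tf-cpu]  # core dependencies + TensorFlow (CPU) only
pip install transformers[flax]    # core dependencies + JAX/Flax only
```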
### Motivation
I think splitting the dependencies into reasonable groups would improve installation or testing of the downstream packages using transformers.
### Your contribution
I am not able to contribute, but I think my suggestion is relatively simple to implement.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23666/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23666/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23665
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23665/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23665/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23665/events
|
https://github.com/huggingface/transformers/issues/23665
| 1,720,474,817 |
I_kwDOCUB6oc5mjFzB
| 23,665 |
Metas MMS speech recognition
|
{
"login": "flozi00",
"id": 47894090,
"node_id": "MDQ6VXNlcjQ3ODk0MDkw",
"avatar_url": "https://avatars.githubusercontent.com/u/47894090?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/flozi00",
"html_url": "https://github.com/flozi00",
"followers_url": "https://api.github.com/users/flozi00/followers",
"following_url": "https://api.github.com/users/flozi00/following{/other_user}",
"gists_url": "https://api.github.com/users/flozi00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/flozi00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/flozi00/subscriptions",
"organizations_url": "https://api.github.com/users/flozi00/orgs",
"repos_url": "https://api.github.com/users/flozi00/repos",
"events_url": "https://api.github.com/users/flozi00/events{/privacy}",
"received_events_url": "https://api.github.com/users/flozi00/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] |
closed
| false | null |
[] |
[
"Interested Model!! Thank @flozi00 to open request.",
"👀 https://huggingface.co/models?other=mms",
"> :eyes: https://huggingface.co/models?other=mms\r\n\r\nNice! This is the pretrained model. \r\nI am looking to also convert the ASR models; any pointers? \r\n\r\nThe scripts I am looking at (such as [this very useful boi](https://huggingface.co/HfSpeechUtils/convert_wav2vec2_to_hf/blob/main/run_convert.sh)) require dict and HF config but maybe it is the same configuration as `facebook/wav2vec2-large` :thinking: ",
"Leeez go - working on the conversion as we speak :-) ",
"I also made a request in _#23811_. Looking forward to it!",
"PR merged.\r\n\r\nAlso see:\r\n- https://huggingface.co/docs/transformers/main/en/model_doc/mms\r\n- https://github.com/huggingface/transformers/pull/23813\r\n- https://huggingface.co/facebook/mms-1b-all"
] | 1,684 | 1,685 | 1,685 |
CONTRIBUTOR
| null |
### Model description
In their blog post they write that these are 1B-parameter wav2vec 2.0 models, so only a new conversion script should be needed?
A nice alternative to compare against Whisper.
### Open source status
- [ ] The model implementation is available
- [ ] The model weights are available
### Provide useful links for the implementation
https://github.com/facebookresearch/fairseq/tree/main/examples/mms
@sanchit-gandhi
@patrickvonplaten FYI
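For reference, once converted, the checkpoints load through the standard Wav2Vec2 CTC classes; a rough sketch based on the MMS docs linked in the comments above (model id and adapter handling are assumed from those docs, and synthetic audio is used as a stand-in):
```python
import numpy as np
import torch
from transformers import AutoProcessor, Wav2Vec2ForCTC

model_id = "facebook/mms-1b-all"
processor = AutoProcessor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)  # default language adapter is English

audio = np.random.randn(16_000).astype(np.float32)  # stand-in for a 16 kHz waveform
inputs = processor(audio, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(processor.batch_decode(torch.argmax(logits, dim=-1))[0])

# per the MMS docs, other languages are selected via adapters, e.g.:
# processor.tokenizer.set_target_lang("fra"); model.load_adapter("fra")
```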
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23665/reactions",
"total_count": 5,
"+1": 5,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23665/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23664
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23664/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23664/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23664/events
|
https://github.com/huggingface/transformers/pull/23664
| 1,720,235,542 |
PR_kwDOCUB6oc5RC_OP
| 23,664 |
Update all no_trainer with skip_first_batches
|
{
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1936351150,
"node_id": "MDU6TGFiZWwxOTM2MzUxMTUw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Examples",
"name": "Examples",
"color": "d4c5f9",
"default": false,
"description": "Which is related to examples in general"
}
] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23664). All of your documentation changes will be reflected on that endpoint."
] | 1,684 | 1,684 | 1,684 |
CONTRIBUTOR
| null |
# What does this PR do?
This PR updates all `no_trainer` examples to use `skip_first_batches` properly from the `Accelerator`/Accelerate when resuming from a checkpoint
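For context, the resume pattern this switches to looks roughly like the sketch below (a minimal, self-contained illustration of the Accelerate API; `resume_step` is hard-coded here, whereas the examples recover it from the saved checkpoint state):
```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from accelerate import Accelerator

accelerator = Accelerator()
model = torch.nn.Linear(4, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
dataset = TensorDataset(torch.randn(64, 4), torch.randint(0, 2, (64,)))
train_dataloader = DataLoader(dataset, batch_size=8)
model, optimizer, train_dataloader = accelerator.prepare(model, optimizer, train_dataloader)

resume_step = 3  # batches already consumed before the checkpoint was saved
active_dataloader = accelerator.skip_first_batches(train_dataloader, resume_step)
for step, (inputs, labels) in enumerate(active_dataloader, start=resume_step):
    loss = torch.nn.functional.cross_entropy(model(inputs), labels)
    accelerator.backward(loss)
    optimizer.step()
    optimizer.zero_grad()
```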
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23664/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23664/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23664",
"html_url": "https://github.com/huggingface/transformers/pull/23664",
"diff_url": "https://github.com/huggingface/transformers/pull/23664.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23664.patch",
"merged_at": 1684781372000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23663
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23663/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23663/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23663/events
|
https://github.com/huggingface/transformers/pull/23663
| 1,720,229,548 |
PR_kwDOCUB6oc5RC947
| 23,663 |
TF version compatibility fixes
|
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@sgugger I've added a proper framework inference function, put it in `utils/generic` and moved everything over to that. That should catch any edge cases in future, unless one of the frameworks gets renamed entirely :sweat_smile: "
] | 1,684 | 1,684 | 1,684 |
MEMBER
| null |
This PR makes several changes to core methods to support the upcoming 2.13 release of TF and generally futureproof against any other upcoming changes that might happen.
The core source of all of these problems is that Keras has been very mobile inside TensorFlow: Initially we used `tf.python.keras` until they told us this was a deprecated copy of Keras that shouldn't be there, and it got removed. We then switched to `tf.keras`, but now Keras has fully moved into its own library and namespace again. Although this is still mirrored at `tf.keras`, for the newest version of TF we'll just `import keras`.
There are several other related problems where our code assumed that parts of Keras that weren't really part of the public API would stay where they were. Not all of these have caused problems yet (that I know of) but they look very risky to me, and so I made some general fixes. This might surface some hidden bugs!
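For illustration, the kind of version-dependent import this points at (a sketch only; the actual handling in this PR may differ):
```python
from packaging import version
import tensorflow as tf

if version.parse(tf.__version__) >= version.parse("2.13"):
    import keras  # Keras 2.13+ lives in its own top-level package again
else:
    keras = tf.keras  # older TF releases still expose Keras under tf.keras

print(keras.__version__)
```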
Fixes #23352
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23663/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23663/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23663",
"html_url": "https://github.com/huggingface/transformers/pull/23663",
"diff_url": "https://github.com/huggingface/transformers/pull/23663.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23663.patch",
"merged_at": 1684856531000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23662
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23662/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23662/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23662/events
|
https://github.com/huggingface/transformers/pull/23662
| 1,720,107,094 |
PR_kwDOCUB6oc5RCivA
| 23,662 |
Enable prompts on the Hub
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,684 | 1,684 | 1,684 |
COLLABORATOR
| null |
# What does this PR do?
This PR enables users to share prompt templates for the Agent on the Hub by supporting a repo_id instead of a string for prompts. In terms of API I'm still hesitating between
1. Let the user pass the prompt template or a repo ID for both `run_prompt_template` and `chat_prompt_template`, which has the cons of a somewhat brittle check (to determine whether we have a repo ID or an actual prompt) and of repeating the repo twice for a repo that implements both a `run_prompt_template` and a `chat_prompt_template`
2. Add a new argument `prompts_repo_id`, which has the cons of being a new argument in a public API and of requiring some checks if the repo only implements one of the prompts but not both.
I went with 1 in this PR but curious to have your advice.
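A tiny illustration of the kind of brittle check option 1 above implies (purely hypothetical, not the code in this PR):
```python
def looks_like_repo_id(prompt_or_repo: str) -> bool:
    # Heuristic only: a real prompt template contains spaces or newlines,
    # while a Hub repo id ("user/repo") is a single token with at most one "/".
    return (
        " " not in prompt_or_repo
        and "\n" not in prompt_or_repo
        and prompt_or_repo.count("/") <= 1
    )
```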
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23662/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23662/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23662",
"html_url": "https://github.com/huggingface/transformers/pull/23662",
"diff_url": "https://github.com/huggingface/transformers/pull/23662.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23662.patch",
"merged_at": 1684958953000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23660
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23660/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23660/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23660/events
|
https://github.com/huggingface/transformers/issues/23660
| 1,719,983,818 |
I_kwDOCUB6oc5mhN7K
| 23,660 |
TransformerEngine FP8 inference
|
{
"login": "SinanAkkoyun",
"id": 43215895,
"node_id": "MDQ6VXNlcjQzMjE1ODk1",
"avatar_url": "https://avatars.githubusercontent.com/u/43215895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SinanAkkoyun",
"html_url": "https://github.com/SinanAkkoyun",
"followers_url": "https://api.github.com/users/SinanAkkoyun/followers",
"following_url": "https://api.github.com/users/SinanAkkoyun/following{/other_user}",
"gists_url": "https://api.github.com/users/SinanAkkoyun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SinanAkkoyun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SinanAkkoyun/subscriptions",
"organizations_url": "https://api.github.com/users/SinanAkkoyun/orgs",
"repos_url": "https://api.github.com/users/SinanAkkoyun/repos",
"events_url": "https://api.github.com/users/SinanAkkoyun/events{/privacy}",
"received_events_url": "https://api.github.com/users/SinanAkkoyun/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"@SinanAkkoyun have you find the solution how to use transformerengine with Llama?",
"Any updates?"
] | 1,684 | 1,693 | 1,688 |
NONE
| null |
### Feature request
Hi!
Could anyone please help me with using Hugging Face models (LLaMa, or if LLaMa is difficult, MPT-7b) with TransformerEngine (TE) FP8 inference? We really need the speedup.
https://github.com/NVIDIA/TransformerEngine/issues/199
This is a somewhat related issue to this topic.
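For context, the raw TransformerEngine FP8 API looks roughly like the standalone sketch below; it only wraps a TE layer in `fp8_autocast` on an H100-class GPU and is not a LLaMa/MPT port:
```python
import torch
import transformer_engine.pytorch as te
from transformer_engine.common import recipe

# DelayedScaling is the standard FP8 scaling recipe from the TE docs
fp8_recipe = recipe.DelayedScaling(fp8_format=recipe.Format.HYBRID)

layer = te.Linear(4096, 4096, bias=True).cuda()
x = torch.randn(16, 4096, device="cuda")

with te.fp8_autocast(enabled=True, fp8_recipe=fp8_recipe):
    y = layer(x)
print(y.shape)
```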
### Motivation
Faster inference and more specialized tensor operations mean less cost and less latency.
### Your contribution
I would really love to test suggestions out as I have temporary access to an H100 cloud GPU.
I am not proficient enough to port the models myself, which is why I created this issue.
I really appreciate any help, thank you very much.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23660/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23660/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23659
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23659/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23659/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23659/events
|
https://github.com/huggingface/transformers/pull/23659
| 1,719,957,114 |
PR_kwDOCUB6oc5RCCSK
| 23,659 |
Add PerSAM [bis]
|
{
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,684 | 1,684 | 1,684 |
CONTRIBUTOR
| null |
# What does this PR do?
This PR adds support for [PerSAM](https://arxiv.org/abs/2305.03048). Simplification of #23652.
Two optional arguments are introduced:
- attention_similarity
- target_embedding
These are used by PerSAM, a method that enables SAM to be quickly adapted to new concepts.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23659/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23659/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23659",
"html_url": "https://github.com/huggingface/transformers/pull/23659",
"diff_url": "https://github.com/huggingface/transformers/pull/23659.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23659.patch",
"merged_at": 1684834992000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23658
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23658/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23658/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23658/events
|
https://github.com/huggingface/transformers/pull/23658
| 1,719,922,388 |
PR_kwDOCUB6oc5RB6mw
| 23,658 |
Update workflow files
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23658). All of your documentation changes will be reflected on that endpoint."
] | 1,684 | 1,684 | 1,684 |
COLLABORATOR
| null |
# What does this PR do?
Same as #23465 but for daily CI and push CI.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23658/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23658/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23658",
"html_url": "https://github.com/huggingface/transformers/pull/23658",
"diff_url": "https://github.com/huggingface/transformers/pull/23658.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23658.patch",
"merged_at": 1684783612000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23657
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23657/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23657/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23657/events
|
https://github.com/huggingface/transformers/pull/23657
| 1,719,790,120 |
PR_kwDOCUB6oc5RBd36
| 23,657 |
Muellerzr fix deepspeed
|
{
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23657). All of your documentation changes will be reflected on that endpoint."
] | 1,684 | 1,684 | 1,684 |
CONTRIBUTOR
| null |
# What does this PR do?
This PR fixes the slow tests by avoiding a recursion issue with `self.n_gpu`. It has `self.n_gpu` first check whether `self._n_gpu` has been assigned/spawned, and only if not does it call `setup_devices`. This lets us keep the clean `ParallelMode.DISTRIBUTED` check without needing the complex check block in `setup_devices`.
I've confirmed DeepSpeed tests pass.
If we'd rather not do this, then the logic at https://github.com/huggingface/transformers/blob/main/src/transformers/training_args.py#L1732-L1735 would need to be repeated (which can be done, just a bit messy)
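For reference, a minimal sketch of the lazy-initialization pattern described above (illustrative only, not the exact `TrainingArguments` code):
```python
import torch


class TrainingArgumentsSketch:
    _n_gpu = None

    @property
    def n_gpu(self):
        # only run device setup if _n_gpu has not been populated yet,
        # which breaks the recursion between n_gpu and the device setup
        if self._n_gpu is None:
            self._setup_devices()
        return self._n_gpu

    def _setup_devices(self):
        self._n_gpu = torch.cuda.device_count()
```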
Fixes # (issue)
Failing deepspeed tests on nightly
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23657/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23657/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23657",
"html_url": "https://github.com/huggingface/transformers/pull/23657",
"diff_url": "https://github.com/huggingface/transformers/pull/23657.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23657.patch",
"merged_at": 1684768974000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23656
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23656/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23656/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23656/events
|
https://github.com/huggingface/transformers/pull/23656
| 1,719,722,651 |
PR_kwDOCUB6oc5RBPE-
| 23,656 |
Fix SAM tests and use smaller checkpoints
|
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@ydshieh Ah! That test was also sneakily loading `sam-vit-huge`. I've fixed it, it should work fine now.",
"All pass now 🚀 "
] | 1,684 | 1,684 | 1,684 |
MEMBER
| null |
This PR moves all the SAM tests, for both PT and TF, to the `sam-vit-base` checkpoint instead of the `sam-vit-huge` (!) checkpoint they were using before. The huge checkpoint made the tests quite slow and caused OOM issues in TensorFlow.
It also fixes an issue with the `check_pt_tf_equivalence` test in the PyTorch tests, which should now pass correctly.
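As a rough illustration of the change (checkpoint names only; the real diffs touch the test files):
```python
from transformers import SamModel

# before (slow and OOM-prone in TF): "facebook/sam-vit-huge"
# after:
model = SamModel.from_pretrained("facebook/sam-vit-base")
```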
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23656/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23656/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23656",
"html_url": "https://github.com/huggingface/transformers/pull/23656",
"diff_url": "https://github.com/huggingface/transformers/pull/23656.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23656.patch",
"merged_at": 1684777355000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23655
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23655/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23655/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23655/events
|
https://github.com/huggingface/transformers/pull/23655
| 1,719,676,839 |
PR_kwDOCUB6oc5RBFD9
| 23,655 |
Add EnCodec model
|
{
"login": "hollance",
"id": 346853,
"node_id": "MDQ6VXNlcjM0Njg1Mw==",
"avatar_url": "https://avatars.githubusercontent.com/u/346853?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hollance",
"html_url": "https://github.com/hollance",
"followers_url": "https://api.github.com/users/hollance/followers",
"following_url": "https://api.github.com/users/hollance/following{/other_user}",
"gists_url": "https://api.github.com/users/hollance/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hollance/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hollance/subscriptions",
"organizations_url": "https://api.github.com/users/hollance/orgs",
"repos_url": "https://api.github.com/users/hollance/repos",
"events_url": "https://api.github.com/users/hollance/events{/privacy}",
"received_events_url": "https://api.github.com/users/hollance/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] |
closed
| false | null |
[] |
[
"What still needs to be done at this point:\r\n\r\n- remove `assert`s\r\n- remove `einops` stuff: `rearrange`, `repeat`\r\n- `QuantizedResult` -> `QuantizerOutput`\r\n- I don't like the names in the `EncodecOutput` etc classes, so left some TODO items for renaming them\r\n- rename arguments in config file (see TODO items in that file)\r\n- add doc comments for all arguments in config file\r\n- clean up `_linear_overlap_add`, the padding functions, and `_kmeans`\r\n- improve variable names in the modeling code\r\n- get rid of `activation_params` from config\r\n- fix issues with padding on the 48khz model, since the last (incomplete) frame is different than with the original model\r\n- remove all training code; I suggest we throw an exception from all `forward` methods\r\n- doc strings in modeling code\r\n- MDX file\r\n- and probably more stuff\r\n\r\nMost of these are small items.",
"TODO list:\r\n\r\n**Done**\r\n- <s> remove asserts </s>\r\n- <s> remove einops stuff: rearrange, repeat </s>\r\n- <s> QuantizedResult -> QuantizerOutput </s>\r\n- <s> I don't like the names in the EncodecOutput etc classes, so left some TODO items for renaming them </s>\r\n- <s> clean up _linear_overlap_add, the padding functions, and _kmeans </s>\r\n- <s> remove all training code; I suggest we throw an exception from all forward methods </s>\r\n- <s> improve variable names in the modeling code </s>\r\n- <s> get rid of activation_params from config </s>\r\n- <s> rename arguments in config file (see TODO items in that file) </s>\r\n- <s> add doc comments for all arguments in config file </s>\r\n\r\n**Final TODO**:\r\n- doc strings in modeling code\r\n- MDX file\r\n- fix issues with padding on the 48khz model, since the last (incomplete) frame is different than with the original model",
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks for the review. Before asking for a final review:\r\n- [x] Finish designing the modelling integration tests.\r\n- [x] Finish testing all edge cases for the feature extractor.\r\nWorking on this! ",
"Ok, addressed everything! feel free to merge if it's good with you @amyeroberts "
] | 1,684 | 1,686 | 1,686 |
CONTRIBUTOR
| null |
# What does this PR do?
Adds the EnCodec neural codec from the [High Fidelity Neural Audio Compression](https://arxiv.org/abs/2210.13438) paper.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23655/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23655/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23655",
"html_url": "https://github.com/huggingface/transformers/pull/23655",
"diff_url": "https://github.com/huggingface/transformers/pull/23655.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23655.patch",
"merged_at": 1686761844000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23654
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23654/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23654/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23654/events
|
https://github.com/huggingface/transformers/issues/23654
| 1,719,601,176 |
I_kwDOCUB6oc5mfwgY
| 23,654 |
QuestionAnsweringPipeline is never able to truncate the question
|
{
"login": "Marcusntnu",
"id": 31349528,
"node_id": "MDQ6VXNlcjMxMzQ5NTI4",
"avatar_url": "https://avatars.githubusercontent.com/u/31349528?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Marcusntnu",
"html_url": "https://github.com/Marcusntnu",
"followers_url": "https://api.github.com/users/Marcusntnu/followers",
"following_url": "https://api.github.com/users/Marcusntnu/following{/other_user}",
"gists_url": "https://api.github.com/users/Marcusntnu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Marcusntnu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Marcusntnu/subscriptions",
"organizations_url": "https://api.github.com/users/Marcusntnu/orgs",
"repos_url": "https://api.github.com/users/Marcusntnu/repos",
"events_url": "https://api.github.com/users/Marcusntnu/events{/privacy}",
"received_events_url": "https://api.github.com/users/Marcusntnu/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Are you sure it' s OK to truncate questions ? For the actual results of the model ?\r\n\r\nWe can definitely add more control over the truncation process, currently it's quite hardcoded because of all the Squad specific controls: https://github.com/huggingface/transformers/blob/main/src/transformers/pipelines/question_answering.py#L409\r\n\r\nWe could switch to `tokenizer_kwargs` to allow any parameters to be passed.\r\n\r\n@sgugger for confirmation it' s a good idea ?",
"I don't think it is a good idea. This seems like a specific use-case for which you can use the tokenizer and model directly, instead of using the `pipeline`.",
"Understandable 👍"
] | 1,684 | 1,684 | 1,684 |
NONE
| null |
### System Info
I was trying out feeding some in-context examples on the question side of the pipeline, and because of the design of the QuestionAnsweringPipeline it's basically impossible to have truncation applied to the question rather than the context.
The line `question_first = bool(self.tokenizer.padding_side == "right")` in the [docs](https://huggingface.co/transformers/v4.6.0/_modules/transformers/pipelines/question_answering.html) ensures that, whatever you try, actual question truncation is not possible (short of writing a whole new pipeline).
I think this should be easily fixable if it were possible to "force" `truncation=longest`.
tagging @Narsil as it's pipeline related.
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Set up any QAPipeline.
1. Send a long question together with any context such that truncation is required.
2. See: Exception: Truncation error: Sequence to truncate too short to respect the provided max_length
### Expected behavior
It should be possible to pass very long questions, at least when specifying `truncation=longest` (this just gets overridden now).
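Until this is supported in the pipeline, a minimal workaround sketch is to call the tokenizer and model directly so you control the truncation strategy yourself — the checkpoint name below is only an example:
```python
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

checkpoint = "distilbert-base-cased-distilled-squad"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForQuestionAnswering.from_pretrained(checkpoint)

question = "..."  # a very long question, e.g. with in-context examples
context = "The quick brown fox jumps over the lazy dog."

# "longest_first" trims whichever sequence is longer, so an over-long question
# can be truncated instead of (or in addition to) the context
inputs = tokenizer(question, context, truncation="longest_first", max_length=384, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

start = outputs.start_logits.argmax(-1).item()
end = outputs.end_logits.argmax(-1).item()
answer = tokenizer.decode(inputs["input_ids"][0, start : end + 1])
print(answer)
```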
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23654/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23654/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23653
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23653/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23653/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23653/events
|
https://github.com/huggingface/transformers/issues/23653
| 1,719,580,136 |
I_kwDOCUB6oc5mfrXo
| 23,653 |
RWKV - loss.backward() failed
|
{
"login": "LetianLee",
"id": 73881739,
"node_id": "MDQ6VXNlcjczODgxNzM5",
"avatar_url": "https://avatars.githubusercontent.com/u/73881739?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LetianLee",
"html_url": "https://github.com/LetianLee",
"followers_url": "https://api.github.com/users/LetianLee/followers",
"following_url": "https://api.github.com/users/LetianLee/following{/other_user}",
"gists_url": "https://api.github.com/users/LetianLee/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LetianLee/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LetianLee/subscriptions",
"organizations_url": "https://api.github.com/users/LetianLee/orgs",
"repos_url": "https://api.github.com/users/LetianLee/repos",
"events_url": "https://api.github.com/users/LetianLee/events{/privacy}",
"received_events_url": "https://api.github.com/users/LetianLee/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"+1 on this issue. Any update?",
"Checkout [this reply](https://github.com/huggingface/transformers/pull/22797#issuecomment-1546740612). I guess it's the same issue though not looked into it.",
"> Checkout [this reply](https://github.com/huggingface/transformers/pull/22797#issuecomment-1546740612). I guess it's the same issue though not looked into it.\r\n\r\nThanks for the help. I still have the same bug as before. \r\n```\r\n File \"modeling_rwkv.py\", line 783, in forward\r\n hidden_states, state, attentions = block(\r\n File \"/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py\", line 889, in _call_impl\r\n result = self.forward(*input, **kwargs)\r\n File \"modeling_rwkv.py\", line 510, in forward\r\n attention, state = self.attention(self.ln1(hidden), state=state, use_cache=use_cache)\r\n File \"/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py\", line 889, in _call_impl\r\n result = self.forward(*input, **kwargs)\r\n File \"modeling_rwkv.py\", line 436, in forward\r\n rwkv, layer_state = rwkv_linear_attention(\r\n File \"modeling_rwkv.py\", line 377, in rwkv_linear_attention\r\n return rwkv_linear_attention_cpu(time_decay, time_first, key, value, state=state, return_state=return_state)\r\n File \"modeling_rwkv.py\", line 361, in rwkv_linear_attention_cpu\r\n den_state = e1 * den_state + e2\r\n```\r\n\r\nAny idea about this?\r\n",
"I'm experiencing loss.backward() failure when using custom cuda kernel. In other words, whenever the setup branches towards the else path below:\r\n\r\n```\r\n if rwkv_cuda_kernel is None or no_cuda or one_token:\r\n return rwkv_linear_attention_cpu(time_decay, time_first, key, value, state=state, return_state=return_state)\r\n else:\r\n return RwkvLinearAttention.apply(time_decay, time_first, key, value, state, return_state)\r\n```\r\n\r\nloss.backward() throws out an error \"TypeError: backward() takes 2 positional arguments but 3 were given\".\r\nWhen rwkv_linear_attention_cpu is called instead, things work out fine.\r\n\r\nAny ideas on what might contribute to this?",
"Pinging both @sgugger and @younesbelkada as they ported the model ",
"I can confirm the backward fails both on CPU (first error) and on GPU (last error). Diving into this.",
"On CPU a simple workaround is to set `model.train()` (which you would need to do for real training anyway 😅 ), the bug comes from gradients of the state. I'll try to dig more, but it doesn't sounds super urgent.\r\n\r\nFor GPU the fix should be in a PR later today/tomorrow morning.",
"GPU fix was merged in #23774 ",
"Thanks! I have verified that it is working, and the fine-tuning process is also functioning properly after this issue has been fixed."
] | 1,684 | 1,685 | 1,685 |
NONE
| null |
### System Info
- `transformers` version: 4.29.2
- Platform: Linux-5.15.107+-x86_64-with-glibc2.31
- Python version: 3.10.11
- Huggingface_hub version: 0.14.1
- Safetensors version: not installed
- PyTorch version (GPU?): 2.0.1+cu118 (True)
- Tensorflow version (GPU?): 2.12.0 (True)
- Flax version (CPU?/GPU?/TPU?): 0.6.9 (gpu)
- Jax version: 0.4.8
- JaxLib version: 0.4.7
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@ArthurZucker @younesbelkada
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
1. Below is the code from the official example: https://huggingface.co/docs/transformers/main/en/model_doc/rwkv#transformers.RwkvForCausalLM
```
import torch
from transformers import AutoTokenizer, RwkvForCausalLM
tokenizer = AutoTokenizer.from_pretrained("RWKV/rwkv-4-169m-pile")
model = RwkvForCausalLM.from_pretrained("RWKV/rwkv-4-169m-pile")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
loss = outputs.loss
```
2. I only added this line `loss.backward()` to run but it failed:
```
import torch
from transformers import AutoTokenizer, RwkvForCausalLM
tokenizer = AutoTokenizer.from_pretrained("RWKV/rwkv-4-169m-pile")
model = RwkvForCausalLM.from_pretrained("RWKV/rwkv-4-169m-pile")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
loss = outputs.loss
loss.backward()
```
3. Error messages:
```
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
[<ipython-input-7-ffc1d58be8b3>](https://localhost:8080/#) in <cell line: 10>()
8 outputs = model(**inputs, labels=inputs["input_ids"])
9 loss = outputs.loss
---> 10 loss.backward()
1 frames
[/usr/local/lib/python3.10/dist-packages/torch/_tensor.py](https://localhost:8080/#) in backward(self, gradient, retain_graph, create_graph, inputs)
485 inputs=inputs,
486 )
--> 487 torch.autograd.backward(
488 self, gradient, retain_graph, create_graph, inputs=inputs
489 )
[/usr/local/lib/python3.10/dist-packages/torch/autograd/__init__.py](https://localhost:8080/#) in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables, inputs)
198 # some Python versions print out the first line of a multi-line function
199 # calls in the traceback and some print out the last line
--> 200 Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
201 tensors, grad_tensors_, retain_graph, create_graph, inputs,
202 allow_unreachable=True, accumulate_grad=True) # Calls into the C++ engine to run the backward pass
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.FloatTensor [1, 768]], which is output 0 of AsStridedBackward0, is at version 12; expected version 11 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).
```
### Expected behavior
loss.backward() should work out.
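Based on the maintainer comments on this issue, a hedged workaround for the CPU path is to put the model in training mode before the forward pass (which you would do for real training anyway); the GPU path was fixed in #23774:
```python
from transformers import AutoTokenizer, RwkvForCausalLM

tokenizer = AutoTokenizer.from_pretrained("RWKV/rwkv-4-169m-pile")
model = RwkvForCausalLM.from_pretrained("RWKV/rwkv-4-169m-pile")
model.train()  # avoids the in-place state update that breaks autograd in eval mode on CPU

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
outputs.loss.backward()
```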
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23653/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23653/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23652
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23652/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23652/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23652/events
|
https://github.com/huggingface/transformers/pull/23652
| 1,719,567,819 |
PR_kwDOCUB6oc5RAtQV
| 23,652 |
Add PerSAM
|
{
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Ok, will close this PR in favor of modifying `modeling_sam.py`."
] | 1,684 | 1,684 | 1,684 |
CONTRIBUTOR
| null |
# What does this PR do?
This PR adds the PerSAM model.
Question: when you do:
```
from transformers import PerSamModel
model = PerSamModel.from_pretrained("facebook/sam-vit-huge")
```
you get this warning:
```
You are using a model of type sam to instantiate a model of type persam. This is not supported for all configurations of models and can yield errors.
```
I was wondering whether we could suppress this warning. PerSAM uses the exact same weights as the original SAM model and just modifies the forward pass with 2 additional arguments. Currently the model_type is set to "persam" in `PerSamConfig`.
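As a user-level stopgap (not a fix in the library itself), the warning can be hidden by lowering the transformers logging verbosity; a minimal sketch:
```python
from transformers import logging

logging.set_verbosity_error()  # silences warnings such as the sam/persam model-type mismatch
```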
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23652/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23652/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23652",
"html_url": "https://github.com/huggingface/transformers/pull/23652",
"diff_url": "https://github.com/huggingface/transformers/pull/23652.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23652.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/23651
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23651/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23651/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23651/events
|
https://github.com/huggingface/transformers/issues/23651
| 1,719,564,404 |
I_kwDOCUB6oc5mfnh0
| 23,651 |
How to use FSDP or DDP with Seq2SeqTrainer?
|
{
"login": "VafaKnm",
"id": 103993288,
"node_id": "U_kgDOBjLPyA",
"avatar_url": "https://avatars.githubusercontent.com/u/103993288?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/VafaKnm",
"html_url": "https://github.com/VafaKnm",
"followers_url": "https://api.github.com/users/VafaKnm/followers",
"following_url": "https://api.github.com/users/VafaKnm/following{/other_user}",
"gists_url": "https://api.github.com/users/VafaKnm/gists{/gist_id}",
"starred_url": "https://api.github.com/users/VafaKnm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/VafaKnm/subscriptions",
"organizations_url": "https://api.github.com/users/VafaKnm/orgs",
"repos_url": "https://api.github.com/users/VafaKnm/repos",
"events_url": "https://api.github.com/users/VafaKnm/events{/privacy}",
"received_events_url": "https://api.github.com/users/VafaKnm/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"You cannot set yourself the `local_rank` variable in the training arguments. This is done when you launch your script in a distributed fashion with `torchrun`.",
"I removed `local_rank` from training arguments and lunch train script with `torchrun train_script.py` but i got that error again",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,684 | 1,688 | 1,688 |
NONE
| null |
### System Info
```shell
python = 3.10.6 (main, Mar 10 2023, 10:55:28) [GCC 11.3.0]
transformer version = '4.28.1'
torch version = '2.0.0+cu117'
GPUs = 2 * GTX 1080 Ti, each one 11G RAM
cuda information = Cuda compilation tools, release 11.5, V11.5.119, Build cuda_11.5.r11.5/compiler.30672275_0
```
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I have 2 GTX 1080 Ti GPUs (11 GB RAM each) and I want to fine-tune the openai/whisper-small model, which is one of the Hugging Face Transformers models. I also want to use Fully Sharded Data Parallel (FSDP) via Seq2SeqTrainer, but I got an error.
**Here is my code related to data:**
1.
```
def prepare_dataset(batch):
batch["input_features"] = feature_extractor(batch["audio"], sampling_rate=16000).input_features[0]
batch["labels"] = tokenizer(batch["text"]).input_ids
batch["input_features"] = torch.tensor(batch["input_features"])
batch["labels"] = torch.tensor(batch["labels"])
return batch
```
2.
```
train_ds = train_ds.map(prepare_dataset, remove_columns=train_ds.column_names)
val_ds = val_ds.map(prepare_dataset, remove_columns=val_ds.column_names)
```
**this is how i build the model and its parameters:**
1.
```
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small",
activation_dropout=0.1,
attention_dropout=0.1,
dropout=0.1)
```
2.
```
os.environ['RANK'] = '0'
os.environ['WORLD_SIZE'] = '2'
os.environ['MASTER_ADDR'] = 'localhost'
os.environ['MASTER_PORT'] = '12355'
```
3.
```
training_args = Seq2SeqTrainingArguments(
output_dir="/home/whisper_small_16_2_outputs/",
per_device_train_batch_size=8,
gradient_accumulation_steps=2,
learning_rate=1e-5,
warmup_steps=936,
fp16=True,
local_rank=0,
save_strategy='steps',
evaluation_strategy="steps",
gradient_checkpointing=True,
predict_with_generate=True,
generation_max_length=210,
save_steps=600,
eval_steps=300,
logging_steps=300,
num_train_epochs=30,
load_best_model_at_end=True,
metric_for_best_model="wer",
greater_is_better=False,
save_total_limit=5,
fsdp='full_shard',
fsdp_config='/home/fsdp_config.json'
)
```
4.
```
trainer = Seq2SeqTrainer(
args=training_args,
model=model,
train_dataset=train_ds,
eval_dataset=val_ds,
data_collator=data_collator,
compute_metrics=compute_metrics,
tokenizer=processor.feature_extractor,
)
```
5. **the fsdp_config json file:**
```
{
  "fsdp_config": {
    "fsdp_backward_prefetch_policy": "backward_pre",
    "fsdp_forward_prefetch": false,
    "limit_all_gathers": true,
    "xla": false
  }
}
```
**and this is the error I've got:**
```
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
Cell In[21], line 1
----> 1 training_args = Seq2SeqTrainingArguments(
2 output_dir="/home/whisper_small_16_2_outputs/",
3 per_device_train_batch_size=8,
4 gradient_accumulation_steps=2,
5 learning_rate=1e-5,
6 warmup_steps=936,
7 fp16=True,
8 local_rank=0,
9 save_strategy='steps',
10 evaluation_strategy="steps",
11 gradient_checkpointing=True,
12 predict_with_generate=True,
13 generation_max_length=210,
14 save_steps=600,
15 eval_steps=300,
16 logging_steps=300,
17 num_train_epochs=30,
18 load_best_model_at_end=True,
19 metric_for_best_model="wer",
20 greater_is_better=False,
21 save_total_limit=5,
22 fsdp='full_shard',
23 fsdp_config='/home/fsdp_config.json'
24 )
File <string>:115, in __init__(self, output_dir, overwrite_output_dir, do_train, do_eval, do_predict, evaluation_strategy, prediction_loss_only, per_device_train_batch_size, per_device_eval_batch_size, per_gpu_train_batch_size, per_gpu_eval_batch_size, gradient_accumulation_steps, eval_accumulation_steps, eval_delay, learning_rate, weight_decay, adam_beta1, adam_beta2, adam_epsilon, max_grad_norm, num_train_epochs, max_steps, lr_scheduler_type, warmup_ratio, warmup_steps, log_level, log_level_replica, log_on_each_node, logging_dir, logging_strategy, logging_first_step, logging_steps, logging_nan_inf_filter, save_strategy, save_steps, save_total_limit, save_safetensors, save_on_each_node, no_cuda, use_mps_device, seed, data_seed, jit_mode_eval, use_ipex, bf16, fp16, fp16_opt_level, half_precision_backend, bf16_full_eval, fp16_full_eval, tf32, local_rank, xpu_backend, tpu_num_cores, tpu_metrics_debug, debug, dataloader_drop_last, eval_steps, dataloader_num_workers, past_index, run_name, disable_tqdm, remove_unused_columns, label_names, load_best_model_at_end, metric_for_best_model, greater_is_better, ignore_data_skip, sharded_ddp, fsdp, fsdp_min_num_params, fsdp_config, fsdp_transformer_layer_cls_to_wrap, deepspeed, label_smoothing_factor, optim, optim_args, adafactor, group_by_length, length_column_name, report_to, ddp_find_unused_parameters, ddp_bucket_cap_mb, dataloader_pin_memory, skip_memory_metrics, use_legacy_prediction_loop, push_to_hub, resume_from_checkpoint, hub_model_id, hub_strategy, hub_token, hub_private_repo, gradient_checkpointing, include_inputs_for_metrics, fp16_backend, push_to_hub_model_id, push_to_hub_organization, push_to_hub_token, mp_parameters, auto_find_batch_size, full_determinism, torchdynamo, ray_scope, ddp_timeout, torch_compile, torch_compile_backend, torch_compile_mode, sortish_sampler, predict_with_generate, generation_max_length, generation_num_beams, generation_config)
File ~/.local/lib/python3.10/site-packages/transformers/training_args.py:1259, in TrainingArguments.__post_init__(self)
1253 if version.parse(version.parse(torch.__version__).base_version) == version.parse("2.0.0") and self.fp16:
1254 raise ValueError("--optim adamw_torch_fused with --fp16 requires PyTorch>2.0")
1256 if (
1257 self.framework == "pt"
1258 and is_torch_available()
-> 1259 and (self.device.type != "cuda")
1260 and (get_xla_device_type(self.device) != "GPU")
1261 and (self.fp16 or self.fp16_full_eval)
1262 ):
1263 raise ValueError(
1264 "FP16 Mixed precision training with AMP or APEX (`--fp16`) and FP16 half precision evaluation"
1265 " (`--fp16_full_eval`) can only be used on CUDA devices."
1266 )
1268 if (
1269 self.framework == "pt"
1270 and is_torch_available()
(...)
1275 and (self.bf16 or self.bf16_full_eval)
1276 ):
File ~/.local/lib/python3.10/site-packages/transformers/training_args.py:1694, in TrainingArguments.device(self)
1690 """
1691 The device used by this process.
1692 """
1693 requires_backends(self, ["torch"])
-> 1694 return self._setup_devices
File ~/.local/lib/python3.10/site-packages/transformers/utils/generic.py:54, in cached_property.__get__(self, obj, objtype)
52 cached = getattr(obj, attr, None)
53 if cached is None:
---> 54 cached = self.fget(obj)
55 setattr(obj, attr, cached)
56 return cached
File ~/.local/lib/python3.10/site-packages/transformers/training_args.py:1679, in TrainingArguments._setup_devices(self)
1677 torch.distributed.init_process_group(backend=self.xpu_backend, timeout=self.ddp_timeout_delta)
1678 else:
-> 1679 torch.distributed.init_process_group(backend="nccl", timeout=self.ddp_timeout_delta)
1680 device = torch.device("cuda", self.local_rank)
1681 self._n_gpu = 1
File ~/.local/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py:920, in init_process_group(backend, init_method, timeout, world_size, rank, store, group_name, pg_options)
916 barrier()
917 else:
918 # Use store based barrier here since barrier() used a bunch of
919 # default devices and messes up NCCL internal state.
--> 920 _store_based_barrier(rank, store, timeout)
File ~/.local/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py:459, in _store_based_barrier(rank, store, timeout)
456 log_time = time.time()
458 if timedelta(seconds=(time.time() - start)) > timeout:
--> 459 raise RuntimeError(
460 "Timed out initializing process group in store based barrier on "
461 "rank: {}, for key: {} (world_size={}, worker_count={}, timeout={})".format(
462 rank, store_key, world_size, worker_count, timeout
463 )
464 )
466 logger.info(
467 f"Rank {rank}: Completed store-based barrier for key:{store_key} with {world_size} nodes."
468 )
RuntimeError: Timed out initializing process group in store based barrier on rank: 0, for key: store_based_barrier_key:1 (world_size=2, worker_count=1, timeout=0:30:00)
```
### Expected behavior
```shell
I just want to be able to use a bigger batch size for fine-tuning the Whisper-small model with the GPUs I've mentioned. After a little research, I found that I had to use FSDP or DDP, but I just got errors! Can anyone help me?
```
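Following the comment about `local_rank`, a hedged sketch of the intended setup: drop `local_rank` and the manual `RANK`/`WORLD_SIZE`/`MASTER_*` environment variables, keep the FSDP options, and launch the script with `torchrun --nproc_per_node=2 train_script.py` so the process group is initialized for you (only a subset of the arguments is shown; the paths are placeholders):
```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="/home/whisper_small_16_2_outputs/",
    per_device_train_batch_size=8,
    gradient_accumulation_steps=2,
    learning_rate=1e-5,
    fp16=True,
    fsdp="full_shard",
    fsdp_config="/home/fsdp_config.json",
    # no local_rank and no manual distributed environment variables:
    # torchrun sets them per process
)
```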
### Checklist
- [X] I have read the migration guide in the readme. ([pytorch-transformers](https://github.com/huggingface/transformers#migrating-from-pytorch-transformers-to-transformers); [pytorch-pretrained-bert](https://github.com/huggingface/transformers#migrating-from-pytorch-pretrained-bert-to-transformers))
- [X] I checked if a related official extension example runs on my machine.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23651/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23651/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23650
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23650/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23650/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23650/events
|
https://github.com/huggingface/transformers/pull/23650
| 1,719,515,711 |
PR_kwDOCUB6oc5RAhwu
| 23,650 |
Fix accelerate logger bug
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"I'll try looking into it more, however also note that for logging you should use the `PartialState` not accelerator :)",
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,684 | 1,684 | 1,684 |
CONTRIBUTOR
| null |
# What does this PR do?
Fixes a slow test that requires accelerate that [is currently failing](https://github.com/huggingface/transformers/actions/runs/5035387610/jobs/9030918234) with the following error:
```bash
RuntimeError: You must initialize the accelerate state by calling either `PartialState()` or `Accelerator()` before using the logging utility.
```
I suspect this comes from https://github.com/huggingface/accelerate/pull/1446
The fix seems to be to first initialize a dummy accelerator. However, I couldn't reproduce the issue with a simpler snippet; it seems to appear only when a model is created with CPU offloading + multi-GPU.
I tried this snippet:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "bigscience/bloom-1b7"
device_map = {
"transformer.word_embeddings": 0,
"transformer.word_embeddings_layernorm": 0,
"lm_head": 0,
"transformer.h.0": "cpu",
"transformer.h.1": "cpu",
"transformer.h.2": 0,
"transformer.h.3": 0,
"transformer.h.4": 0,
"transformer.h.5": 0,
"transformer.h.6": 0,
"transformer.h.7": 0,
"transformer.h.8": 0,
"transformer.h.9": 1,
"transformer.h.10": 0,
"transformer.h.11": 1,
"transformer.h.12": 0,
"transformer.h.13": 0,
"transformer.h.14": 1,
"transformer.h.15": 0,
"transformer.h.16": 0,
"transformer.h.17": 1,
"transformer.h.18": 1,
"transformer.h.19": 0,
"transformer.h.20": 1,
"transformer.h.21": 1,
"transformer.h.22": 0,
"transformer.h.23": 0,
"transformer.ln_f": 1,
}
model = AutoModelForCausalLM.from_pretrained(
model_id,
device_map=device_map,
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
input_text = "hello"
encoded_input = tokenizer(input_text, return_tensors="pt")
# Check the exactness of the results
output_parallel = model.generate(input_ids=encoded_input["input_ids"].to(0), max_new_tokens=10)
```
But it didn't raise any error; strangely, the error is only raised when the test is run.
I would appreciate any insight @muellerzr @sgugger 🙏
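For context, a minimal sketch of what the error message asks for — initializing the accelerate state before using its logging utility (where exactly this should happen in the failing test is the open question):
```python
from accelerate import PartialState
from accelerate.logging import get_logger

PartialState()  # initializes the accelerate state once per process
logger = get_logger(__name__)
logger.info("accelerate state is initialized, the logging utility now works")
```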
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23650/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23650/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23650",
"html_url": "https://github.com/huggingface/transformers/pull/23650",
"diff_url": "https://github.com/huggingface/transformers/pull/23650.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23650.patch",
"merged_at": 1684762787000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23648
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23648/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23648/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23648/events
|
https://github.com/huggingface/transformers/issues/23648
| 1,719,294,934 |
I_kwDOCUB6oc5melvW
| 23,648 |
Unexpected padding behaviour of `ClapFeatureExtractor`
|
{
"login": "anmol-wiai",
"id": 70334743,
"node_id": "MDQ6VXNlcjcwMzM0NzQz",
"avatar_url": "https://avatars.githubusercontent.com/u/70334743?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/anmol-wiai",
"html_url": "https://github.com/anmol-wiai",
"followers_url": "https://api.github.com/users/anmol-wiai/followers",
"following_url": "https://api.github.com/users/anmol-wiai/following{/other_user}",
"gists_url": "https://api.github.com/users/anmol-wiai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/anmol-wiai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/anmol-wiai/subscriptions",
"organizations_url": "https://api.github.com/users/anmol-wiai/orgs",
"repos_url": "https://api.github.com/users/anmol-wiai/repos",
"events_url": "https://api.github.com/users/anmol-wiai/events{/privacy}",
"received_events_url": "https://api.github.com/users/anmol-wiai/received_events",
"type": "User",
"site_admin": false
}
|
[] |
open
| false | null |
[] |
[
"cc @sanchit-gandhi ",
"Thanks for the clean write-up @anmol-wiai! \r\n\r\nI think splitting arguments make sense for the processor classes (send one set of args to the feature extractor, another set of args to the tokeniser). Previously, I overwrote the most common feature extractor arg to fall out of the shared arg logic: https://github.com/huggingface/transformers/pull/20022 But clearly we need something robust to handle the different sets of possible inputs args for the feature extractor and tokeniser respectively\r\n\r\nProbably what we need to do here to prevent breaking changes is have three input arguments:\r\n1. `feature_extractor_kwargs` -> get sent to the feature extractor\r\n2. `tokenizer_kwargs` -> get sent to the tokeniser\r\n3. `kwargs` -> get sent to both (needed to prevent a breaking change)\r\n\r\nWDYT about this design @amyeroberts @anmol-wiai?",
"Hi @sanchit-gandhi,\r\n\r\n> Probably what we need to do here to prevent breaking changes is have three input arguments:\r\n> \r\n> feature_extractor_kwargs -> get sent to the feature extractor\r\n> tokenizer_kwargs -> get sent to the tokeniser\r\n> kwargs -> get sent to both (needed to prevent a breaking change)\r\n> \r\nI think this is good. One small thing is that `kwargs` can be a little ambiguous. \r\n\r\nIf I have two functions like this:\r\n```python\r\n# version 1\r\ndef run_func1_and_func2_v1(func1_arg, func2_arg, **kwargs):\r\n ...\r\n\r\n# version 2\r\ndef run_func1_and_func2_v2(func1_arg, func2_arg, func1_kwargs, func2_kwargs, **kwargs):\r\n ...\r\n```\r\nWhile it's expected than in version 1, `kwargs` could be passed on to subsequent function calls inside `run_func1_and_func2_v1`, is it so obvious in version 2 as well given that `run_func1_and_func2_v2` already has `func1_kwargs` and `func2_kwargs` arguments? \r\nWould renaming `kwargs` to something like `shared_kwargs`, or `common_kwargs` be more clear? In any case, you should consider deprecating `kwargs`.\r\n\r\n---\r\n\r\nAnother issue that I flagged is about what should `ClapFeatureExtractor` do when it is passed `padding=True` argument. (Sorry for raising two things in a single issue. I got a little confused. I can open a separate issue if that helps.)\r\n\r\nIf you look at how `padding` argument works for `tokenizer` (code around [this](https://github.com/huggingface/transformers/blob/04ab5605fbb4ef207b10bf2772d88c53fc242e83/src/transformers/tokenization_utils_base.py#LL2366C1-L2366C1) line), it looks something like:\r\n```python\r\nif padding is not False:\r\n if padding is True:\r\n # use default strategy - LONGEST\r\n ...\r\n elif not isinstance(padding, PaddingStrategy):\r\n # note that this will raise an error if you pass an non-allowed padding - like \"PAD_WITH_HUNDRED\"\r\n padding_strategy = PaddingStrategy(padding)\r\n elif isinstance(padding, PaddingStrategy):\r\n # do something\r\n ...\r\nelse:\r\n # do not pad\r\n ...\r\n```\r\n\r\nIn contrast to that, the `ClapFeatureExtractor`'s `padding` argument is used like this (code around [this](https://github.com/huggingface/transformers/blob/04ab5605fbb4ef207b10bf2772d88c53fc242e83/src/transformers/models/clap/feature_extraction_clap.py#LL242C20-L242C27) line):\r\n```python\r\nif padding == \"repeat\":\r\n # do something\r\n ...\r\nif padding == \"repeatpad\": # default value of padding = \"repeatpad\"\r\n # do something\r\n ...\r\n\r\n# zero pad otherwise - whatever the value of padding is!\r\nwaveform = np.pad(waveform, (0, max_length - waveform.shape[0]), mode=\"constant\", constant_values=0)\r\n```\r\n\r\nTry this code snippet for example:\r\n```python\r\nfrom datasets import load_dataset\r\nfrom transformers import AutoProcessor\r\n\r\n# load data\r\ndataset = load_dataset(\"ashraq/esc50\")\r\naudio_sample = dataset[\"train\"][\"audio\"][0][\"array\"]\r\n\r\n# load data processor\r\nprocessor = AutoProcessor.from_pretrained(\"laion/clap-htsat-unfused\")\r\n\r\n# pre-process data\r\ninputs1 = processor.feature_extractor(audio_sample, return_tensors=\"pt\", padding=True)\r\ninputs2 = processor.feature_extractor(audio_sample, return_tensors=\"pt\", padding=\"PAD_WITH_HUNDRED\")\r\n\r\nprint((inputs1[\"input_features\"] != inputs2[\"input_features\"]).sum())\r\n# Output: tensor(0)\r\n```\r\n\r\nI think this behaviour is unexpected and this should be made consistent with `tokenizer`.",
"> Would renaming `kwargs` to something like `shared_kwargs`, or `common_kwargs` be more clear?\r\n\r\nI think `kwargs` would be fine if we document the function properly, but don't feel too strongly about keeping the name so fine with switching to `shared_kwargs` if we think it adds clarity over a proper docstring\r\n\r\n> I think this behaviour is unexpected and this should be made consistent with `tokenizer`\r\n\r\nThanks for the clear explanation - the code snippets you've provided are very clean! I think so too - the CLAP processor functionality is a bit unexpected here. Shall we tackle this in a separate PR to more closely follow the `tokenizer` logic?",
"> fine with switching to `shared_kwargs` if we think it adds clarity over a proper docstring\r\n\r\nI think it does but I am okay with whatever you prefer.\r\n\r\n> Shall we tackle this in a separate PR to more closely follow the tokenizer logic?\r\n\r\nYes, these two things are independent and can be dealt with in separate PRs.",
"Awesome @anmol-wiai! Would you like to open a PR to fix one or both of these issues? Happy to guide you through the integration process, sounds like you have a good idea of what needs to be fixed with getting the CLAP processor consistent with the tokenizer 👍",
"Hi @sanchit-gandhi, I'm currently a little busy for the next two weeks. However, I can work on this afterwards if that timeframe suits you.",
"Sounds perfect - feel free to tag me in a PR whenever you get the chance to look at this and I'll get you a review! More than happy to answer any questions / queries here or on the PR, so just ping me if you get stuck 🤗",
"Still think this would be a nice way of clarifying the CLAP feature extractor kwargs! It's all yours @anmol-wiai if you're still up for it!",
"Hey @sanchit-gandhi! Sorry, I have been a little busy lately. I was planning to work on it this weekend. Let me create a PR and then I will reach out to you. ",
"PR still in progress at #24503 "
] | 1,684 | 1,706 | null |
NONE
| null |
### System Info
- `transformers` version: 4.29.2
- Platform: Linux-4.15.0-204-generic-x86_64-with-glibc2.31
- Python version: 3.10.10
- Huggingface_hub version: 0.14.1
- Safetensors version: not installed
- PyTorch version (GPU?): 1.13.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Adding the `padding=True` argument to `ClapFeatureExtractor` changes the padding strategy from the default, `repeatpad`, to constant padding ([code](https://github.com/huggingface/transformers/blob/04ab5605fbb4ef207b10bf2772d88c53fc242e83/src/transformers/models/clap/feature_extraction_clap.py#L248)).
Code adapted from: https://huggingface.co/docs/transformers/model_doc/clap#transformers.ClapModel.forward.example
```python
from datasets import load_dataset
from transformers import AutoProcessor
# load data
dataset = load_dataset("ashraq/esc50")
audio_sample = dataset["train"]["audio"][0]["array"]
# load data processor
processor = AutoProcessor.from_pretrained("laion/clap-htsat-unfused")
# pre-process data
inputs1 = processor.feature_extractor(audio_sample, return_tensors="pt")
inputs2 = processor.feature_extractor(audio_sample, return_tensors="pt", padding=True)
print((inputs1["input_features"] - inputs2["input_features"]).max())
# Output: tensor(119.4260)
```
This becomes a problem, for instance, when using `ClapProcessor`. `ClapProcessor` shares `kwargs` between the `tokenizer` and the `feature_extractor` ([code](https://github.com/huggingface/transformers/blob/04ab5605fbb4ef207b10bf2772d88c53fc242e83/src/transformers/models/clap/processing_clap.py#LL87C9-L87C9)). When using text inputs of different lengths, you need to pass the `padding=True` argument to the `tokenizer`, but doing so changes the behaviour of the `feature_extractor`.
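To make the shared-`kwargs` problem concrete, here is a minimal sketch of the processor-level call (untested; it assumes `ClapProcessor.__call__` accepts `text` and `audios` keywords and forwards the remaining kwargs to both components, as described above):
```python
from datasets import load_dataset
from transformers import AutoProcessor

dataset = load_dataset("ashraq/esc50")
audio_sample = dataset["train"]["audio"][0]["array"]

processor = AutoProcessor.from_pretrained("laion/clap-htsat-unfused")
texts = ["a dog barking", "rain falling on a tin roof"]

# `padding=True` is needed so the tokenizer can batch texts of different lengths,
# but the same kwarg is also forwarded to the feature extractor, silently switching
# its padding strategy away from the default "repeatpad".
inputs = processor(text=texts, audios=[audio_sample, audio_sample], padding=True, return_tensors="pt")
```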
### Expected behavior
1. Either don't allow the `padding=True` argument: assert that its value is one of the allowed values - `repeatpad`, `repeat`, and `pad` in the case of `ClapFeatureExtractor` (see the sketch after this list).
2. Or, use the default padding strategy if `padding=True`.
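A minimal sketch of option (1); the check and error message below are purely illustrative and not existing library code:
```python
ALLOWED_PADDING = ("repeatpad", "repeat", "pad")

def check_clap_padding(padding):
    # Reject booleans and unknown strings instead of silently falling back to zero padding.
    if padding not in ALLOWED_PADDING:
        raise ValueError(f"`padding` must be one of {ALLOWED_PADDING}, got {padding!r}.")
```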
As for sharing `kwargs`, I don't think that's a good idea. Would having two arguments, `tokenizer_kwargs` and `feature_extractor_kwargs` be better?
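As an illustration only (the `tokenizer_kwargs` / `feature_extractor_kwargs` argument names are hypothetical, not an existing `ClapProcessor` API), the routing could look roughly like this:
```python
class ClapProcessorSketch:
    """Hypothetical processor that routes kwargs to the right component."""

    def __init__(self, feature_extractor, tokenizer):
        self.feature_extractor = feature_extractor
        self.tokenizer = tokenizer

    def __call__(self, text=None, audios=None, tokenizer_kwargs=None, feature_extractor_kwargs=None, **shared_kwargs):
        # Shared kwargs go to both components; the component-specific dicts take precedence.
        tok_kwargs = {**shared_kwargs, **(tokenizer_kwargs or {})}
        fe_kwargs = {**shared_kwargs, **(feature_extractor_kwargs or {})}

        encoding = self.tokenizer(text, **tok_kwargs) if text is not None else None
        audio_features = self.feature_extractor(audios, **fe_kwargs) if audios is not None else None
        return encoding, audio_features
```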
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23648/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23648/timeline
| null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/23647
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23647/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23647/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23647/events
|
https://github.com/huggingface/transformers/issues/23647
| 1,719,258,680 |
I_kwDOCUB6oc5mec44
| 23,647 |
Wandb sweeps integration: custom objective
|
{
"login": "BramVanroy",
"id": 2779410,
"node_id": "MDQ6VXNlcjI3Nzk0MTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BramVanroy",
"html_url": "https://github.com/BramVanroy",
"followers_url": "https://api.github.com/users/BramVanroy/followers",
"following_url": "https://api.github.com/users/BramVanroy/following{/other_user}",
"gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions",
"organizations_url": "https://api.github.com/users/BramVanroy/orgs",
"repos_url": "https://api.github.com/users/BramVanroy/repos",
"events_url": "https://api.github.com/users/BramVanroy/events{/privacy}",
"received_events_url": "https://api.github.com/users/BramVanroy/received_events",
"type": "User",
"site_admin": false
}
|
[] |
open
| false | null |
[] |
[
"We are not maintaining the Wandb integration, so you should ping the Wandb team here :-) ",
"Hey @BramVanroy thanks for raising this issue. Here's my response from top of my head:\r\n\r\nScenario 1:\r\n> the objective is to maximize the sum of sari and rougeLsum\r\n\r\nAre you logging this sum to wandb? Say you are logging it to wandb with name `eval/sari_rougelsum` then passing `metric: eval/sari_rougelsum` should work.\r\n\r\nScenario 2:\r\nYou are logging the two metrics separately. Sweeps doesn't support multi-objective optimization today. Say you are using the grid or random search you can use the parallel coordinate plot generated in the sweeps dashboard to find a set of hyperparameters that can optimize two objectives:\r\n\r\n\r\nIn this proxy example say `eval/val_accuracy` and `eval_loss` are two objectives. I can then use this parallel coordinate plot to find range of hparams that maximise the `val_accuracy` and minimise `eval_loss`.\r\n<img width=\"1355\" alt=\"image\" src=\"https://github.com/huggingface/transformers/assets/31141479/1b8b62a4-4be0-4540-b331-f08801708e67\">\r\n\r\n> More clarity about the relationship between a wandb sweep configuration and how the trainer uses wandb as a backend, and the importance of metric and direciton arguments vs. compute_objective.\r\n\r\nI have shared this request internally. Thanks.",
"Hi @ayulockin, thanks for the quick reply!\r\n\r\n> Are you logging this sum to wandb? Say you are logging it to wandb with name eval/sari_rougelsum then passing metric: eval/sari_rougelsum should work.\r\n\r\nCan you tell me how to log an extra metric in the trainer or directly in this training script (I'm using a modified version of this): https://github.com/huggingface/transformers/blob/main/examples/pytorch/summarization/run_summarization.py\r\n\r\n> Sweeps doesn't support multi-objective optimization today.\r\n\r\nDoes that mean that the custom objective function that I pass to the huggingface trainer is NOT used by `wandb`?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"@ayulockin Any update on this?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Kind reminder :) Bump @ayulockin ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Final attempt @ayulockin ",
"Hey @BramVanroy, serious apologies for not responding to your pings. Past few weeks were crazy and I couldn't get to the GitHub notifications. Apologies again. :(\r\n\r\nI will be working on it today and will let you know in a couple of hours. Thank you for the patience and apologies again. 🙏 ",
"Hey @BramVanroy, will it be possible for you to share a colab notebook (if you have one handy) with your code. I am not able to reproduce your exact configuration.",
"@ayulockin I do my training on the cluster with full scripts so I can't really share those. But can you tell me how we should typically add a custom metric (like in the case above `metrics[\"eval_rougeLsum\"] + metrics[\"eval_sari\"]`) and then use that as a parameter to maximize in the sweep?",
"Say this is the `wandb_hp_space`.\r\n\r\n```\r\nwandb_hp_space = {\r\n 'method': 'bayes',\r\n 'name': 'sweep',\r\n 'metric': {\r\n 'goal': 'maximize', # notice here\r\n 'name': 'custom_metric' # notice here\r\n },\r\n 'parameters': {\r\n 'batch_size': {'values': [16, 32, 64]},\r\n 'epochs': {'values': [5, 10, 15]},\r\n 'lr': {'max': 0.1, 'min': 0.0001}\r\n }\r\n}\r\n```\r\n\r\nYou will have to log the `custom_metric` somewhere in your training script - `wandb.log({\"custom_metric\": metrics[\"eval_rougeLsum\"] + metrics[\"eval_sari\"]})`.\r\n\r\nWill this help?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Replying so that the issue is not closed.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"@ayulockin Sorry from my end, I lost track of this. I will try this now and see what comes out!"
] | 1,684 | 1,707 | null |
COLLABORATOR
| null |
### System Info
- `transformers` version: 4.29.2
- Platform: Linux-5.14.0-162.6.1.el9_1.0.1.x86_64-x86_64-with-glibc2.34
- Python version: 3.10.10
- Huggingface_hub version: 0.14.1
- Safetensors version: not installed
- PyTorch version (GPU?): 2.0.1+cu117 (True) -- using torch in my experiments
- Tensorflow version (GPU?): 2.12.0 (True)
- Flax version (CPU?/GPU?/TPU?): 0.6.10 (cpu)
- Jax version: 0.4.10
- JaxLib version: 0.4.10
### Who can help?
@sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I followed [this guide](https://huggingface.co/docs/transformers/hpo_train) to use wandb sweeps with the trainer by modifying the summarization scripts slightly. Below are the hp_space, model_init, and hyperparameter_search commands that I use.
Most notably, the objective is to maximize the sum of sari and rougeLsum.
```python
def wandb_hp_space(trial):
return {
"method": "bayes",
"metric": {
"name": "objective",
"goal": "minimize" if hyperopt_args.hparam_optimize_for_loss else "maximize"
},
"parameters": {
"num_train_epochs": {"min": hyperopt_args.hparam_epoch_min, "max": hyperopt_args.hparam_epoch_max}
},
"run_cap": hyperopt_args.hparam_max_trials
}
def model_init(trial):
return AutoModelForSeq2SeqLM.from_pretrained(
model_args.model_name_or_path,
from_tf=bool(".ckpt" in model_args.model_name_or_path),
config=config,
cache_dir=model_args.cache_dir,
revision=model_args.model_revision,
use_auth_token=True if model_args.use_auth_token else None,
)
def hparam_objective(metrics: Dict[str, float]) -> float:
metrics = copy.deepcopy(metrics)
if hyperopt_args.hparam_optimize_for_loss:
return metrics["eval_loss"]
return metrics["eval_rougeLsum"] + metrics["eval_sari"]
best_trial = trainer.hyperparameter_search(
compute_objective=hparam_objective,
backend="wandb",
# I think that this is only used to set the column in the sweep chart but does not mean that we use
# this metric only for optimization. That is what the hparam_objective is for?
metric="eval/sari",
hp_space=wandb_hp_space,
n_trials=hyperopt_args.hparam_max_trials,
direction="minimize" if hyperopt_args.hparam_optimize_for_loss else "maximize",
)
```
The problem is that when I look at the generated sweep config in the wandb interface, it looks like this:
```yaml
method: bayes
metric:
goal: maximize
name: eval/sari
parameters:
num_train_epochs:
distribution: int_uniform
max: 30
min: 2
run_cap: 16
```
So the generated sweep config includes `eval/sari` as the metric name because I passed it to `hyperparameter_search`. But as you can read in the comment, I thought this was only for the wandb visualization; now I am not so sure. When I leave out the `metric` keyword (as in the example), wandb seems to fall back to `eval/loss`.
```yaml
method: bayes
metric:
goal: maximize
name: eval/loss
parameters:
num_train_epochs:
distribution: int_uniform
max: 30
min: 2
run_cap: 16
```
My worry here is the disconnect between the generated sweep config and the custom objective function. What is wandb optimizing now? My custom objective function (sari+rougeLsum), or the `metric` that is passed to `hyperparameter_search` together with `direction`?
### Expected behavior
More clarity about the relationship between a wandb sweep configuration and how the trainer uses wandb as a backend, and the importance of the `metric` and `direction` arguments vs. `compute_objective`.
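One way to keep the sweep's `metric` and the trainer's `compute_objective` consistent (a workaround sketch, not verified against the wandb integration internals) is to expose the combined value as its own eval metric and point both at it. The `sari_plus_rougeLsum` name and the `compute_sari_and_rouge` helper below are my own placeholders; the `eval/` prefix matches the keys shown in the generated sweep configs above:
```python
def compute_metrics(eval_preds):
    metrics = compute_sari_and_rouge(eval_preds)  # placeholder for the existing sari/rougeLsum computation
    metrics["sari_plus_rougeLsum"] = metrics["sari"] + metrics["rougeLsum"]
    return metrics

def wandb_hp_space(trial):
    return {
        "method": "bayes",
        # Point the sweep at the same combined metric the objective function returns.
        "metric": {"name": "eval/sari_plus_rougeLsum", "goal": "maximize"},
        "parameters": {"num_train_epochs": {"min": 2, "max": 30}},
        "run_cap": 16,
    }

def hparam_objective(metrics):
    # The trainer prefixes eval metrics with "eval_" in the dict passed to compute_objective.
    return metrics["eval_sari_plus_rougeLsum"]
```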
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23647/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23647/timeline
|
reopened
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23646
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23646/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23646/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23646/events
|
https://github.com/huggingface/transformers/pull/23646
| 1,718,992,786 |
PR_kwDOCUB6oc5Q-wdg
| 23,646 |
Remove erroneous `img` closing tag
|
{
"login": "xenova",
"id": 26504141,
"node_id": "MDQ6VXNlcjI2NTA0MTQx",
"avatar_url": "https://avatars.githubusercontent.com/u/26504141?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xenova",
"html_url": "https://github.com/xenova",
"followers_url": "https://api.github.com/users/xenova/followers",
"following_url": "https://api.github.com/users/xenova/following{/other_user}",
"gists_url": "https://api.github.com/users/xenova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xenova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xenova/subscriptions",
"organizations_url": "https://api.github.com/users/xenova/orgs",
"repos_url": "https://api.github.com/users/xenova/repos",
"events_url": "https://api.github.com/users/xenova/events{/privacy}",
"received_events_url": "https://api.github.com/users/xenova/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,684 | 1,684 | 1,684 |
CONTRIBUTOR
| null |
# What does this PR do?
Removes erroneous closing tag for the "support" image tag for the Portuguese documentation (https://huggingface.co/docs/transformers/v4.29.1/pt/index)
Currently, there is an extra "</img>" after the image, which is causing issues with doc-builder

<!-- Remove if not applicable -->
Fixes issue in https://github.com/huggingface/transformers/pull/23625. Doc builder logs:
```bash
[vite-plugin-svelte] /tmp/tmpem9ikhqv/kit/src/routes/index.mdx:93:241 <img> is a void element and cannot have children, or a closing tag
file: /tmp/tmpem9ikhqv/kit/src/routes/index.mdx:93:241
91 |
92 | <a target="_blank" href="https://huggingface.co/support">
93 | <img alt="HuggingFace Expert Acceleration Program" src="https://huggingface.co/front/thumbnails/support.png" style="width: 100%; max-width: 600px; border: 1px solid #eee; border-radius: 4px; box-shadow: 0 1px 2px 0 rgba(0, 0, 0, 0.05);"></img>
^
94 | </a>
95 | <h2 class="relative group">
> /tmp/tmpem9ikhqv/kit/src/routes/index.mdx:93:241 <img> is a void element and cannot have children, or a closing tag
at error (file:///tmp/tmpem9ikhqv/kit/node_modules/svelte/compiler.mjs:17691:19)
at Parser$1.error (file:///tmp/tmpem9ikhqv/kit/node_modules/svelte/compiler.mjs:17767:9)
at tag (file:///tmp/tmpem9ikhqv/kit/node_modules/svelte/compiler.mjs:16765:20)
at new Parser$1 (file:///tmp/tmpem9ikhqv/kit/node_modules/svelte/compiler.mjs:17726:21)
at parse$3 (file:///tmp/tmpem9ikhqv/kit/node_modules/svelte/compiler.mjs:17858:20)
at compile (file:///tmp/tmpem9ikhqv/kit/node_modules/svelte/compiler.mjs:31871:17)
at compileSvelte2 (file:///tmp/tmpem9ikhqv/kit/node_modules/@sveltejs/vite-plugin-svelte/dist/index.js:319:20)
at async Object.transform (file:///tmp/tmpem9ikhqv/kit/node_modules/@sveltejs/vite-plugin-svelte/dist/index.js:1602:23)
at async transform (/tmp/tmpem9ikhqv/kit/node_modules/rollup/dist/shared/rollup.js:21965:16)
at async ModuleLoader.addModuleSource (/tmp/tmpem9ikhqv/kit/node_modules/rollup/dist/shared/rollup.js:22191:30)
```
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
@sgugger, @stevhliu, @MKhalusova
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23646/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23646/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23646",
"html_url": "https://github.com/huggingface/transformers/pull/23646",
"diff_url": "https://github.com/huggingface/transformers/pull/23646.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23646.patch",
"merged_at": 1684762106000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23645
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23645/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23645/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23645/events
|
https://github.com/huggingface/transformers/pull/23645
| 1,718,886,432 |
PR_kwDOCUB6oc5Q-ZZz
| 23,645 |
Add support for non-rust implemented tokenization for `__getitem__` m…
|
{
"login": "jacklanda",
"id": 54089835,
"node_id": "MDQ6VXNlcjU0MDg5ODM1",
"avatar_url": "https://avatars.githubusercontent.com/u/54089835?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jacklanda",
"html_url": "https://github.com/jacklanda",
"followers_url": "https://api.github.com/users/jacklanda/followers",
"following_url": "https://api.github.com/users/jacklanda/following{/other_user}",
"gists_url": "https://api.github.com/users/jacklanda/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jacklanda/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jacklanda/subscriptions",
"organizations_url": "https://api.github.com/users/jacklanda/orgs",
"repos_url": "https://api.github.com/users/jacklanda/repos",
"events_url": "https://api.github.com/users/jacklanda/events{/privacy}",
"received_events_url": "https://api.github.com/users/jacklanda/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23645). All of your documentation changes will be reflected on that endpoint.",
"Hi @jacklanda, thanks for opening this PR! \r\n\r\nCould you add some more detail to the PR description about the issue this is addressing or feature this is adding? \r\n\r\ncc @younesbelkada @ArthurZucker ",
"> Hi @jacklanda, thanks for opening this PR!\r\n> \r\n> Could you add some more detail to the PR description about the issue this is addressing or feature this is adding?\r\n> \r\n> cc @younesbelkada @ArthurZucker\r\n\r\nHi @amyeroberts , this PR is going to add a support for the usage scenario of \"getting a slice from the batch-tokenized sequences\". \r\n\r\nWithout this PR, it seems to raise `KeyError` with the following message `KeyError: 'Indexing with integers (to access backend Encoding for a given batch index) is not available when using Python based tokenizers'`\r\n\r\nP.S. The above scenario could be reproduced by using some models new uploaded but not support to Rust-implemented tokenization, such as `fnlp/moss-moon-003-sft`. Also we can run a examplar script for reproducing this issue:\r\n\r\n```python3\r\nfrom transformers import AutoTokenizer\r\n\r\ntok = AutoTokenizer.from_pretrained(\"fnlp/moss-moon-003-sft\", trust_remote_code=True)\r\ntok.add_special_tokens({\"pad_token\": \"[PAD]\"})\r\n\r\ntexts = [\"Today is a good day!\", \"It's a good idea!\", \"How's going?\"]\r\nbatch_tok = tok(texts, padding=True)\r\nprint(batch_tok[0:3]) # report `KeyError` here\r\n```\r\n\r\nAll in all, I think it seems useful to implement `__getitem__` method behind it in Python side :)",
"Why this auto pipeline always failed? ",
"@jacklanda By 'auto pipeline' are you referring to the CI test suite? If so, it seems there's two main issues causing failing CI runs: \r\n* Quality style check. Running `make style` and pushing the changes should resolve these.\r\n* Unexpected change in behaviour in one of our custom tokenizers. It seems that some of the tests for `JumanppTokenizer` are now failing as a result of the changes in this PR. \r\n\r\nFor the CI runs, you should be able to click on `Details` for each of the CI runs to see the full output including error messages / failed test reports. Let us know if this isn't working for you. \r\n\r\n",
"> \r\n\r\nYes, I mean `CI test suite`.\r\n\r\n- Where to run `make style`?\r\n- How to fix this? Should I reopen another PR for this commit?\r\n\r\nThanks!",
"@jacklanda \r\n> Where to run make style?\r\n\r\nIn the top level of your local `transformers` fork. This and more information about how to contribute to the library can found in the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests).\r\n\r\n> How to fix this? Should I reopen another PR for this commit?\r\n\r\nThe changes should be part of this PR and pushed to this branch. However, I can see that this PR has been opened on the `main` branch of your fork, and not a new feature branch. For this PR we can still merge the changes, however if you wish to contribute again to transformers using this fork it may become tricky to manage conflicts. I would suggest deleting this branch once this PR is merged, fetching `main` from the parent repository and then working from feature branches. Again, see the contributor guidelines for details on typical workflows for PRs. "
] | 1,684 | 1,686 | 1,686 |
CONTRIBUTOR
| null |
…ethod.
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23645/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23645/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23645",
"html_url": "https://github.com/huggingface/transformers/pull/23645",
"diff_url": "https://github.com/huggingface/transformers/pull/23645.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23645.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/23644
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23644/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23644/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23644/events
|
https://github.com/huggingface/transformers/issues/23644
| 1,718,879,206 |
I_kwDOCUB6oc5mdAPm
| 23,644 |
save all custom files in the meanwhile
|
{
"login": "JaheimLee",
"id": 18062264,
"node_id": "MDQ6VXNlcjE4MDYyMjY0",
"avatar_url": "https://avatars.githubusercontent.com/u/18062264?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JaheimLee",
"html_url": "https://github.com/JaheimLee",
"followers_url": "https://api.github.com/users/JaheimLee/followers",
"following_url": "https://api.github.com/users/JaheimLee/following{/other_user}",
"gists_url": "https://api.github.com/users/JaheimLee/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JaheimLee/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JaheimLee/subscriptions",
"organizations_url": "https://api.github.com/users/JaheimLee/orgs",
"repos_url": "https://api.github.com/users/JaheimLee/repos",
"events_url": "https://api.github.com/users/JaheimLee/events{/privacy}",
"received_events_url": "https://api.github.com/users/JaheimLee/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @JaheimLee, thanks for raising this issue. \r\n\r\nWhich files specifically are being referred to when you say \"custom files\"? ",
"> Hi @JaheimLee, thanks for raising this issue.\r\n> \r\n> Which files specifically are being referred to when you say \"custom files\"?\r\n\r\nLike the model files which is not merged into transformers. For example:\r\nhttps://huggingface.co/mosaicml/mpt-7b-instruct/blob/main/configuration_mpt.py\r\nhttps://huggingface.co/mosaicml/mpt-7b-instruct/blob/main/adapt_tokenizer.py\r\nhttps://huggingface.co/mosaicml/mpt-7b-instruct/blob/main/modeling_mpt.py\r\n...",
"The `save_pretrained` function indicates a reference to the original repo so that you get the updates they push there automatically. You can still use your model with `from_pretrained` and push it to the Hub, and it will always work and keep the latest code version from their side.\r\n\r\nIf you really want the modeling files to make changes, you can find them all in the cache: `~/.cache/huggingface/hub/models--mosaicml--mpt-7b-instruct`.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,684 | 1,688 | 1,688 |
NONE
| null |
### Feature request
Many models on the Hugging Face Hub come with their own custom code files, like [mpt](https://huggingface.co/mosaicml/mpt-7b-instruct). However, the `save_pretrained` function can't save them all. Is it possible to save all of them at the same time?
### Motivation
It's pretty useful when you fine-tune the model and want to save it in a new directory.
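A workaround sketch, rather than an existing `save_pretrained` feature: copy the repo's custom `*.py` files next to the saved weights with `huggingface_hub.snapshot_download`, assuming `allow_patterns` and `local_dir` are available in the installed `huggingface_hub` version:
```python
from huggingface_hub import snapshot_download

save_dir = "./my-finetuned-mpt"
model.save_pretrained(save_dir)       # `model` / `tokenizer` are whatever was fine-tuned
tokenizer.save_pretrained(save_dir)

# Pull only the custom code files (modeling_mpt.py, configuration_mpt.py, ...) into the same directory.
snapshot_download(
    repo_id="mosaicml/mpt-7b-instruct",
    allow_patterns=["*.py"],
    local_dir=save_dir,
)
```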
### Your contribution
no
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23644/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23644/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23643
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23643/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23643/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23643/events
|
https://github.com/huggingface/transformers/pull/23643
| 1,718,856,181 |
PR_kwDOCUB6oc5Q-TAO
| 23,643 |
[wip: testing doc-builder]
|
{
"login": "xenova",
"id": 26504141,
"node_id": "MDQ6VXNlcjI2NTA0MTQx",
"avatar_url": "https://avatars.githubusercontent.com/u/26504141?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xenova",
"html_url": "https://github.com/xenova",
"followers_url": "https://api.github.com/users/xenova/followers",
"following_url": "https://api.github.com/users/xenova/following{/other_user}",
"gists_url": "https://api.github.com/users/xenova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xenova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xenova/subscriptions",
"organizations_url": "https://api.github.com/users/xenova/orgs",
"repos_url": "https://api.github.com/users/xenova/repos",
"events_url": "https://api.github.com/users/xenova/events{/privacy}",
"received_events_url": "https://api.github.com/users/xenova/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Unfortunately trying to use a custom commit hash just skips the doc-building (since it can't find it). Will wait for @mishig25 to update his branch :) ",
"Testing continued at https://github.com/huggingface/transformers/pull/23625"
] | 1,684 | 1,684 | 1,684 |
CONTRIBUTOR
| null |
# DO NOT MERGE
Testing https://github.com/huggingface/doc-builder/pull/373
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23643/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23643/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23643",
"html_url": "https://github.com/huggingface/transformers/pull/23643",
"diff_url": "https://github.com/huggingface/transformers/pull/23643.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23643.patch",
"merged_at": null
}
|