url (stringlengths 62-66) | repository_url (stringclasses 1 value) | labels_url (stringlengths 76-80) | comments_url (stringlengths 71-75) | events_url (stringlengths 69-73) | html_url (stringlengths 50-56) | id (int64 377M-2.15B) | node_id (stringlengths 18-32) | number (int64 1-29.2k) | title (stringlengths 1-487) | user (dict) | labels (list) | state (stringclasses 2 values) | locked (bool 2 classes) | assignee (dict) | assignees (list) | comments (list) | created_at (int64 1.54k-1.71k) | updated_at (int64 1.54k-1.71k) | closed_at (int64 1.54k-1.71k ⌀) | author_association (stringclasses 4 values) | active_lock_reason (stringclasses 2 values) | body (stringlengths 0-234k ⌀) | reactions (dict) | timeline_url (stringlengths 71-75) | state_reason (stringclasses 3 values) | draft (bool 2 classes) | pull_request (dict) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/23301
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23301/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23301/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23301/events
|
https://github.com/huggingface/transformers/pull/23301
| 1,706,051,650 |
PR_kwDOCUB6oc5QThNc
| 23,301 |
Agents extras
|
{
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Approved as seen offline"
] | 1,683 | 1,683 | 1,683 |
MEMBER
| null |
Adds an extra for `agents`.
Fix https://github.com/huggingface/transformers/issues/23298
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23301/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23301/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23301",
"html_url": "https://github.com/huggingface/transformers/pull/23301",
"diff_url": "https://github.com/huggingface/transformers/pull/23301.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23301.patch",
"merged_at": 1683829552000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23300
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23300/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23300/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23300/events
|
https://github.com/huggingface/transformers/pull/23300
| 1,706,033,312 |
PR_kwDOCUB6oc5QTdUh
| 23,300 |
Add gradient_checkpointing parameter to FlaxWhisperEncoder
|
{
"login": "raghavanone",
"id": 115454562,
"node_id": "U_kgDOBuGyYg",
"avatar_url": "https://avatars.githubusercontent.com/u/115454562?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/raghavanone",
"html_url": "https://github.com/raghavanone",
"followers_url": "https://api.github.com/users/raghavanone/followers",
"following_url": "https://api.github.com/users/raghavanone/following{/other_user}",
"gists_url": "https://api.github.com/users/raghavanone/gists{/gist_id}",
"starred_url": "https://api.github.com/users/raghavanone/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/raghavanone/subscriptions",
"organizations_url": "https://api.github.com/users/raghavanone/orgs",
"repos_url": "https://api.github.com/users/raghavanone/repos",
"events_url": "https://api.github.com/users/raghavanone/events{/privacy}",
"received_events_url": "https://api.github.com/users/raghavanone/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Merging as the failing tests are also failing on main and not related to this PR"
] | 1,683 | 1,683 | 1,683 |
CONTRIBUTOR
| null |
Ref https://github.com/huggingface/transformers/pull/23173#discussion_r1188815621
@sanchit-gandhi
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23300/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23300/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23300",
"html_url": "https://github.com/huggingface/transformers/pull/23300",
"diff_url": "https://github.com/huggingface/transformers/pull/23300.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23300.patch",
"merged_at": 1683828785000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23299
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23299/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23299/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23299/events
|
https://github.com/huggingface/transformers/pull/23299
| 1,706,021,198 |
PR_kwDOCUB6oc5QTata
| 23,299 |
Add gradient_checkpointing parameter to FlaxWhisperEncoder.
|
{
"login": "raghavanone",
"id": 115454562,
"node_id": "U_kgDOBuGyYg",
"avatar_url": "https://avatars.githubusercontent.com/u/115454562?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/raghavanone",
"html_url": "https://github.com/raghavanone",
"followers_url": "https://api.github.com/users/raghavanone/followers",
"following_url": "https://api.github.com/users/raghavanone/following{/other_user}",
"gists_url": "https://api.github.com/users/raghavanone/gists{/gist_id}",
"starred_url": "https://api.github.com/users/raghavanone/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/raghavanone/subscriptions",
"organizations_url": "https://api.github.com/users/raghavanone/orgs",
"repos_url": "https://api.github.com/users/raghavanone/repos",
"events_url": "https://api.github.com/users/raghavanone/events{/privacy}",
"received_events_url": "https://api.github.com/users/raghavanone/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[] | 1,683 | 1,683 | 1,683 |
CONTRIBUTOR
| null |
Ref https://github.com/huggingface/transformers/pull/23173#discussion_r1188815621
@sanchit-gandhi
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23299/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23299/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23299",
"html_url": "https://github.com/huggingface/transformers/pull/23299",
"diff_url": "https://github.com/huggingface/transformers/pull/23299.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23299.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/23298
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23298/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23298/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23298/events
|
https://github.com/huggingface/transformers/issues/23298
| 1,706,017,410 |
I_kwDOCUB6oc5lr8KC
| 23,298 |
ImportError: Datasets needs to be installed if not passing speaker embeddings.
|
{
"login": "pannous",
"id": 516118,
"node_id": "MDQ6VXNlcjUxNjExOA==",
"avatar_url": "https://avatars.githubusercontent.com/u/516118?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pannous",
"html_url": "https://github.com/pannous",
"followers_url": "https://api.github.com/users/pannous/followers",
"following_url": "https://api.github.com/users/pannous/following{/other_user}",
"gists_url": "https://api.github.com/users/pannous/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pannous/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pannous/subscriptions",
"organizations_url": "https://api.github.com/users/pannous/orgs",
"repos_url": "https://api.github.com/users/pannous/repos",
"events_url": "https://api.github.com/users/pannous/events{/privacy}",
"received_events_url": "https://api.github.com/users/pannous/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Thanks @pannous! Indeed, we should show what are the requirements. The error you're getting is due to `datasets` not being installed.",
"This PR should fix it: https://github.com/huggingface/transformers/pull/23301"
] | 1,683 | 1,683 | 1,683 |
NONE
| null |
### System Info
- `transformers` version: 4.29.0
- Platform: macOS-13.3.1-arm64-arm-64bit
- Python version: 3.10.10
- Huggingface_hub version: 0.14.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: NO
- Using distributed or parallel set-up in script?: NO
### Who can help?
@sanchit-gandhi ?
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Maybe not a bug but lacking documentation:
Following the simple example https://huggingface.co/docs/transformers/transformers_agents
```
from transformers import OpenAiAgent
from transformers import HfAgent
...
text="A beaver is swimming in the water"
audio=agent.run("Read the following text out loud", text=text)
```
```
==Explanation from the agent==
I will use the following tool: `text_reader` to read the text out loud.
==Code generated by the agent==
audio = text_reader(text)
==Result==
│ /Users/me/.pyenv/versions/3.10.10/lib/python3.10/site-packages/transformers/tools/text_to_speech.py:52 in encode
│ 50 │ │ if speaker_embeddings is None: │
│ 51 │ │ │ if not is_datasets_available(): │
│ ❱ 52 │ │ │ │ raise ImportError("Datasets needs to be installed if not passing speaker │
```
@sanchit-gandhi ?
### Expected behavior
The example should work out of the box, or the documentation and the error message should include information on how to download the required dataset.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23298/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23298/timeline
|
completed
| null | null |
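The fix referenced above (PR #23301) adds an `agents` extras group; the immediate workaround discussed in the thread is simply to install the missing `datasets` dependency. A minimal sketch of the example from this issue once that package is present (the `HfAgent` endpoint URL here is illustrative, not taken from the thread):

```python
# Workaround from the thread: install the missing optional dependency first:
#   pip install datasets
from transformers import HfAgent

# Illustrative endpoint; any agent backend supported by transformers works the same way.
agent = HfAgent("https://api-inference.huggingface.co/models/bigcode/starcoder")

text = "A beaver is swimming in the water"
# With `datasets` installed, the text_reader tool can load default speaker embeddings.
audio = agent.run("Read the following text out loud", text=text)
```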
https://api.github.com/repos/huggingface/transformers/issues/23297
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23297/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23297/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23297/events
|
https://github.com/huggingface/transformers/pull/23297
| 1,706,007,088 |
PR_kwDOCUB6oc5QTXw6
| 23,297 |
Fix broken links in the agent docs
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,683 | 1,683 | 1,683 |
COLLABORATOR
| null |
# What does this PR do?
This PR fixes a bunch of broken links in the documentation.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23297/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23297/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23297",
"html_url": "https://github.com/huggingface/transformers/pull/23297",
"diff_url": "https://github.com/huggingface/transformers/pull/23297.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23297.patch",
"merged_at": 1683829580000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23296
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23296/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23296/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23296/events
|
https://github.com/huggingface/transformers/issues/23296
| 1,705,752,392 |
I_kwDOCUB6oc5lq7dI
| 23,296 |
Seq2Seq Trainer Handling for MLFlow Exception
|
{
"login": "njbrake",
"id": 33383515,
"node_id": "MDQ6VXNlcjMzMzgzNTE1",
"avatar_url": "https://avatars.githubusercontent.com/u/33383515?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/njbrake",
"html_url": "https://github.com/njbrake",
"followers_url": "https://api.github.com/users/njbrake/followers",
"following_url": "https://api.github.com/users/njbrake/following{/other_user}",
"gists_url": "https://api.github.com/users/njbrake/gists{/gist_id}",
"starred_url": "https://api.github.com/users/njbrake/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/njbrake/subscriptions",
"organizations_url": "https://api.github.com/users/njbrake/orgs",
"repos_url": "https://api.github.com/users/njbrake/repos",
"events_url": "https://api.github.com/users/njbrake/events{/privacy}",
"received_events_url": "https://api.github.com/users/njbrake/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"We don't maintain the MlFlow integration ourselves, so I can't really help. Maybe try to tag the persons who added it?",
"Thanks for the quick reply! Although my question involves MLFlow, I think the question more broadly is:\r\n\r\nif an integration callback throws an error, how can we disable that integration and continue with training?\r\n\r\nI think that might be a question for the huggingface team and not for the MLFlow integrators?",
"Oh I dind't realize you were asking for that. This is what the [`report_to`](https://huggingface.co/docs/transformers/main_classes/trainer#transformers.TrainingArguments.report_to) argument is for :-)",
"Ah. Perfect. Thanks so much! "
] | 1,683 | 1,683 | 1,683 |
CONTRIBUTOR
| null |
### System Info
Transformers 4.28.1, torch 1.13.1
When using the Seq2SeqTrainer with the MLFlow integration enabled, if I lose my connection to mlflow after the training has begun (if the server crashes or if there is a network error), MLFlow throws the exception:
```
raise MlflowException(f"API request to {url} failed with exception {e}")
```
I don't know if this is a bug, or if you have a recommended way to continue: I would like the option to have the seq2seqtrainer log the mlflow connection error but then continue training with the mlflow integration disabled.
I imagine that this behavior would apply to any integration, not just mlflow.
### Who can help?
@sgugger
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. Start training with Seq2Seq Trainer with a connection to MLflow
2. During training, stop the mlflow server
3. Seq2Seq Trainer raises an MLFlow Exception
### Expected behavior
I would expect that there would be an option to allow for mlflow exception to disable the integration but continue training.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23296/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23296/timeline
|
completed
| null | null |
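As the reply in the comments above notes, integration callbacks are selected with the `report_to` training argument. A minimal sketch, assuming an otherwise standard Seq2Seq setup, of how training can be kept independent of an unreliable MLflow server by not reporting to it at all:

```python
from transformers import Seq2SeqTrainingArguments

# `report_to` controls which logging integrations the Trainer attaches.
# Leaving MLflow out (or using report_to="none") means a lost MLflow
# connection can no longer raise an exception mid-training.
training_args = Seq2SeqTrainingArguments(
    output_dir="out",
    report_to=["tensorboard"],  # or report_to="none" to disable all integrations
)
```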
https://api.github.com/repos/huggingface/transformers/issues/23295
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23295/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23295/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23295/events
|
https://github.com/huggingface/transformers/pull/23295
| 1,705,738,325 |
PR_kwDOCUB6oc5QSeMO
| 23,295 |
Update conditional logic and -> or in SAM postprocessing
|
{
"login": "hwuebben",
"id": 18739812,
"node_id": "MDQ6VXNlcjE4NzM5ODEy",
"avatar_url": "https://avatars.githubusercontent.com/u/18739812?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hwuebben",
"html_url": "https://github.com/hwuebben",
"followers_url": "https://api.github.com/users/hwuebben/followers",
"following_url": "https://api.github.com/users/hwuebben/following{/other_user}",
"gists_url": "https://api.github.com/users/hwuebben/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hwuebben/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hwuebben/subscriptions",
"organizations_url": "https://api.github.com/users/hwuebben/orgs",
"repos_url": "https://api.github.com/users/hwuebben/repos",
"events_url": "https://api.github.com/users/hwuebben/events{/privacy}",
"received_events_url": "https://api.github.com/users/hwuebben/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@younesbelkada exactly but with the current implementation it could also be a list of arrays and the exception would not be thrown",
"FYI, this PR triggered 3 test failures on the CI.\r\n\r\n(The third one is GPU OOM, which is likely caused by the other 2 failures).\r\n\r\ncc @younesbelkada \r\n\r\n\r\n```bash\r\ntests/models/sam/test_modeling_sam.py::SamModelIntegrationTest::test_inference_mask_generation_one_point_one_bb\r\n(line 235) ValueError: Input boxes must be a list of list of list of floating integers.\r\ntests/models/sam/test_modeling_sam.py::SamModelIntegrationTest::test_inference_mask_generation_one_point_one_bb_zero\r\n(line 235) ValueError: Input boxes must be a list of list of list of floating integers.\r\ntests/models/sam/test_modeling_sam.py::SamModelIntegrationTest::test_inference_mask_generation_two_points_batched\r\n(line 808) torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 2.00 GiB (GPU 0; 14.76 GiB total capacity; 11.65 GiB already allocated; 792.75 MiB free; 12.74 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF\r\n```",
" ValueError: Input boxes must be a list of list of list of floating integers. I AM Still getting this error, when even run the example notebooks, what do I do? I just have a list of bounding box coordinated(flatenned)",
"Hi @karthikdatta98 \r\nthanks for reporting, in https://github.com/huggingface/notebooks/pull/409 I modified the notebook accordingly to show how to correctly pass bounding boxes"
] | 1,683 | 1,689 | 1,683 |
CONTRIBUTOR
| null |
These should be `or`s here.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23295/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23295/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23295",
"html_url": "https://github.com/huggingface/transformers/pull/23295",
"diff_url": "https://github.com/huggingface/transformers/pull/23295.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23295.patch",
"merged_at": 1683906460000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23294
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23294/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23294/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23294/events
|
https://github.com/huggingface/transformers/issues/23294
| 1,705,714,821 |
I_kwDOCUB6oc5lqySF
| 23,294 |
Getting ValueError: model.shared.weight doesn't have any device set in running a M2M100's-12B model on colab while using with accelerate
|
{
"login": "abhishektcs1",
"id": 121281126,
"node_id": "U_kgDOBzqaZg",
"avatar_url": "https://avatars.githubusercontent.com/u/121281126?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abhishektcs1",
"html_url": "https://github.com/abhishektcs1",
"followers_url": "https://api.github.com/users/abhishektcs1/followers",
"following_url": "https://api.github.com/users/abhishektcs1/following{/other_user}",
"gists_url": "https://api.github.com/users/abhishektcs1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/abhishektcs1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abhishektcs1/subscriptions",
"organizations_url": "https://api.github.com/users/abhishektcs1/orgs",
"repos_url": "https://api.github.com/users/abhishektcs1/repos",
"events_url": "https://api.github.com/users/abhishektcs1/events{/privacy}",
"received_events_url": "https://api.github.com/users/abhishektcs1/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @abhishektcs1, thanks for reporting this issue! \r\n\r\nCould you provide information about the running environment: run `transformers-cli env` in the terminal and copy-paste the output? ",
"> Hi @abhishektcs1, thanks for reporting this issue!\r\n> \r\n> Could you provide information about the running environment: run `transformers-cli env` in the terminal and copy-paste the output?\r\n\r\nHi @amyeroberts , I am also facing the same error. Please find below the output of 'transformer-cli' \r\n------------------------------------------------------------------------------------------------------------------------\r\n2023-05-13 05:36:18.558293: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT\r\nWARNING:tensorflow:From /usr/local/lib/python3.10/dist-packages/transformers/commands/env.py:63: is_gpu_available (from tensorflow.python.framework.test_util) is deprecated and will be removed in a future version.\r\nInstructions for updating:\r\nUse `tf.config.list_physical_devices('GPU')` instead.\r\n2023-05-13 05:36:22.918424: W tensorflow/core/common_runtime/gpu/gpu_bfc_allocator.cc:47] Overriding orig_value setting because the TF_FORCE_GPU_ALLOW_GROWTH environment variable is set. Original config value was 0.\r\n\r\nCopy-and-paste the text below in your GitHub issue and FILL OUT the two last points.\r\n\r\n- `transformers` version: 4.29.1\r\n- Platform: Linux-5.15.107+-x86_64-with-glibc2.31\r\n- Python version: 3.10.11\r\n- Huggingface_hub version: 0.14.1\r\n- Safetensors version: not installed\r\n- PyTorch version (GPU?): 2.0.0+cu118 (True)\r\n- Tensorflow version (GPU?): 2.12.0 (True)\r\n- Flax version (CPU?/GPU?/TPU?): 0.6.9 (gpu)\r\n- Jax version: 0.4.8\r\n- JaxLib version: 0.4.7\r\n- Using GPU in script?: <fill in>\r\n- Using distributed or parallel set-up in script?: <fill in>\r\n\r\nPlease find the attachment having nvidia-smi output of google colab pro, I am using\r\n<img width=\"932\" alt=\"nvidia-smi\" src=\"https://github.com/huggingface/transformers/assets/17768401/5c469e3d-1c60-4d9a-8481-19845504a3f6\">\r\n ",
"@abhishektcs1 @sanyoggupta Could either of you also share a full traceback of the error encountered (the entire error message, from the first lines), preferably as a copy-paste of the text rather than a screenshot please?",
"> @abhishektcs1 @sanyoggupta Could either of you also share a full traceback of the error encountered (the entire error message, from the first lines), preferably as a copy-paste of the text rather than a screenshot please?\r\n\r\nHey, I am getting a similar error when I try out my code\r\nThis is the Traceback:\r\n```\r\nExplicitly passing a `revision` is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision.\r\nTraceback (most recent call last):\r\n File \"/home/ksuresh6/DataChat_Project/model.py\", line 20, in <module>\r\n model = load_checkpoint_and_dispatch(\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/data/hulab/ksuresh6/anaconda3/envs/datachat_env/lib/python3.11/site-packages/accelerate/big_modeling.py\", line 479, in load_checkpoint_and_dispatch\r\n load_checkpoint_in_model(\r\n File \"/data/hulab/ksuresh6/anaconda3/envs/datachat_env/lib/python3.11/site-packages/accelerate/utils/modeling.py\", line 982, in load_checkpoint_in_model\r\n raise ValueError(f\"{param_name} doesn't have any device set.\")\r\nValueError: decoder.transformer.h.7.attn.causal_mask doesn't have any device set.\r\n(datachat_env) ksuresh6@AMD4RTX3090GPU14:~/DataChat_Project$ python3 model.py \r\nExplicitly passing a `revision` is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision.\r\nTraceback (most recent call last):\r\n File \"/home/ksuresh6/DataChat_Project/model.py\", line 20, in <module>\r\n model = load_checkpoint_and_dispatch(\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/data/hulab/ksuresh6/anaconda3/envs/datachat_env/lib/python3.11/site-packages/accelerate/big_modeling.py\", line 479, in load_checkpoint_and_dispatch\r\n load_checkpoint_in_model(\r\n File \"/data/hulab/ksuresh6/anaconda3/envs/datachat_env/lib/python3.11/site-packages/accelerate/utils/modeling.py\", line 982, in load_checkpoint_in_model\r\n raise ValueError(f\"{param_name} doesn't have any device set.\")\r\nValueError: decoder.transformer.h.7.attn.causal_mask doesn't have any device set.\r\n```\r\n\r\nThis is the code I am trying out:\r\n\r\n```\r\nfrom transformers import AutoModelForSeq2SeqLM, AutoTokenizer\r\nimport torch\r\nfrom transformers import AutoConfig\r\nfrom accelerate import init_empty_weights\r\nfrom accelerate import load_checkpoint_and_dispatch\r\ncheckpoint = \"Salesforce/instructcodet5p-16b\"\r\ndevice = \"cuda\" # for GPU usage or \"cpu\" for CPU usage\r\n\r\n\r\n\r\n\r\nmodel_path ='/home/ksuresh6/.cache/huggingface/hub/models--Salesforce--instructcodet5p-16b/snapshots/b5aaae8f54e8f13897e395fbc4c22567df0399ef'\r\ntokenizer = AutoTokenizer.from_pretrained(model_path)\r\nconfig = AutoConfig.from_pretrained(checkpoint,torch_dtype=torch.float16,low_cpu_mem_usage=True,trust_remote_code=True)\r\nwith init_empty_weights():\r\n model = AutoModelForSeq2SeqLM.from_config(config, trust_remote_code=True,torch_dtype=torch.float16)\r\nmodel.tie_weights()\r\n\r\n\r\nmodel = load_checkpoint_and_dispatch(\r\n model, model_path, device_map=\"auto\"\r\n)\r\n\r\ninputs = tokenizer.encode(\"def print_hello():\", return_tensors=\"pt\").to(device)\r\noutputs = model.generate(inputs, max_length=12)\r\nprint(tokenizer.decode(outputs[0], skip_special_tokens=True))\r\n```\r\n\r\nThis is the output of `transformers-cli env`: \r\n\r\n```\r\nCopy-and-paste the text below in your GitHub issue and FILL OUT the two last points.\r\n\r\n- `transformers` version: 4.26.1\r\n- Platform: 
Linux-5.19.0-41-generic-x86_64-with-glibc2.35\r\n- Python version: 3.10.6\r\n- Huggingface_hub version: 0.12.1\r\n- PyTorch version (GPU?): 1.13.1+cu117 (True)\r\n- Tensorflow version (GPU?): not installed (NA)\r\n- Flax version (CPU?/GPU?/TPU?): not installed (NA) \r\n- Jax version: not installed\r\n- JaxLib version: not installed\r\n- Using GPU in script?: Yes\r\n- Using distributed or parallel set-up in script?: Yes\r\n```\r\n\r\nAny help is appreciated! Thanks in advance!",
"> @abhishektcs1 @sanyoggupta Could either of you also share a full traceback of the error encountered (the entire error message, from the first lines), preferably as a copy-paste of the text rather than a screenshot please?\r\nHii,\r\ni am facing the same issue,\r\nthis is what i get after executing (! transformers-cli env)\r\n\r\n\r\nplease help me out with this problem.\r\nThank You!",
"@younesbelkada could this be the same bug you fixed on NLLB here? I see the no_split_module_class is also the attention layer.",
"Hmm this sounds more like you are using the infer auto device map in an inappropriate way indeed. You should put `\"M2M100EncoderLayer\"` and `\"M2M100DecoderLayer\"` inside `_no_split_modules`. Could you try again with these new values? Also can you share us a handy reproducible snippet? 🙏 ",
"Thank You i got it.\r\n@sgugger you have posted a great documentation on hugging face on \"how to run these large model on our device\".\r\n\r\nhttps://huggingface.co/blog/accelerate-large-models",
"> Hmm this sounds more like you are using the infer auto device map in an inappropriate way indeed. You should put `\"M2M100EncoderLayer\"` and `\"M2M100DecoderLayer\"` inside `_no_split_modules`. Could you try again with these new values? Also can you share us a handy reproducible snippet? 🙏\r\n\r\n\r\nplease help me out what values should i pass in no_split_modules\r\nThank You!",
"\r\nthese are the model layers.\r\n",
"Hi @anujsahani01 \r\nCan you try to put `GPTBigCodeBlock` in no split modules?",
"> Hi @anujsahani01 Can you try to put `GPTBigCodeBlock` in no split modules?\r\n\r\nYes it worked.\r\nThank You!\r\n",
"> Hi @anujsahani01 Can you try to put `GPTBigCodeBlock` in no split modules?\r\n\r\nHey,\r\nwas having one more doubt if please me with this.\r\nI am finetuning hugging face “HuggingFaceH4/starchat-alpha” model for making a data science text to code generating bot.\r\nThis is the format of my dataset:\r\ntrain: Dataset({\r\nfeatures: [‘input_ids’, ‘labels’],\r\nnum_rows: 5012\r\n})\r\ntest: Dataset({\r\nfeatures: [‘input_ids’, ‘labels’],\r\nnum_rows: 1325\r\n})\r\n})\r\nand the structure of the dataset looks somewhat like this, which was explained in starcoder documentation,\r\n<|system|>\r\nBelow is a dialogue between a human and an ANUJ_AI\r\n<|end|>\r\n<|user|>\r\nMinimum count of ind… so on\r\n<|end|>\r\n<|assistant|>\r\ndef possible ( x , S , N ) : …so on\r\n<|end|>\r\n\r\nI am loading the model on my colab in 8 bit format using :hugs:transformer BitsAndBytesConfig for saving memory, then loaded the model using a device map which was made using :hugs: transformers AutoConfig and the acclerate which divided my model amoung ‘gpu’, ‘cpu’ RAM and my ‘disk’.\r\n\r\nOnce the model and its checkpoints were downloaded successfully then i used transformers.Trainer to train the model on my custom dataset.\r\nmy using the below code:\r\n\r\n\r\n\r\n\r\nbut i am always getting this error :\r\nCannot copy out of meta tensor ; no data !\r\n\r\n\r\nYour inputs will be highly appreciated.\r\nThank You!",
"Hi @anujshani01\r\nThanks! Could you explain a bit more in details how you train the 8bit model? Are you sure you are using adapters leveraging PEFT library? \r\nMaybe if you can share the full snippet I can help you more on that! 💪 ",
"> Hi @anujshani01 Thanks! Could you explain a bit more in details how you train the 8bit model? Are you sure you are using adapters leveraging PEFT library? Maybe if you can share the full snippet I can help you more on that! 💪\r\n\r\ni have updates the colab notebook.\r\nhttps://drive.google.com/file/d/1-ccrx1Q5tkLUYtZBGi5lNZGjPMyr_X9U/view?usp=sharing\r\n\r\ni am not using 8bit model now.\r\ni am using 🤗tool \" accelerate \" to initializing the model then using load_checkpoint_and_dispatch i am loading the model weights and all.\r\nBut its giving me this error:\r\nValueError: offload is not a folder containing a .index.json file.\r\n\r\ni am not able to understant what exactly the error is.\r\nplease have a look at the snip which show the offload folder and error\r\n\r\n\r\nPlease help we out with this error it would be a great help.\r\nYour inputs will be highly appreciated.\r\nThank You!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,683 | 1,688 | 1,688 |
NONE
| null |
### System Info
I am getting the following error while using accelerate for M2M100 on Google Colab Pro. Following is the code snippet:
```python
import torch
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
from transformers import AutoConfig, M2M100ForConditionalGeneration, M2M100Tokenizer, AutoModel
from accelerate import infer_auto_device_map, init_empty_weights
from transformers import AutoModel, M2M100Config
config = M2M100Config.from_pretrained("facebook/m2m100-12B-last-ckpt")
with init_empty_weights():
    model = AutoModel.from_config(config)
device_map = infer_auto_device_map(model, no_split_module_classes=["M2M100Attention"])
checkpoint = "facebook/m2m100-12B-last-ckpt"
device_map["shared"] = "cpu"
device_map["encoder"] = "cpu"
device_map["decoder.embed_tokens"] = "cpu"
device_map["decoder.embed_positions"] = "cpu"
device_map["decoder.layers.0"] = "cpu"
device_map["decoder.layers.1"] = "cpu"
device_map["decoder.layers.2"] = "cpu"
device_map["decoder.layers.3"] = "cpu"
model = M2M100ForConditionalGeneration.from_pretrained(checkpoint, device_map=device_map, offload_folder="offload", offload_state_dict=True)
```
Following are the env specs:
- Model link: https://huggingface.co/facebook/m2m100-12B-last-ckpt
- Python version: 3.10
- GPU: A100 (40 GB)
- RAM: 83.5 GB
- CUDA version: 12.0
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```python
import torch
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
from transformers import AutoConfig, M2M100ForConditionalGeneration, M2M100Tokenizer, AutoModel
from accelerate import infer_auto_device_map, init_empty_weights
from transformers import AutoModel, M2M100Config
config = M2M100Config.from_pretrained("facebook/m2m100-12B-last-ckpt")
with init_empty_weights():
    model = AutoModel.from_config(config)
device_map = infer_auto_device_map(model, no_split_module_classes=["M2M100Attention"])
checkpoint = "facebook/m2m100-12B-last-ckpt"
device_map["shared"] = "cpu"
device_map["encoder"] = "cpu"
device_map["decoder.embed_tokens"] = "cpu"
device_map["decoder.embed_positions"] = "cpu"
device_map["decoder.layers.0"] = "cpu"
device_map["decoder.layers.1"] = "cpu"
device_map["decoder.layers.2"] = "cpu"
device_map["decoder.layers.3"] = "cpu"
model = M2M100ForConditionalGeneration.from_pretrained(checkpoint, device_map=device_map, offload_folder="offload", offload_state_dict=True)
```
### Expected behavior
Expecting the model to load properly, after which the following code is used for translation:
```python
hi_text = '''La vie est comme une boîte de chocolat.'''
tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100-12B-last-ckpt")
encoded_hi = tokenizer(hi_text, return_tensors="pt").to('cuda')
generated_tokens = model.generate(**encoded_hi, forced_bos_token_id=tokenizer.get_lang_id("en"))
print(tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)[0])
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23294/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23294/timeline
|
completed
| null | null |
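For reference, the resolution suggested in the comments above is to keep whole encoder/decoder layers together when inferring the device map, rather than only protecting the attention module. A minimal sketch of that suggestion (module class names as given in the thread):

```python
from accelerate import infer_auto_device_map, init_empty_weights
from transformers import AutoModel, M2M100Config

config = M2M100Config.from_pretrained("facebook/m2m100-12B-last-ckpt")
with init_empty_weights():
    model = AutoModel.from_config(config)

# Per the suggestion in the thread: never split a whole transformer layer across devices.
device_map = infer_auto_device_map(
    model,
    no_split_module_classes=["M2M100EncoderLayer", "M2M100DecoderLayer"],
)
```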
https://api.github.com/repos/huggingface/transformers/issues/23293
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23293/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23293/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23293/events
|
https://github.com/huggingface/transformers/pull/23293
| 1,705,649,969 |
PR_kwDOCUB6oc5QSLUh
| 23,293 |
unpin tf prob
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,683 | 1,683 | 1,683 |
COLLABORATOR
| null |
# What does this PR do?
unpin tf prob
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23293/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23293/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23293",
"html_url": "https://github.com/huggingface/transformers/pull/23293",
"diff_url": "https://github.com/huggingface/transformers/pull/23293.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23293.patch",
"merged_at": 1683833289000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23292
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23292/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23292/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23292/events
|
https://github.com/huggingface/transformers/pull/23292
| 1,705,557,265 |
PR_kwDOCUB6oc5QR3QI
| 23,292 |
Update custom_tools.mdx: fix link
|
{
"login": "mishig25",
"id": 11827707,
"node_id": "MDQ6VXNlcjExODI3NzA3",
"avatar_url": "https://avatars.githubusercontent.com/u/11827707?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mishig25",
"html_url": "https://github.com/mishig25",
"followers_url": "https://api.github.com/users/mishig25/followers",
"following_url": "https://api.github.com/users/mishig25/following{/other_user}",
"gists_url": "https://api.github.com/users/mishig25/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mishig25/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mishig25/subscriptions",
"organizations_url": "https://api.github.com/users/mishig25/orgs",
"repos_url": "https://api.github.com/users/mishig25/repos",
"events_url": "https://api.github.com/users/mishig25/events{/privacy}",
"received_events_url": "https://api.github.com/users/mishig25/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,683 | 1,683 | 1,683 |
CONTRIBUTOR
| null |
Wrong parentheses
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23292/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23292/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23292",
"html_url": "https://github.com/huggingface/transformers/pull/23292",
"diff_url": "https://github.com/huggingface/transformers/pull/23292.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23292.patch",
"merged_at": 1683809404000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23291
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23291/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23291/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23291/events
|
https://github.com/huggingface/transformers/pull/23291
| 1,705,522,809 |
PR_kwDOCUB6oc5QRv2d
| 23,291 |
add GPTJ/bloom/llama/opt into model list and enhance the jit support
|
{
"login": "sywangyi",
"id": 36058628,
"node_id": "MDQ6VXNlcjM2MDU4NjI4",
"avatar_url": "https://avatars.githubusercontent.com/u/36058628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sywangyi",
"html_url": "https://github.com/sywangyi",
"followers_url": "https://api.github.com/users/sywangyi/followers",
"following_url": "https://api.github.com/users/sywangyi/following{/other_user}",
"gists_url": "https://api.github.com/users/sywangyi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sywangyi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sywangyi/subscriptions",
"organizations_url": "https://api.github.com/users/sywangyi/orgs",
"repos_url": "https://api.github.com/users/sywangyi/repos",
"events_url": "https://api.github.com/users/sywangyi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sywangyi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"@amyeroberts please help review",
"@jiqing-feng @yao-matrix",
"_The documentation is not available anymore as the PR was closed or merged._",
"@amyeroberts for context, this PR builds upon https://github.com/huggingface/transformers/pull/22265 -- an example of how to JIT trace text generation",
"> Thanks for iterating!\r\n\r\nThanks @amyeroberts could you help to merge the PR? I check the failed case in ci. it has nothing to do with my code. ",
"@sywangyi Looking at the CI output, it seems that the examples test run failed before the generation tests were run. Could you rebase from main to include any recent updates? I believe the accelerate errors should now be resolved. "
] | 1,683 | 1,684 | 1,684 |
CONTRIBUTOR
| null |
# What does this PR do?
Extend the text generation support to more models.
- generate: @gante
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23291/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23291/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23291",
"html_url": "https://github.com/huggingface/transformers/pull/23291",
"diff_url": "https://github.com/huggingface/transformers/pull/23291.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23291.patch",
"merged_at": 1684922276000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23290
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23290/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23290/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23290/events
|
https://github.com/huggingface/transformers/issues/23290
| 1,705,519,370 |
I_kwDOCUB6oc5lqCkK
| 23,290 |
Problem with Transformers Agents: audio generation
|
{
"login": "piust",
"id": 42667376,
"node_id": "MDQ6VXNlcjQyNjY3Mzc2",
"avatar_url": "https://avatars.githubusercontent.com/u/42667376?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/piust",
"html_url": "https://github.com/piust",
"followers_url": "https://api.github.com/users/piust/followers",
"following_url": "https://api.github.com/users/piust/following{/other_user}",
"gists_url": "https://api.github.com/users/piust/gists{/gist_id}",
"starred_url": "https://api.github.com/users/piust/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/piust/subscriptions",
"organizations_url": "https://api.github.com/users/piust/orgs",
"repos_url": "https://api.github.com/users/piust/repos",
"events_url": "https://api.github.com/users/piust/events{/privacy}",
"received_events_url": "https://api.github.com/users/piust/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"How does it go if you decompose the process in several steps ? \r\n\r\nfor example : \r\n```python\r\nroom = agent.run(\"Generate an image of a 19th century ballroom\")\r\ndescription = agent.run(\"describe the contents of the `image`\", image=room)\r\naudio = agent.run(\"Read out load the content of `description`\", description=description)\r\nplay_audio(audio)\r\n```",
"It seems to work now, even if the model said \"A red carpet\".\r\n\r\nThe image was:\r\n\r\n\r\nand the output:\r\n\r\n==Explanation from the agent==\r\nI will use the following tool: `image_generator` to generate an image according to the prompt.\r\n\r\n\r\n==Code generated by the agent==\r\nimage = image_generator(prompt=\"19th century ballroom\")\r\n\r\n\r\n==Result==\r\nDownloading (…)ain/text_to_image.py: 100%\r\n1.76k/1.76k [00:00<00:00, 149kB/s]\r\nA new version of the following files was downloaded from https://huggingface.co/space/huggingface-tools/text-to-image:\r\n- text_to_image.py\r\n. Make sure to double-check they do not contain any added malicious code. To avoid downloading new versions of the code file, you can pin a revision.\r\nDownloading (…)ain/model_index.json: 100%\r\n541/541 [00:00<00:00, 38.8kB/s]\r\nFetching 15 files: 100%\r\n15/15 [00:22<00:00, 1.59s/it]\r\nDownloading (…)_checker/config.json: 100%\r\n4.72k/4.72k [00:00<00:00, 79.0kB/s]\r\nDownloading (…)rocessor_config.json: 100%\r\n342/342 [00:00<00:00, 3.52kB/s]\r\nDownloading (…)cheduler_config.json: 100%\r\n308/308 [00:00<00:00, 3.79kB/s]\r\nDownloading (…)cial_tokens_map.json: 100%\r\n472/472 [00:00<00:00, 4.20kB/s]\r\nDownloading (…)tokenizer/merges.txt: 100%\r\n525k/525k [00:00<00:00, 3.01MB/s]\r\nDownloading (…)_encoder/config.json: 100%\r\n617/617 [00:00<00:00, 4.73kB/s]\r\nDownloading pytorch_model.bin: 100%\r\n1.22G/1.22G [00:12<00:00, 126MB/s]\r\nDownloading pytorch_model.bin: 100%\r\n492M/492M [00:06<00:00, 80.2MB/s]\r\nDownloading (…)okenizer_config.json: 100%\r\n806/806 [00:00<00:00, 9.97kB/s]\r\nDownloading (…)e6a/unet/config.json: 100%\r\n743/743 [00:00<00:00, 9.54kB/s]\r\nDownloading (…)8e6a/vae/config.json: 100%\r\n547/547 [00:00<00:00, 6.66kB/s]\r\nDownloading (…)tokenizer/vocab.json: 100%\r\n1.06M/1.06M [00:00<00:00, 8.59MB/s]\r\nDownloading (…)on_pytorch_model.bin: 100%\r\n3.44G/3.44G [00:21<00:00, 231MB/s]\r\nDownloading (…)on_pytorch_model.bin: 100%\r\n335M/335M [00:03<00:00, 55.6MB/s]\r\n`text_config_dict` is provided which will be used to initialize `CLIPTextConfig`. The value `text_config[\"id2label\"]` will be overriden.\r\n100%\r\n25/25 [00:01<00:00, 18.37it/s]\r\n==Explanation from the agent==\r\nI will use the following tool: `image_captioner` to generate a description of the image.\r\n\r\n\r\n==Code generated by the agent==\r\ndescription = image_captioner(image)\r\n\r\n\r\n==Result==\r\nDownloading (…)rocessor_config.json: 100%\r\n287/287 [00:00<00:00, 25.8kB/s]\r\nDownloading (…)okenizer_config.json: 100%\r\n438/438 [00:00<00:00, 34.5kB/s]\r\nDownloading (…)solve/main/vocab.txt: 100%\r\n232k/232k [00:00<00:00, 5.23MB/s]\r\nDownloading (…)/main/tokenizer.json: 100%\r\n711k/711k [00:00<00:00, 14.4MB/s]\r\nDownloading (…)cial_tokens_map.json: 100%\r\n125/125 [00:00<00:00, 10.3kB/s]\r\nDownloading (…)lve/main/config.json: 100%\r\n4.56k/4.56k [00:00<00:00, 324kB/s]\r\nDownloading pytorch_model.bin: 100%\r\n990M/990M [00:04<00:00, 233MB/s]\r\n/usr/local/lib/python3.10/dist-packages/transformers/generation/utils.py:1346: UserWarning: Using `max_length`'s default (20) to control the generation length. 
This behaviour is deprecated and will be removed from the config in v5 of Transformers -- we recommend using `max_new_tokens` to control the maximum length of the generation.\r\n warnings.warn(\r\n==Explanation from the agent==\r\nI will use the following tool: `text_reader` to read out loud the content of the variable `description`.\r\n\r\n\r\n==Code generated by the agent==\r\naudio_description = text_reader(description)\r\n\r\n\r\n==Result==\r\nDownloading (…)rocessor_config.json: 100%\r\n433/433 [00:00<00:00, 35.6kB/s]\r\nDownloading spm_char.model: 100%\r\n238k/238k [00:00<00:00, 18.8MB/s]\r\nDownloading (…)in/added_tokens.json: 100%\r\n40.0/40.0 [00:00<00:00, 2.59kB/s]\r\nDownloading (…)cial_tokens_map.json: 100%\r\n234/234 [00:00<00:00, 21.5kB/s]\r\nDownloading (…)okenizer_config.json: 100%\r\n232/232 [00:00<00:00, 17.4kB/s]\r\nDownloading (…)lve/main/config.json: 100%\r\n2.06k/2.06k [00:00<00:00, 164kB/s]\r\nDownloading pytorch_model.bin: 100%\r\n585M/585M [00:05<00:00, 103MB/s]\r\nDownloading (…)lve/main/config.json: 100%\r\n636/636 [00:00<00:00, 52.3kB/s]\r\nDownloading pytorch_model.bin: 100%\r\n50.7M/50.7M [00:00<00:00, 96.5MB/s]\r\nDownloading builder script: 100%\r\n1.36k/1.36k [00:00<00:00, 94.6kB/s]\r\nDownloading readme: 100%\r\n1.01k/1.01k [00:00<00:00, 68.2kB/s]\r\nDownloading and preparing dataset cmu-arctic-xvectors/default to /root/.cache/huggingface/datasets/Matthijs___cmu-arctic-xvectors/default/0.0.1/a62fea1f9415e240301ea0042ffad2a3aadf4d1caa7f9a8d9512d631723e781f...\r\nDownloading data: 100%\r\n17.9M/17.9M [00:00<00:00, 86.0MB/s]\r\nDataset cmu-arctic-xvectors downloaded and prepared to /root/.cache/huggingface/datasets/Matthijs___cmu-arctic-xvectors/default/0.0.1/a62fea1f9415e240301ea0042ffad2a3aadf4d1caa7f9a8d9512d631723e781f. Subsequent calls will reuse this data.\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,683 | 1,687 | 1,687 |
NONE
| null |
### System Info
Hello. I'm exploring Transformers Agents capabilities and wanted to generate an image and ask the agent to say what it contains.
I created the image with the command:
```python
room = agent.run("Generate an image of a 19th century ballroom")
```
That works fine, but when I ask it to describe the image with:
```python
audio = agent.run("Read out loud the contents of the image image", image=room)
play_audio(audio)
```
it answers with the attached error.
[message.txt](https://github.com/huggingface/transformers/files/11450975/message.txt)
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```python
room = agent.run("Generate an image of a 19th century ballroom")
audio = agent.run("Read out loud the contents of the image image", image=room)
play_audio(audio)
```
### Expected behavior
It should create an audio file with the description of the image.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23290/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23290/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23289
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23289/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23289/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23289/events
|
https://github.com/huggingface/transformers/pull/23289
| 1,705,491,318 |
PR_kwDOCUB6oc5QRo6W
| 23,289 |
Update transformers_agents.mdx
|
{
"login": "mishig25",
"id": 11827707,
"node_id": "MDQ6VXNlcjExODI3NzA3",
"avatar_url": "https://avatars.githubusercontent.com/u/11827707?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mishig25",
"html_url": "https://github.com/mishig25",
"followers_url": "https://api.github.com/users/mishig25/followers",
"following_url": "https://api.github.com/users/mishig25/following{/other_user}",
"gists_url": "https://api.github.com/users/mishig25/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mishig25/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mishig25/subscriptions",
"organizations_url": "https://api.github.com/users/mishig25/orgs",
"repos_url": "https://api.github.com/users/mishig25/repos",
"events_url": "https://api.github.com/users/mishig25/events{/privacy}",
"received_events_url": "https://api.github.com/users/mishig25/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,683 | 1,683 | 1,683 |
CONTRIBUTOR
| null |
Turn `huggingface-tools` into a link: [`huggingface-tools`](https://huggingface.co/huggingface-tools)
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23289/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23289/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23289",
"html_url": "https://github.com/huggingface/transformers/pull/23289",
"diff_url": "https://github.com/huggingface/transformers/pull/23289.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23289.patch",
"merged_at": 1683809642000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23288
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23288/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23288/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23288/events
|
https://github.com/huggingface/transformers/pull/23288
| 1,705,431,729 |
PR_kwDOCUB6oc5QRcEA
| 23,288 |
Temporarily increase tol for PT-FLAX whisper tests
|
{
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Updated issue #23258 to reference this PR too",
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,683 | 1,683 | 1,683 |
COLLABORATOR
| null |
# What does this PR do?
Flax Whisper equivalence tests have also started to fail in a flaky manner, with a small increase in the difference between the PT and Flax models, e.g. in [this run](https://app.circleci.com/pipelines/github/huggingface/transformers/64230/workflows/e2d42ca4-f367-4a85-9054-a0ea99e49849/jobs/794534).
Flax equivalent of: #23257
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23288/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23288/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23288",
"html_url": "https://github.com/huggingface/transformers/pull/23288",
"diff_url": "https://github.com/huggingface/transformers/pull/23288.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23288.patch",
"merged_at": 1683801799000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23287
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23287/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23287/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23287/events
|
https://github.com/huggingface/transformers/pull/23287
| 1,705,391,677 |
PR_kwDOCUB6oc5QRTg2
| 23,287 |
Added missing " in CHAT_PROMPT_TEMPLATE
|
{
"login": "galatolofederico",
"id": 15450580,
"node_id": "MDQ6VXNlcjE1NDUwNTgw",
"avatar_url": "https://avatars.githubusercontent.com/u/15450580?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/galatolofederico",
"html_url": "https://github.com/galatolofederico",
"followers_url": "https://api.github.com/users/galatolofederico/followers",
"following_url": "https://api.github.com/users/galatolofederico/following{/other_user}",
"gists_url": "https://api.github.com/users/galatolofederico/gists{/gist_id}",
"starred_url": "https://api.github.com/users/galatolofederico/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/galatolofederico/subscriptions",
"organizations_url": "https://api.github.com/users/galatolofederico/orgs",
"repos_url": "https://api.github.com/users/galatolofederico/repos",
"events_url": "https://api.github.com/users/galatolofederico/events{/privacy}",
"received_events_url": "https://api.github.com/users/galatolofederico/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Merging as failing test is independent of this PR and has been resolved on main. "
] | 1,683 | 1,683 | 1,683 |
CONTRIBUTOR
| null |
# What does this PR do?
It adds a missing `"` in `CHAT_PROMPT_TEMPLATE`
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23287/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23287/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23287",
"html_url": "https://github.com/huggingface/transformers/pull/23287",
"diff_url": "https://github.com/huggingface/transformers/pull/23287.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23287.patch",
"merged_at": 1683801933000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23286
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23286/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23286/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23286/events
|
https://github.com/huggingface/transformers/issues/23286
| 1,705,368,635 |
I_kwDOCUB6oc5lpdw7
| 23,286 |
`offset_alibi` is not used
|
{
"login": "JaheimLee",
"id": 18062264,
"node_id": "MDQ6VXNlcjE4MDYyMjY0",
"avatar_url": "https://avatars.githubusercontent.com/u/18062264?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JaheimLee",
"html_url": "https://github.com/JaheimLee",
"followers_url": "https://api.github.com/users/JaheimLee/followers",
"following_url": "https://api.github.com/users/JaheimLee/following{/other_user}",
"gists_url": "https://api.github.com/users/JaheimLee/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JaheimLee/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JaheimLee/subscriptions",
"organizations_url": "https://api.github.com/users/JaheimLee/orgs",
"repos_url": "https://api.github.com/users/JaheimLee/repos",
"events_url": "https://api.github.com/users/JaheimLee/events{/privacy}",
"received_events_url": "https://api.github.com/users/JaheimLee/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @JaheimLee, thanks for raising an issue! \r\n\r\nThis is a question best placed in our [forums](https://discuss.huggingface.co/), as we try to reserve the github issues for feature requests and bug reports. Another good place to ask would be opening a discussion [on the model page on the hub](https://huggingface.co/bigscience/bloom-7b1/discussions), as it directly relates to the configuration file there. ",
"> Hi @JaheimLee, thanks for raising an issue!\r\n> \r\n> This is a question best placed in our [forums](https://discuss.huggingface.co/), as we try to reserve the github issues for feature requests and bug reports. Another good place to ask would be opening a discussion [on the model page on the hub](https://huggingface.co/bigscience/bloom-7b1/discussions), as it directly relates to the configuration file there.\r\n\r\nOh, sorry. I created an issue on the hub. I close this one now."
] | 1,683 | 1,683 | 1,683 |
NONE
| null |
What's the meaning of `offset_alibi`? It's not used in Bloom's source code. Does it make sense?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23286/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23286/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23285
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23285/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23285/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23285/events
|
https://github.com/huggingface/transformers/issues/23285
| 1,705,342,567 |
I_kwDOCUB6oc5lpXZn
| 23,285 |
AutoModelForCausalLM of Bloom not releasing GPU memory after each inference batch
|
{
"login": "chuckhope",
"id": 27415219,
"node_id": "MDQ6VXNlcjI3NDE1MjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/27415219?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chuckhope",
"html_url": "https://github.com/chuckhope",
"followers_url": "https://api.github.com/users/chuckhope/followers",
"following_url": "https://api.github.com/users/chuckhope/following{/other_user}",
"gists_url": "https://api.github.com/users/chuckhope/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chuckhope/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chuckhope/subscriptions",
"organizations_url": "https://api.github.com/users/chuckhope/orgs",
"repos_url": "https://api.github.com/users/chuckhope/repos",
"events_url": "https://api.github.com/users/chuckhope/events{/privacy}",
"received_events_url": "https://api.github.com/users/chuckhope/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @chuckhope, thanks for raising this issue. \r\n\r\nSo that we can best help, could you provide the following information: \r\n* a minimal code snippet to reproduce the error\r\n* Information about the hardware - are you running on a single GPU? \r\n* Information about the model - are you using a single checkpoint? If so, which one? If not, is this observed for all checkpoints? ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,683 | 1,687 | 1,687 |
NONE
| null |
### System Info
Hi there, I have set torch.no_grad() and torch.cuda.empty_cache(), but the GPU still encounters out-of-memory (OOM) errors after a few inferences. My torch version is 1.13.1, deepspeed version is 0.9, and transformers version is 4.28.
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I have set torch.no_grad() and torch.cuda.empty_cache() for AutoModelForCausalLM
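Since the thread does not include the actual script, here is a minimal sketch of batched generation that avoids accumulating GPU memory between batches; the `bigscience/bloom-560m` checkpoint, prompts, and precision are placeholders for illustration, not the reporter's setup.

```python
# Minimal sketch (placeholder checkpoint and prompts, not the original script).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "bigscience/bloom-560m"  # placeholder; any causal LM checkpoint works
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16).to("cuda")
model.eval()

prompts = ["Hello, my name is", "The capital of France is"]
for prompt in prompts:
    inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
    with torch.no_grad():  # no gradient buffers are kept for these forward passes
        out = model.generate(**inputs, max_new_tokens=32)
    print(tokenizer.decode(out[0], skip_special_tokens=True))
    del inputs, out             # drop references so the caching allocator can reuse the blocks
    torch.cuda.empty_cache()    # optional: hand cached blocks back to the driver
```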
### Expected behavior
The GPU memory should be released automatically after each inference batch.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23285/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23285/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23284
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23284/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23284/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23284/events
|
https://github.com/huggingface/transformers/pull/23284
| 1,705,133,150 |
PR_kwDOCUB6oc5QQb-Y
| 23,284 |
[WIP]/[DRAFT] Add ImageBind model
|
{
"login": "shehanmunasinghe",
"id": 5057255,
"node_id": "MDQ6VXNlcjUwNTcyNTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/5057255?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shehanmunasinghe",
"html_url": "https://github.com/shehanmunasinghe",
"followers_url": "https://api.github.com/users/shehanmunasinghe/followers",
"following_url": "https://api.github.com/users/shehanmunasinghe/following{/other_user}",
"gists_url": "https://api.github.com/users/shehanmunasinghe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shehanmunasinghe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shehanmunasinghe/subscriptions",
"organizations_url": "https://api.github.com/users/shehanmunasinghe/orgs",
"repos_url": "https://api.github.com/users/shehanmunasinghe/repos",
"events_url": "https://api.github.com/users/shehanmunasinghe/events{/privacy}",
"received_events_url": "https://api.github.com/users/shehanmunasinghe/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"@shehanmunasinghe Awesome work with all these model PRs 🤗 let me know when it's ready for review! ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,683 | 1,687 | 1,687 |
CONTRIBUTOR
| null |
# What does this PR do?
Adding ImageBind model (DRAFT/ WORK IN PROGRESS - not ready for review yet)
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
https://github.com/huggingface/transformers/issues/23240
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. https://github.com/huggingface/transformers/issues/23240
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@amyeroberts @ArthurZucker
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23284/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23284/timeline
| null | true |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23284",
"html_url": "https://github.com/huggingface/transformers/pull/23284",
"diff_url": "https://github.com/huggingface/transformers/pull/23284.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23284.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/23282
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23282/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23282/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23282/events
|
https://github.com/huggingface/transformers/issues/23282
| 1,704,951,330 |
I_kwDOCUB6oc5ln34i
| 23,282 |
T5-Flan Resuming Int-8 / LoRA / Deepspeed Checkpoint
|
{
"login": "afogarty85",
"id": 49048309,
"node_id": "MDQ6VXNlcjQ5MDQ4MzA5",
"avatar_url": "https://avatars.githubusercontent.com/u/49048309?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/afogarty85",
"html_url": "https://github.com/afogarty85",
"followers_url": "https://api.github.com/users/afogarty85/followers",
"following_url": "https://api.github.com/users/afogarty85/following{/other_user}",
"gists_url": "https://api.github.com/users/afogarty85/gists{/gist_id}",
"starred_url": "https://api.github.com/users/afogarty85/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/afogarty85/subscriptions",
"organizations_url": "https://api.github.com/users/afogarty85/orgs",
"repos_url": "https://api.github.com/users/afogarty85/repos",
"events_url": "https://api.github.com/users/afogarty85/events{/privacy}",
"received_events_url": "https://api.github.com/users/afogarty85/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Totally didnt try the trainer, but it works!\r\n`trainer.train('checkpoint-xxxx')`"
] | 1,683 | 1,683 | 1,683 |
NONE
| null |
### System Info
transformers 4.29
accelerate 0.19
peft 0.3
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
My training ended unexpectedly and I want to resume my T5-Flan training from a checkpoint. Inside my checkpoint directory I have:
```
global_step25000
latest
pytorch_model.bin
rng_state.pth
trainer_state.json
training_args.bin
zero_to_fp32.py
```
I am unable to load the checkpoint in the following ways:
```
# try 1
model = AutoModelForSeq2SeqLM.from_pretrained("/checkpoint-25000",
load_in_8bit=True,
device_map='auto',
)
# ValueError: weight is on the meta device, we need a `value` to put in on 0.
# try 2
model = AutoModelForSeq2SeqLM.from_pretrained("/checkpoint-25000",
)
# ValueError: weight is on the meta device, we need a `value` to put in on 0.
# try 3
config = AutoConfig.from_pretrained("google/flan-t5-large")
with init_empty_weights():
model = AutoModelForSeq2SeqLM.from_config(config)
model.tie_weights()
device_map = infer_auto_device_map(model)
model = load_checkpoint_and_dispatch(model, "checkpoint-25000/pytorch_model.bin", device_map=device_map)
# AttributeError: 'T5ForConditionalGeneration' object has no attribute 'model'
```
### Expected behavior
The checkpoint to attach so I could resume training.
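The resolution reported in the comments was to resume through the `Trainer` rather than reloading the checkpoint manually. A minimal sketch, assuming `trainer` is the same already-configured `Trainer`/`Seq2SeqTrainer` instance (same model, args, and datasets) that produced the checkpoint:

```python
# Sketch only: `trainer` is assumed to be the Trainer instance that wrote the checkpoint.
trainer.train(resume_from_checkpoint="checkpoint-25000")
# The positional form trainer.train("checkpoint-25000") also works, since the first
# argument of Trainer.train() is resume_from_checkpoint.
```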
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23282/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23282/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23280
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23280/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23280/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23280/events
|
https://github.com/huggingface/transformers/issues/23280
| 1,704,899,616 |
I_kwDOCUB6oc5lnrQg
| 23,280 |
[bloomz] attn_mask returns bool, but DeepSpeed softmax input needs int
|
{
"login": "shenzhuo",
"id": 9036343,
"node_id": "MDQ6VXNlcjkwMzYzNDM=",
"avatar_url": "https://avatars.githubusercontent.com/u/9036343?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shenzhuo",
"html_url": "https://github.com/shenzhuo",
"followers_url": "https://api.github.com/users/shenzhuo/followers",
"following_url": "https://api.github.com/users/shenzhuo/following{/other_user}",
"gists_url": "https://api.github.com/users/shenzhuo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shenzhuo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shenzhuo/subscriptions",
"organizations_url": "https://api.github.com/users/shenzhuo/orgs",
"repos_url": "https://api.github.com/users/shenzhuo/repos",
"events_url": "https://api.github.com/users/shenzhuo/events{/privacy}",
"received_events_url": "https://api.github.com/users/shenzhuo/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @shenzhuo, \r\n\r\nThe linked PR was closed and the commits not added - the PR introducing the change was #18344. From the PR description, it seems converting `causal_mask` to `bool` was intentional and not a side-effect. I'll let @thomasw21 explain why this change was made :) ",
"Yeah so there's no reason to pass `attention_mask` to be int64 since basically it stored boolean values. I think the reason why this is breaking is because of `deepspeed`, the forward function is overriden by custom operations on `deepspeed` side: https://github.com/microsoft/DeepSpeed/blame/194053bd58947ac6a45363ba780c9dfb127d3064/deepspeed/ops/transformer/inference/ds_attention.py#L168\r\n\r\nI would suggest to fix this in DS side, ie probable changing `(1 - input_mask).to(target_dtype) * minus_inf)` to something like `(~input_mask).to(target_type) * minus_inf`",
"> Yeah so there's no reason to pass `attention_mask` to be int64 since basically it stored boolean values. I think the reason why this is breaking is because of `deepspeed`, the forward function is overriden by custom operations on `deepspeed` side: https://github.com/microsoft/DeepSpeed/blame/194053bd58947ac6a45363ba780c9dfb127d3064/deepspeed/ops/transformer/inference/ds_attention.py#L168\r\n> \r\n> I would suggest to fix this in DS side, ie probable changing `(1 - input_mask).to(target_dtype) * minus_inf)` to something like `(~input_mask).to(target_type) * minus_inf`\r\n\r\nI think the DeepSpeed uses `(1 - input_mask).to(target_dtype) * minus_inf)` because their framework is tested based on the opt model. At the same time, many modeling_x.py files in transformers return int64\r\n",
"Hum the specific module is called `BloomSelfAttention` https://github.com/microsoft/DeepSpeed/blob/194053bd58947ac6a45363ba780c9dfb127d3064/deepspeed/ops/transformer/inference/ds_attention.py#L171",
"> Hum the specific module is called `BloomSelfAttention` https://github.com/microsoft/DeepSpeed/blob/194053bd58947ac6a45363ba780c9dfb127d3064/deepspeed/ops/transformer/inference/ds_attention.py#L171\r\n\r\nIt's a bug. I think...",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,683 | 1,687 | 1,687 |
NONE
| null |
### System Info
- `transformers` version: 4.27.1
- Platform: Linux-4.18.0-240.el8.x86_64-x86_64-with-glibc2.2.5
- Python version: 3.8.12
- Huggingface_hub version: 0.13.3
- PyTorch version (GPU?): 1.11.0+cu102 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: true
- Using distributed or parallel set-up in script?: true
### Who can help?
@thomasw21 @patrickvonplaten @sgugger
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```bash
# using deepspeedChat example but change the opt to bloomz-1b7
# deepspeedChat github: https://github.com/microsoft/DeepSpeedExamples/blob/master/applications/DeepSpeed-Chat/README.md
#!/bin/bash
# Copyright (c) Microsoft Corporation.
# SPDX-License-Identifier: Apache-2.0
# DeepSpeed Team
ACTOR_MODEL_PATH="bigscience/bloomz-1b7"
CRITIC_MODEL_PATH="bigscience/bloomz-1b7"
ACTOR_ZERO_STAGE=${3:-2}
CRITIC_ZERO_STAGE=${4:-2}
OUTPUT=${5:-'./output'}
NUM_GPUS=${6:-8}
NUM_NODES=${7:-1}
mkdir -p $OUTPUT
Num_Padding_at_Beginning=0 # this is model related
Actor_Lr=9.65e-6
Critic_Lr=5e-6
hostname='localhost'
export NCCL_SOCKET_IFNAME=eth
export NCCL_DEBUG=INFO
export TOKENIZERS_PARALLELISM=false
deepspeed --master_port 25303 --master_addr ${hostname} --num_gpus ${NUM_GPUS} --num_nodes ${NUM_NODES} --hostfile 'deepspeed_hostfile' main.py \
--data_path Dahoas/rm-static \
--data_split 2,4,4 \
--actor_model_name_or_path $ACTOR_MODEL_PATH \
--critic_model_name_or_path $CRITIC_MODEL_PATH \
--num_padding_at_beginning 1 \
--per_device_train_batch_size 1 \
--per_device_mini_train_batch_size 1 \
--generation_batch_numbers 1 \
--ppo_epochs 1 \
--max_answer_seq_len 256 \
--max_prompt_seq_len 256 \
--actor_learning_rate ${Actor_Lr} \
--critic_learning_rate ${Critic_Lr} \
--disable_actor_dropout \
--num_train_epochs 1 \
--lr_scheduler_type cosine \
--gradient_accumulation_steps 1 \
--num_warmup_steps 100 \
--deepspeed --seed 1234 \
--enable_hybrid_engine \
--inference_tp_size ${NUM_NODES} \
--tp_gather_partition_size ${NUM_GPUS} \
--actor_zero_stage $ACTOR_ZERO_STAGE \
--critic_zero_stage $CRITIC_ZERO_STAGE \
--actor_gradient_checkpointing \
--critic_gradient_checkpointing \
--output_dir $OUTPUT |&
tee $OUTPUT/training.log
```
the error is:
```
Traceback (most recent call last):
File "DeepSpeedExamples/applications/DeepSpeed-Chat/training/step3_rlhf_finetuning/main.py", line 562, in <module>
main()
File "DeepSpeedExamples/applications/DeepSpeed-Chat/training/step3_rlhf_finetuning/main.py", line 471, in main
out = trainer.generate_experience(prompts)
File "DeepSpeedExamples/applications/DeepSpeed-Chat/training/step3_rlhf_finetuning/ppo_trainer.py", line 97, in generate_experience
seq = self._generate_sequence(prompts)
File "DeepSpeedExamples/applications/DeepSpeed-Chat/training/step3_rlhf_finetuning/ppo_trainer.py", line 73, in _generate_sequence
seq = self.actor_model.module.generate(prompts,
File "/dcv/lib/python3.9/site-packages/deepspeed/runtime/hybrid_engine.py", line 245, in generate
generate_ret_vals = self._generate(*inputs, **kwargs)
File "/dcv/lib/python3.9/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "/dcv/lib/python3.9/site-packages/transformers/generation/utils.py", line 1437, in generate
return self.greedy_search(
File "/dcv/lib/python3.9/site-packages/transformers/generation/utils.py", line 2248, in greedy_search
outputs = self(
File "/dcv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1208, in _call_impl
result = forward_call(*input, **kwargs)
File "/dcv/lib/python3.9/site-packages/transformers/models/bloom/modeling_bloom.py", line 913, in forward
transformer_outputs = self.transformer(
File "/dcv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1208, in _call_impl
result = forward_call(*input, **kwargs)
File "/dcv/lib/python3.9/site-packages/transformers/models/bloom/modeling_bloom.py", line 786, in forward
outputs = block(
File "/dcv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1208, in _call_impl
result = forward_call(*input, **kwargs)
File "/dcv/lib/python3.9/site-packages/deepspeed/model_implementations/transformers/ds_transformer.py", line 147, in forward
self.attention(input,
File "/dcv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1190, in _call_impl
return forward_call(*input, **kwargs)
File "/dcv/lib/python3.9/site-packages/deepspeed/ops/transformer/inference/ds_attention.py", line 160, in forward
context_layer, key_layer, value_layer = self.compute_attention(qkv_out=qkv_out,
File "/dcv/lib/python3.9/site-packages/deepspeed/ops/transformer/inference/ds_attention.py", line 253, in compute_attention
attn_mask=((1 - input_mask).half() * minus_inf),
File "/dcv/lib/python3.9/site-packages/torch/_tensor.py", line 39, in wrapped
return f(*args, **kwargs)
File "/dcv/lib/python3.9/site-packages/torch/_tensor.py", line 833, in __rsub__
return _C._VariableFunctions.rsub(self, other)
RuntimeError: Subtraction, the `-` operator, with a bool tensor is not supported. If you are trying to invert a mask, use the `~` or `logical_not()` operator instead.
```
I want to know why this pull request: https://github.com/huggingface/transformers/pull/18141/files changed the following code:
`expanded_attn_mask if combined_attention_mask is None else expanded_attn_mask + combined_attention_mask`
to:
`expanded_attn_mask if combined_attention_mask is None else expanded_attn_mask | combined_attention_mask`
Because of this change, `causal_mask` is a `torch.bool` tensor rather than `torch.int64`.
### Expected behavior
`causal_mask` should be a `torch.int64` tensor, not `torch.bool`.
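A minimal, standalone illustration of the operator difference discussed above (plain PyTorch for demonstration, not a patch to either library; the mask values are made up):

```python
import torch

minus_inf = -10000.0
input_mask = torch.tensor([[True, True, False]])  # boolean attention mask (True = keep)

# Fails with the RuntimeError quoted in the traceback: `-` is not defined for bool tensors.
# additive_mask = (1 - input_mask).half() * minus_inf

# Works for both bool and int masks: invert with ~ / logical_not(), then cast.
additive_mask = (~input_mask.bool()).to(torch.float16) * minus_inf
print(additive_mask)  # [[-0., -0., -10000.]] — the masked position gets the large negative bias
```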
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23280/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23280/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23279
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23279/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23279/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23279/events
|
https://github.com/huggingface/transformers/issues/23279
| 1,704,690,915 |
I_kwDOCUB6oc5lm4Tj
| 23,279 |
xlm-roberta-xlarge doesn't exist
|
{
"login": "Jack000",
"id": 2636509,
"node_id": "MDQ6VXNlcjI2MzY1MDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/2636509?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Jack000",
"html_url": "https://github.com/Jack000",
"followers_url": "https://api.github.com/users/Jack000/followers",
"following_url": "https://api.github.com/users/Jack000/following{/other_user}",
"gists_url": "https://api.github.com/users/Jack000/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Jack000/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Jack000/subscriptions",
"organizations_url": "https://api.github.com/users/Jack000/orgs",
"repos_url": "https://api.github.com/users/Jack000/repos",
"events_url": "https://api.github.com/users/Jack000/events{/privacy}",
"received_events_url": "https://api.github.com/users/Jack000/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @Jack000 , thanks for reporting this issue. \r\n\r\nThis is indeed odd - the checkpoint being reference doesn't seem to exist. In the original PR adding the model to the library - it seems the [checkpoints were added under the facebook org](https://github.com/huggingface/transformers/pull/13727#pullrequestreview-866391378). It's OK if the checkpoint used in the example doesn't have the weights for the specific head e.g. for [multiple choice for bert we use `bert-base-uncased`](https://huggingface.co/docs/transformers/v4.29.1/en/model_doc/bert#transformers.BertForMultipleChoice). Would you like to open a PR to update the checkpoint? ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,683 | 1,687 | 1,687 |
NONE
| null |
### System Info
from https://huggingface.co/docs/transformers/model_doc/xlm-roberta-xl
`model = XLMRobertaXLForSequenceClassification.from_pretrained("xlm-roberta-xlarge")`
I can't find `xlm-roberta-xlarge` on https://huggingface.co/models.
There is `facebook/xlm-roberta-xl`, but that is the raw masked-LM checkpoint and doesn't seem to work with `XLMRobertaXLForSequenceClassification`: it's missing the classifier head.
`model = XLMRobertaForSequenceClassification.from_pretrained("xlm-roberta-large")`
works just fine for me.
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
from transformers import AutoTokenizer, XLMRobertaXLForSequenceClassification
model = XLMRobertaXLForSequenceClassification.from_pretrained("xlm-roberta-xlarge")
### Expected behavior
Either `xlm-roberta-xlarge` should be available on the Hub, or the docs should be amended to point to an existing checkpoint.
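If the docs are meant to work like other `...ForSequenceClassification` examples (backbone weights from the Hub, freshly initialized head), a sketch using the published checkpoint might look like the following; the warning about a newly initialized classifier head is expected, and the head needs fine-tuning before use:

```python
from transformers import AutoTokenizer, XLMRobertaXLForSequenceClassification

# Backbone weights load from the published checkpoint; the classification head is
# randomly initialized (a warning is emitted) and must be fine-tuned before use.
tokenizer = AutoTokenizer.from_pretrained("facebook/xlm-roberta-xl")
model = XLMRobertaXLForSequenceClassification.from_pretrained(
    "facebook/xlm-roberta-xl", num_labels=2
)
```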
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23279/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23279/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23278
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23278/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23278/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23278/events
|
https://github.com/huggingface/transformers/issues/23278
| 1,704,654,106 |
I_kwDOCUB6oc5lmvUa
| 23,278 |
AttributeError: module transformers.tools has no attribute DocumentQuestionAnsweringTool keeps appearing in transformers version 4.29.0
|
{
"login": "HappyData1",
"id": 133166150,
"node_id": "U_kgDOB-_0Rg",
"avatar_url": "https://avatars.githubusercontent.com/u/133166150?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HappyData1",
"html_url": "https://github.com/HappyData1",
"followers_url": "https://api.github.com/users/HappyData1/followers",
"following_url": "https://api.github.com/users/HappyData1/following{/other_user}",
"gists_url": "https://api.github.com/users/HappyData1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/HappyData1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HappyData1/subscriptions",
"organizations_url": "https://api.github.com/users/HappyData1/orgs",
"repos_url": "https://api.github.com/users/HappyData1/repos",
"events_url": "https://api.github.com/users/HappyData1/events{/privacy}",
"received_events_url": "https://api.github.com/users/HappyData1/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"In my case, re-installation solved the problem.\r\n\r\nI got an error:\r\n> Failed to import transformers.tools.agents ...`\r\n\r\nTry this:\r\n`$ pip install huggingface_hub>=0.14.1 git+https://github.com/huggingface/[email protected] diffusers accelerate datasets torch soundfile sentencepiece opencv-python openai`\r\n\r\nAnd restart the connected ipykernel.",
"I had the same issue and installed these packages one by one, it seems that the \"torch\" lib missing is what causes this exact error.\r\n\r\n> In my case, re-installation solved the problem.\r\n> \r\n> I got an error:\r\n> \r\n> > Failed to import transformers.tools.agents ...`\r\n> \r\n> Try this: `$ pip install huggingface_hub>=0.14.1 git+https://github.com/huggingface/[email protected] diffusers accelerate datasets torch soundfile sentencepiece opencv-python openai`\r\n> \r\n> And restart the connected ipykernel.\r\n\r\n",
"@yerimJu @vitorrm Thank you for your answers, it looks like it works with the one-by-one reinstallation of the packages!"
] | 1,683 | 1,683 | 1,683 |
NONE
| null |
### System Info
- `transformers` version: 4.29.0
- Platform: macOS-13.3.1-arm64-arm-64bit
- Python version: 3.10.11
- Huggingface_hub version: 0.14.1
- Safetensors version: not installed
- PyTorch version (GPU?): 2.0.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Hi everyone, I stumbled across the following error while trying the new Transformers Agent:
```python
from huggingface_hub import login
login('my_token')
```
...
Token is valid.
...
Login successful.
```python
from transformers import HfAgent
agent = HfAgent("https://api-inference.huggingface.co/models/bigcode/starcoder")
```
As soon as I try to instantiate the agent, the following error appears:
```python
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/happydata/miniforge3/envs/huggingface-env/lib/python3.10/site-packages/transformers/tools/agents.py", line 469, in __init__
super().__init__(
File "/Users/happydata/miniforge3/envs/huggingface-env/lib/python3.10/site-packages/transformers/tools/agents.py", line 199, in __init__
_setup_default_tools()
File "/Users/happydata/miniforge3/envs/huggingface-env/lib/python3.10/site-packages/transformers/tools/agents.py", line 97, in _setup_default_tools
tool_class = getattr(tools_module, tool_class_name)
File "/Users/happydata/miniforge3/envs/huggingface-env/lib/python3.10/site-packages/transformers/utils/import_utils.py", line 1165, in __getattr__
raise AttributeError(f"module {self.__name__} has no attribute {name}")
AttributeError: module transformers.tools has no attribute DocumentQuestionAnsweringTool
```
In my case it didn't matter whether I tried BigCode, OpenAssistant or the OpenAiAgent.
### Expected behavior
I tried to follow the quickstart guide at https://huggingface.co/docs/transformers/transformers_agents, but the AttributeError from the transformers.tools module keeps appearing.
Thank you for your help in advance!
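For reference, the fix reported in the comments was installing the missing runtime dependencies and restarting the kernel; after that, the original snippet should run without the `AttributeError`. A sketch (package list taken from the comments above; exact versions may vary):

```python
# After installing the runtime dependencies listed in the comments
# (torch, diffusers, accelerate, datasets, soundfile, sentencepiece,
# opencv-python, openai) and restarting the kernel, the default tools
# can be imported and the agent instantiates without the AttributeError.
from transformers import HfAgent

agent = HfAgent("https://api-inference.huggingface.co/models/bigcode/starcoder")
```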
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23278/reactions",
"total_count": 6,
"+1": 6,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23278/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23277
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23277/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23277/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23277/events
|
https://github.com/huggingface/transformers/pull/23277
| 1,704,603,782 |
PR_kwDOCUB6oc5QOp-f
| 23,277 |
Fix doctest files fetch issue
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"As said offline I don't think we need to revert urgently, we can just ignore the red check on main.",
"@sgugger FYI The doctest PR also has some problem running on our GitHub Actions runner. See error below.\r\nI will take a look, but this PR could be merged (without fixing the following issue) once you think the changes are good.\r\n\r\n\r\n```bash\r\n_____ ERROR collecting src/transformers/generation/configuration_utils.py ______\r\nimport file mismatch:\r\nimported module 'transformers.generation.configuration_utils' has this __file__ attribute:\r\n /transformers/src/transformers/generation/configuration_utils.py\r\nwhich is not the same as the test file we want to collect:\r\n /__w/transformers/transformers/src/transformers/generation/configuration_utils.py\r\nHINT: remove __pycache__ / .pyc files and/or use a unique basename for your test file modules\r\n```",
"> @sgugger FYI The doctest PR also has some problem running on our GitHub Actions runner. See error below. I will take a look, but this PR could be merged (without fixing the following issue) once you think the changes are good.\r\n> \r\n> ```shell\r\n> _____ ERROR collecting src/transformers/generation/configuration_utils.py ______\r\n> import file mismatch:\r\n> imported module 'transformers.generation.configuration_utils' has this __file__ attribute:\r\n> /transformers/src/transformers/generation/configuration_utils.py\r\n> which is not the same as the test file we want to collect:\r\n> /__w/transformers/transformers/src/transformers/generation/configuration_utils.py\r\n> HINT: remove __pycache__ / .pyc files and/or use a unique basename for your test file modules\r\n> ```\r\n\r\n@sgugger A short fix for this issue is given in [the last commit](https://github.com/huggingface/transformers/pull/23277/commits/bada5b3d616bff32f7440408428a4b9ed13c503b).\r\n\r\nThe reason is the file `conftest.py` has this line `from transformers.testing_utils import HfDoctestModule, HfDocTestParser` added in #23271. However, the `transformers` is installed during docker image build, which is different from the one when the CI is run.\r\n\r\nThis change should be applied to other workflow file, but it's rare that we have such imports in the codebase. I will do it in a separate PR.",
"Thanks for all the explanation!",
"Going to merge as the failing tests are irrelevant and I have tried to re-run for a few times."
] | 1,683 | 1,683 | 1,683 |
COLLABORATOR
| null |
Reverts huggingface/transformers#23271
Embarrassingly and unfortunately, the new job `tests_pr_documentation_tests` fails at the step `Get files to test` when the job is run on the `main` branch.
https://app.circleci.com/pipelines/github/huggingface/transformers/64235/workflows/54a99003-258e-4c2a-8366-b4461b3ec33f/jobs/794628/parallel-runs/0/steps/0-113
I will have to take a look - the log is not informative.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23277/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23277/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23277",
"html_url": "https://github.com/huggingface/transformers/pull/23277",
"diff_url": "https://github.com/huggingface/transformers/pull/23277.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23277.patch",
"merged_at": 1683818047000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23276
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23276/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23276/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23276/events
|
https://github.com/huggingface/transformers/pull/23276
| 1,704,433,186 |
PR_kwDOCUB6oc5QOGOM
| 23,276 |
`transformers-cli` -> `huggingface-cli`
|
{
"login": "AlpinDale",
"id": 52078762,
"node_id": "MDQ6VXNlcjUyMDc4NzYy",
"avatar_url": "https://avatars.githubusercontent.com/u/52078762?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AlpinDale",
"html_url": "https://github.com/AlpinDale",
"followers_url": "https://api.github.com/users/AlpinDale/followers",
"following_url": "https://api.github.com/users/AlpinDale/following{/other_user}",
"gists_url": "https://api.github.com/users/AlpinDale/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AlpinDale/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AlpinDale/subscriptions",
"organizations_url": "https://api.github.com/users/AlpinDale/orgs",
"repos_url": "https://api.github.com/users/AlpinDale/repos",
"events_url": "https://api.github.com/users/AlpinDale/events{/privacy}",
"received_events_url": "https://api.github.com/users/AlpinDale/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"lgtm!"
] | 1,683 | 1,683 | 1,683 |
CONTRIBUTOR
| null |
Leftover from the last PR - `transformers-cli` should be `huggingface-cli` now.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23276/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23276/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23276",
"html_url": "https://github.com/huggingface/transformers/pull/23276",
"diff_url": "https://github.com/huggingface/transformers/pull/23276.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23276.patch",
"merged_at": 1683799933000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23275
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23275/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23275/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23275/events
|
https://github.com/huggingface/transformers/pull/23275
| 1,704,422,113 |
PR_kwDOCUB6oc5QOD4F
| 23,275 |
Remove misplaced test file
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[] | 1,683 | 1,683 | 1,683 |
COLLABORATOR
| null |
# What does this PR do?
I just stumbled onto this test file living in `src/transformers/data` which is never executed. Upon verification with @gante, nothing inside of it is necessary as it predates the logit processors, so we can safely remove it.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23275/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23275/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23275",
"html_url": "https://github.com/huggingface/transformers/pull/23275",
"diff_url": "https://github.com/huggingface/transformers/pull/23275.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23275.patch",
"merged_at": 1683745806000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23274
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23274/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23274/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23274/events
|
https://github.com/huggingface/transformers/pull/23274
| 1,704,415,945 |
PR_kwDOCUB6oc5QOCjf
| 23,274 |
Fix link displayed for custom tools
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[] | 1,683 | 1,683 | 1,683 |
COLLABORATOR
| null |
# What does this PR do?
This fixes the link displayed when a custom tool downloads code files from the Hub.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23274/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23274/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23274",
"html_url": "https://github.com/huggingface/transformers/pull/23274",
"diff_url": "https://github.com/huggingface/transformers/pull/23274.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23274.patch",
"merged_at": 1683745797000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23273
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23273/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23273/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23273/events
|
https://github.com/huggingface/transformers/pull/23273
| 1,704,365,770 |
PR_kwDOCUB6oc5QN3vD
| 23,273 |
replaced assert with raise ValueError for t5, switch_transformers, pix2struct, mt5, longt5, gptsan_japanese.
|
{
"login": "susnato",
"id": 56069179,
"node_id": "MDQ6VXNlcjU2MDY5MTc5",
"avatar_url": "https://avatars.githubusercontent.com/u/56069179?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/susnato",
"html_url": "https://github.com/susnato",
"followers_url": "https://api.github.com/users/susnato/followers",
"following_url": "https://api.github.com/users/susnato/following{/other_user}",
"gists_url": "https://api.github.com/users/susnato/gists{/gist_id}",
"starred_url": "https://api.github.com/users/susnato/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/susnato/subscriptions",
"organizations_url": "https://api.github.com/users/susnato/orgs",
"repos_url": "https://api.github.com/users/susnato/repos",
"events_url": "https://api.github.com/users/susnato/events{/privacy}",
"received_events_url": "https://api.github.com/users/susnato/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Hi @sanchit-gandhi just pushed the change that you requested.",
"Awesome thanks - requesting a final review!",
"Hi @amyeroberts, just pushed the changes you requested! \r\nLet me know if any more changes are needed or not.",
"Merging as errors are unrelated to this PR and have been resolved on main"
] | 1,683 | 1,683 | 1,683 |
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
As suggested by @sanchit-gandhi [here](https://github.com/huggingface/transformers/pull/21785#discussion_r1184787328), this PR replaces `assert` statements with `raise ValueError` for the following models: t5, switch_transformers, pix2struct, mt5, longt5, and gptsan_japanese.
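A minimal, self-contained illustration of the pattern (not the exact diff in this PR):

```python
# Illustrative only — not the exact change made in this PR.
def check_head_count(heads, expected_n_heads):
    # Before: assert len(heads) == expected_n_heads
    # After: an explicit exception that survives `python -O` and carries a message.
    if len(heads) != expected_n_heads:
        raise ValueError(
            f"Expected {expected_n_heads} attention heads, but got {len(heads)}."
        )
```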
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
@sanchit-gandhi
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23273/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23273/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23273",
"html_url": "https://github.com/huggingface/transformers/pull/23273",
"diff_url": "https://github.com/huggingface/transformers/pull/23273.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23273.patch",
"merged_at": 1683916190000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23271
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23271/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23271/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23271/events
|
https://github.com/huggingface/transformers/pull/23271
| 1,704,289,594 |
PR_kwDOCUB6oc5QNnmq
| 23,271 |
Bring back the PR `Refactor doctests + add CI` to `main`
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"> Can you put the new testing utils (from `doctest_utils`) in `testing_utils`, so it all goes in the same place?\r\n\r\nIn this case, am I allowed to put the import of `pytest` and `_pytest` on the top level of `testing_utils`? I am asking because I see in that file there is\r\n\r\n```python\r\n try:\r\n import pytest # We don't need a hard dependency on pytest in the main library\r\n except ImportError:\r\n return test_case\r\n```\r\n",
"There is no direct import into `testing_utils.py` so this should be fine to remove the try except (we will have until next release to make sure it doesn't create a new core dep of Transformers :sweat_smile: )",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23271). All of your documentation changes will be reflected on that endpoint.",
"Thanks for taking care of this! Think the filtered list could be obtain in a cleaner way with some bash commands but otherwise great 👍🏻 ",
"@ArthurZucker It was finally going to a `tests_fetcher.py`\r\n\r\nhttps://github.com/huggingface/transformers/pull/23277/files\r\n\r\nThe bash command was just getting too complex ...",
"Nice! Thanks for following up"
] | 1,683 | 1,684 | 1,683 |
COLLABORATOR
| null |
Reverts huggingface/transformers#23245
So we can keep PR #22987 introducing the new doctest setup, but without exposing `doctest_utils` in `src/transformers`.
@sgugger Let me know if you prefer to move this `doctest_utils.py` to `tests` folder.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23271/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23271/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23271",
"html_url": "https://github.com/huggingface/transformers/pull/23271",
"diff_url": "https://github.com/huggingface/transformers/pull/23271.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23271.patch",
"merged_at": 1683748849000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23270
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23270/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23270/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23270/events
|
https://github.com/huggingface/transformers/pull/23270
| 1,704,257,645 |
PR_kwDOCUB6oc5QNgxw
| 23,270 |
OPT/BioGPT: Improved attention mask shape exception
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,683 | 1,684 | 1,684 |
MEMBER
| null |
# What does this PR do?
Related exception: #23197
Currently, in OPT/BioGPT, if we don't pass an attention mask or if we pass an attention mask with the wrong shape, an exception is raised in the attention layer: `Attention mask should be of size {(bsz, 1, tgt_len, src_len)}, but is {attention_mask.size()}`. This exception has the following problems:
1. It checks the expanded attention mask, not the attention mask as set by the user (or the default). Even as a maintainer, I can't immediately decode the error message, as I would need to know how the mask is expanded for the model in question.
2. If there is a bug computing `bsz`, `tgt_len`, or `src_len`, the exception will be misleading.
In #23197 we found that when the length of `past_key_values` is equal to the length of the `attention_mask`, `tgt_len` and `src_len` will be wrong (in at least these 2 models), triggering the exception with an incorrect message. This PR solves both issues: it prevents the incorrect computation of `tgt_len` and `src_len` by checking the shape of `attention_mask` in the main model class and printing a user-friendly message.
If this PR gets approved, I will add a similar check to the other models.
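A minimal sketch of the kind of up-front check described above (function name, message wording, and placement are illustrative; the mask is validated in its user-facing 2D shape rather than after expansion):
```python
import torch

def validate_attention_mask(attention_mask: torch.Tensor, batch_size: int, mask_seq_length: int) -> None:
    # mask_seq_length = current input length + past_key_values length.
    # Checking here, in the main model class, gives the user an error about the
    # 2D mask they actually passed instead of the expanded 4D mask.
    if attention_mask.shape != (batch_size, mask_seq_length):
        raise ValueError(
            f"The provided attention mask has length {attention_mask.shape[-1]}, but its length should be "
            f"{mask_seq_length} (sum of the lengths of current and past inputs)."
        )
```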
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23270/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23270/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23270",
"html_url": "https://github.com/huggingface/transformers/pull/23270",
"diff_url": "https://github.com/huggingface/transformers/pull/23270.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23270.patch",
"merged_at": 1684241994000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23269
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23269/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23269/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23269/events
|
https://github.com/huggingface/transformers/pull/23269
| 1,704,176,300 |
PR_kwDOCUB6oc5QNPrx
| 23,269 |
Render custom tool docs a bit better
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[] | 1,683 | 1,683 | 1,683 |
COLLABORATOR
| null |
# What does this PR do?
This PR disables syntax highlighting on the blocks that shouldn't have it.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23269/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23269/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23269",
"html_url": "https://github.com/huggingface/transformers/pull/23269",
"diff_url": "https://github.com/huggingface/transformers/pull/23269.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23269.patch",
"merged_at": 1683734301000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23268
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23268/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23268/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23268/events
|
https://github.com/huggingface/transformers/pull/23268
| 1,704,169,848 |
PR_kwDOCUB6oc5QNOSI
| 23,268 |
Convert numpy arrays to lists before saving the evaluation metrics as json
|
{
"login": "harisankar95",
"id": 58052269,
"node_id": "MDQ6VXNlcjU4MDUyMjY5",
"avatar_url": "https://avatars.githubusercontent.com/u/58052269?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/harisankar95",
"html_url": "https://github.com/harisankar95",
"followers_url": "https://api.github.com/users/harisankar95/followers",
"following_url": "https://api.github.com/users/harisankar95/following{/other_user}",
"gists_url": "https://api.github.com/users/harisankar95/gists{/gist_id}",
"starred_url": "https://api.github.com/users/harisankar95/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/harisankar95/subscriptions",
"organizations_url": "https://api.github.com/users/harisankar95/orgs",
"repos_url": "https://api.github.com/users/harisankar95/repos",
"events_url": "https://api.github.com/users/harisankar95/events{/privacy}",
"received_events_url": "https://api.github.com/users/harisankar95/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"@sgugger Can you please review this small update.",
"_The documentation is not available anymore as the PR was closed or merged._",
"> LGTM! Can you just run `make style` on your branch to fix the styling issue?\r\n\r\nyes, the styling issue is now fixed."
] | 1,683 | 1,683 | 1,683 |
CONTRIBUTOR
| null |
eval_metrics contains:
- mean_iou: float
- mean_accuracy: float
- overall_accuracy: float
- per_category_iou: ndarray of shape (num_labels,)
- per_category_accuracy: ndarray of shape (num_labels,)
The ndarrays must be converted to lists for JSON serialization.
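A minimal sketch of the conversion (variable and function names are illustrative):
```python
import json
import numpy as np

def to_json_serializable(metrics: dict) -> dict:
    # json.dump cannot serialize np.ndarray, so convert arrays to plain Python lists.
    return {k: v.tolist() if isinstance(v, np.ndarray) else v for k, v in metrics.items()}

eval_metrics = {"mean_iou": 0.42, "per_category_iou": np.array([0.30, 0.55])}
with open("eval_results.json", "w") as f:
    json.dump(to_json_serializable(eval_metrics), f, indent=2)
```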
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23268/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23268/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23268",
"html_url": "https://github.com/huggingface/transformers/pull/23268",
"diff_url": "https://github.com/huggingface/transformers/pull/23268.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23268.patch",
"merged_at": 1683809663000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23267
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23267/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23267/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23267/events
|
https://github.com/huggingface/transformers/pull/23267
| 1,704,146,638 |
PR_kwDOCUB6oc5QNJQ5
| 23,267 |
Fix new line bug in chat mode for agents
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23267). All of your documentation changes will be reflected on that endpoint."
] | 1,683 | 1,683 | 1,683 |
COLLABORATOR
| null |
# What does this PR do?
Depending on the agent used, we might get too many new lines here. Stripping them all and adding back the right amount fixes this.
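Roughly the idea (an illustrative sketch, not the exact code of the fix):
```python
generated = "\n\n\nAssistant: here is the result\n"  # agent output with an unpredictable number of newlines
cleaned = generated.strip("\n")
chat_prompt = f"{cleaned}\n\n"  # re-add exactly the spacing the chat prompt format expects
```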
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23267/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23267/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23267",
"html_url": "https://github.com/huggingface/transformers/pull/23267",
"diff_url": "https://github.com/huggingface/transformers/pull/23267.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23267.patch",
"merged_at": 1683731623000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23266
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23266/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23266/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23266/events
|
https://github.com/huggingface/transformers/pull/23266
| 1,704,091,483 |
PR_kwDOCUB6oc5QM9bX
| 23,266 |
Refine documentation for Tools
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,683 | 1,683 | 1,683 |
COLLABORATOR
| null |
# What does this PR do?
Refine a bit the documentation of agents and tools.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23266/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23266/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23266",
"html_url": "https://github.com/huggingface/transformers/pull/23266",
"diff_url": "https://github.com/huggingface/transformers/pull/23266.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23266.patch",
"merged_at": 1683731033000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23262
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23262/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23262/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23262/events
|
https://github.com/huggingface/transformers/issues/23262
| 1,704,048,868 |
I_kwDOCUB6oc5lkbjk
| 23,262 |
agent fail
|
{
"login": "ltm920716",
"id": 12007227,
"node_id": "MDQ6VXNlcjEyMDA3MjI3",
"avatar_url": "https://avatars.githubusercontent.com/u/12007227?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ltm920716",
"html_url": "https://github.com/ltm920716",
"followers_url": "https://api.github.com/users/ltm920716/followers",
"following_url": "https://api.github.com/users/ltm920716/following{/other_user}",
"gists_url": "https://api.github.com/users/ltm920716/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ltm920716/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ltm920716/subscriptions",
"organizations_url": "https://api.github.com/users/ltm920716/orgs",
"repos_url": "https://api.github.com/users/ltm920716/repos",
"events_url": "https://api.github.com/users/ltm920716/events{/privacy}",
"received_events_url": "https://api.github.com/users/ltm920716/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @sgugger ",
"Run normally meaning? I'm not even completely sure what you want the agent to run so I don't see how the LLM could find out too ;-)\r\nMake sure to:\r\n1. Use openAI, sadly it's better than the opensource alternatives\r\n2. refine your prompt input, we have a great guide for that [here](https://huggingface.co/docs/transformers/custom_tools#writing-good-user-inputs)",
"hi @sgugger ,\r\nI am sorry for my bad description.\r\n\r\nThere are two problems when I run the command `agent.run(\"here is a image named image, what does it belong to?\", image=x, remote=True)`\r\n\r\n1、The agent return tool `image_classifier ` which not in toolbox. According to the base run-prompt,it should only return tool in toolbox. So this is only because the llm's capacity?\r\n\r\n2、for tool like `image_classifier` or `image_caption`, it‘s input is `image`,what is the type of `image`? PIL or Numpy or str(local path)? \r\n\r\nthanks!\r\n",
"it seems that the param `remote=True` does not work, what tools can be loaded remotely?",
"The agent can return whatever the hell it wants. If it decides to use tools that don't exist, there is nothing we can do (again use openAI to get better results).\r\nThere is no image classifier tool. For tools working on images, the input type required is a standard PIL Image.\r\n\r\nAs for your last comment `remote=True` on all tools.",
"@sgugger \r\nthanks\r\n\r\n\r\n\r\nfor param `remote=True`, it seems does not work. What tools support remote and where I can search?\r\n",
"@ltm920716 there is a dataset that lists all the standard tools that use an endpoint for demonstration purpose, and then that you can use remotely : https://huggingface.co/datasets/huggingface-tools/default-endpoints "
] | 1,683 | 1,683 | 1,683 |
NONE
| null |
### System Info
transformers==4.30.0.dev0
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction

from transformers import HfAgent
agent = HfAgent("https://api-inference.huggingface.co/models/bigcode/starcoder")
from PIL import Image
x = Image.open('test.jpg')
agent.run("here is a image named `image`, what does it belong to?", image=x, remote=True)
==Explanation from the agent==
I will use the following tool: `image_classifier` to classify the image.
==Code generated by the agent==
label = image_classifier(image)
print(f"The label is {label}.")
==Result==
Evaluation of the code stopped at line 0 before the end because of the following error:
It is not permitted to evaluate other functions than the provided tools (tried to execute image_classifier).
agent.run("here is a image named `image`, what does it belong to?", image='test.jpg', remote=True)
==Explanation from the agent==
I will use the following tool: `image_classifier` to classify the image.
==Code generated by the agent==
label = image_classifier(image)
print(f"The label is {label}.")
==Result==
Evaluation of the code stopped at line 0 before the end because of the following error:
It is not permitted to evaluate other functions than the provided tools (tried to execute image_classifier).
### Expected behavior
run normally
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23262/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23262/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23261
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23261/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23261/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23261/events
|
https://github.com/huggingface/transformers/pull/23261
| 1,703,939,759 |
PR_kwDOCUB6oc5QMbRX
| 23,261 |
Update Image segmentation description
|
{
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[] | 1,683 | 1,683 | 1,683 |
MEMBER
| null | null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23261/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23261/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23261",
"html_url": "https://github.com/huggingface/transformers/pull/23261",
"diff_url": "https://github.com/huggingface/transformers/pull/23261.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23261.patch",
"merged_at": 1683725776000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23260
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23260/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23260/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23260/events
|
https://github.com/huggingface/transformers/pull/23260
| 1,703,913,992 |
PR_kwDOCUB6oc5QMVri
| 23,260 |
pin TF prob in docker files
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[] | 1,683 | 1,683 | 1,683 |
COLLABORATOR
| null |
# What does this PR do?
Same as in #23220, but for the docker files.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23260/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23260/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23260",
"html_url": "https://github.com/huggingface/transformers/pull/23260",
"diff_url": "https://github.com/huggingface/transformers/pull/23260.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23260.patch",
"merged_at": 1683728469000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23259
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23259/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23259/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23259/events
|
https://github.com/huggingface/transformers/pull/23259
| 1,703,857,791 |
PR_kwDOCUB6oc5QMJXY
| 23,259 |
Metadata update
|
{
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23259). All of your documentation changes will be reflected on that endpoint."
] | 1,683 | 1,683 | 1,683 |
MEMBER
| null |
Automatically updates the metadata to contain the `tool` tag.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23259/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23259/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23259",
"html_url": "https://github.com/huggingface/transformers/pull/23259",
"diff_url": "https://github.com/huggingface/transformers/pull/23259.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23259.patch",
"merged_at": 1683725108000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23258
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23258/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23258/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23258/events
|
https://github.com/huggingface/transformers/issues/23258
| 1,703,721,546 |
I_kwDOCUB6oc5ljLpK
| 23,258 |
Flaky Whisper PT-TF & PT-Flax Equivalence Test
|
{
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
}
] |
open
| false | null |
[] |
[
"If this started happening recently, it might be related to https://github.com/huggingface/transformers/pull/21998\r\n\r\nIt's possible the feature extraction for PyTorch now gives different results than the TF / Flax versions. It shouldn't, but it's possible that a small difference in the preprocessed inputs is causing this."
] | 1,683 | 1,686 | null |
COLLABORATOR
| null |
### System Info
transformers 4.29.0 dev
### Who can help?
@ArthurZucker @sanchit-gandhi @ydshieh
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Flaky test, so not reproducible. Example run where the error occurred:
* https://app.circleci.com/pipelines/github/huggingface/transformers/64100/workflows/b4463c5d-b3dc-4b00-a7cd-19acd096cb07/jobs/792381
* https://app.circleci.com/pipelines/github/huggingface/transformers/64111/workflows/dc9092c4-0673-46c7-b89f-f805bc20128c/jobs/792557
* https://app.circleci.com/pipelines/github/huggingface/transformers/64230/workflows/e2d42ca4-f367-4a85-9054-a0ea99e49849/jobs/794534
Occasionally, the PT-TF and PT-Flax whisper equivalence test fails. The tolerance was increased in #23257 and #23288 but the reason for recent failures has not yet been found.
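For reference, these cross-framework equivalence tests boil down to a check of the following shape (schematic only; the tolerance value here is a placeholder, not the one used in the test suite):
```python
import numpy as np

def assert_frameworks_close(pt_output: np.ndarray, tf_output: np.ndarray, tol: float = 2e-5) -> None:
    # Both frameworks run the same checkpoint on the same preprocessed inputs;
    # the test fails when the largest elementwise difference exceeds the tolerance.
    max_diff = float(np.max(np.abs(pt_output - tf_output)))
    if max_diff > tol:
        raise AssertionError(f"Outputs differ by {max_diff:.2e} (tolerance {tol:.0e}).")
```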
### Expected behaviour
Equivalence tests reliably pass.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23258/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23258/timeline
| null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/23257
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23257/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23257/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23257/events
|
https://github.com/huggingface/transformers/pull/23257
| 1,703,719,916 |
PR_kwDOCUB6oc5QLrPC
| 23,257 |
Temporary tolerance fix for flaky whipser PT-TF equiv. test
|
{
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Issue: #23258 ",
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,683 | 1,683 | 1,683 |
COLLABORATOR
| null |
# What does this PR do?
The PT-TF whisper tests have recently become flaky, e.g. in [this CI run](https://app.circleci.com/pipelines/github/huggingface/transformers/64100/workflows/b4463c5d-b3dc-4b00-a7cd-19acd096cb07/jobs/792381).
Although the differences are still relatively small, this represents roughly a 2x increase in the largest absolute difference.
This PR temporarily increases the tolerance until the root cause is found. An issue will be opened and linked here for reference.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23257/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23257/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23257",
"html_url": "https://github.com/huggingface/transformers/pull/23257",
"diff_url": "https://github.com/huggingface/transformers/pull/23257.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23257.patch",
"merged_at": 1683795847000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23256
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23256/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23256/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23256/events
|
https://github.com/huggingface/transformers/pull/23256
| 1,703,697,023 |
PR_kwDOCUB6oc5QLmWY
| 23,256 |
[`gpt`] Gpt2 fix half precision causal mask
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,683 | 1,683 | 1,683 |
CONTRIBUTOR
| null |
# What does this PR do?
Applies a fix similar to the one in https://github.com/huggingface/transformers/issues/23136, but for GPT2.
To reproduce:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("gpt2", device_map="auto", load_in_8bit=True)
inputs = torch.LongTensor([[1, 1, 1], [1, 2, 1]]).to(0)
print(model(inputs))
```
The explanation is the same as the tagged PR:
> When going for low_cpu_mem_usage each parameter is force-casted to the expected dtype, which is force-set to torch.float16 for 8bit models.
> Therefore, for 8bit models (and also half-precision models) the causal mask is always force casted to float16 as it is part of the model's state dict, hence expected to be loaded from the Hub if the mask is available on the state dict.
> The fix is to add `persistent=False` and a field `_keys_to_ignore_on_unexpected` (to remove the warnings) so that the causal mask is not loaded from the state dict and assigned to the buffer; all causal masks that are saved as buffers should do the same to avoid unexpected behaviors.
Some users reported that they were also able to reproduce this on the PyTorch main branch without `load_in_8bit`; I didn't manage to reproduce it that way and will take a deeper look.
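A minimal sketch of the buffer change described in the quoted explanation (toy module, illustrative names):
```python
import torch
from torch import nn

class ToyAttention(nn.Module):
    def __init__(self, max_positions: int = 8):
        super().__init__()
        causal_mask = torch.tril(torch.ones(max_positions, max_positions, dtype=torch.bool))
        # persistent=False keeps the mask out of the state dict, so it is never
        # loaded from a checkpoint nor force-cast to float16 along with the weights.
        self.register_buffer(
            "bias", causal_mask.view(1, 1, max_positions, max_positions), persistent=False
        )
```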
cc @amyeroberts
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23256/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23256/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23256",
"html_url": "https://github.com/huggingface/transformers/pull/23256",
"diff_url": "https://github.com/huggingface/transformers/pull/23256.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23256.patch",
"merged_at": 1683790343000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23255
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23255/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23255/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23255/events
|
https://github.com/huggingface/transformers/pull/23255
| 1,703,671,107 |
PR_kwDOCUB6oc5QLgrA
| 23,255 |
Improve Docs of Custom Tools and Agents
|
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23255). All of your documentation changes will be reflected on that endpoint."
] | 1,683 | 1,683 | 1,683 |
MEMBER
| null |
# What does this PR do?
This PR improves the docs explaining how to customize prompts and corrects some grammar, spelling, and code snippets in both `transformers_agent.mdx` and `custom_tools.mdx`. Also, `agent.toolbox` is turned into a property, which should both help with documentation and prevent the user from overwriting the attribute completely.
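A sketch of what exposing the toolbox as a read-only property looks like (illustrative; see the PR diff for the actual implementation):
```python
class Agent:
    def __init__(self, tools: dict):
        self._toolbox = dict(tools)

    @property
    def toolbox(self) -> dict:
        # Individual tools can still be added or replaced, but the attribute
        # itself can no longer be reassigned wholesale by accident.
        return self._toolbox
```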
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23255/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23255/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23255",
"html_url": "https://github.com/huggingface/transformers/pull/23255",
"diff_url": "https://github.com/huggingface/transformers/pull/23255.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23255.patch",
"merged_at": 1683723326000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23254
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23254/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23254/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23254/events
|
https://github.com/huggingface/transformers/pull/23254
| 1,703,664,322 |
PR_kwDOCUB6oc5QLfMC
| 23,254 |
Making `safetensors` a core dependency.
|
{
"login": "Narsil",
"id": 204321,
"node_id": "MDQ6VXNlcjIwNDMyMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Narsil",
"html_url": "https://github.com/Narsil",
"followers_url": "https://api.github.com/users/Narsil/followers",
"following_url": "https://api.github.com/users/Narsil/following{/other_user}",
"gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Narsil/subscriptions",
"organizations_url": "https://api.github.com/users/Narsil/orgs",
"repos_url": "https://api.github.com/users/Narsil/repos",
"events_url": "https://api.github.com/users/Narsil/events{/privacy}",
"received_events_url": "https://api.github.com/users/Narsil/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"> If it is in the `install_requires` it doesn't need to be anywhere else.\r\n\r\n`tokenizers` is in the `install_requires` too, yet in a bunch of other places (I merely copied it). Isn't `tokenizers` a core dependency ? ",
"Yes it is. It was not added by me and way before I was asked to review PRs for Transformers ;-)",
"Linked blogpost : https://github.com/huggingface/blog/pull/1096",
"This means that weights are now always loaded in safetensors format but still saved in PyTorch format no? Think this is a good first step. Don't see a problem with having `safetensors` as a core dependency",
"> This means that weights are now always loaded in safetensors format but still saved in PyTorch format no? Think this is a good first step. Don't see a problem with having safetensors as a core dependency\r\n\r\nIndeed. \r\n\r\nThe next step will be saving in `safetensors` first. But we need to let time pass so that we can ensure a vast majority of users has `safetensors` so that users using a somewhat old transformers can still load new models (finetuned versions of existing models in old versions)",
"Merging !"
] | 1,683 | 1,684 | 1,684 |
CONTRIBUTOR
| null |
# What does this PR do?
Making `safetensors` a core dependency.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23254/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23254/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23254",
"html_url": "https://github.com/huggingface/transformers/pull/23254",
"diff_url": "https://github.com/huggingface/transformers/pull/23254.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23254.patch",
"merged_at": 1684847794000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23253
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23253/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23253/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23253/events
|
https://github.com/huggingface/transformers/issues/23253
| 1,703,609,572 |
I_kwDOCUB6oc5liwTk
| 23,253 |
KeyError: 'num_special_tokens_to_add'
|
{
"login": "elcolie",
"id": 18206728,
"node_id": "MDQ6VXNlcjE4MjA2NzI4",
"avatar_url": "https://avatars.githubusercontent.com/u/18206728?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/elcolie",
"html_url": "https://github.com/elcolie",
"followers_url": "https://api.github.com/users/elcolie/followers",
"following_url": "https://api.github.com/users/elcolie/following{/other_user}",
"gists_url": "https://api.github.com/users/elcolie/gists{/gist_id}",
"starred_url": "https://api.github.com/users/elcolie/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/elcolie/subscriptions",
"organizations_url": "https://api.github.com/users/elcolie/orgs",
"repos_url": "https://api.github.com/users/elcolie/repos",
"events_url": "https://api.github.com/users/elcolie/events{/privacy}",
"received_events_url": "https://api.github.com/users/elcolie/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @elcolie, \r\n\r\nThe error is arising because `TextDataset` takes a `tokenizer` object as its first argument for instantiation. `train_encodings` is a dictionary containing the input ids and attention mask for the text input \"shakespeare.txt\" to be fed to the model. This is what you want: \r\n\r\n```\r\ntrain_dataset = TextDataset(tokenizer, file_path=train_file_path, block_size=512)\r\n```\r\n\r\nNote, TextDataset is deprecated and will soon be removed from the library. Preprocessing of datasets should be handled with the 🤗 Datasets library. You can see examples of how to use it in our [example scripts](https://github.com/huggingface/transformers/tree/main/examples) e.g. [this one for language modeling](https://github.com/huggingface/transformers/blob/main/examples/pytorch/language-modeling/run_mlm.py).",
"@amyeroberts \r\nI got new error. Thank you :)\r\n\r\n```bash\r\nTraceback (most recent call last):\r\n File \"/Users/sarit/study/gpt4all/gpt2_fine_tune.py\", line 37, in <module>\r\n trainer.train()\r\n File \"/Users/sarit/miniforge3/lib/python3.10/site-packages/transformers/trainer.py\", line 1662, in train\r\n return inner_training_loop(\r\n File \"/Users/sarit/miniforge3/lib/python3.10/site-packages/transformers/trainer.py\", line 1929, in _inner_training_loop\r\n tr_loss_step = self.training_step(model, inputs)\r\n File \"/Users/sarit/miniforge3/lib/python3.10/site-packages/transformers/trainer.py\", line 2692, in training_step\r\n inputs = self._prepare_inputs(inputs)\r\n File \"/Users/sarit/miniforge3/lib/python3.10/site-packages/transformers/trainer.py\", line 2639, in _prepare_inputs\r\n raise ValueError(\r\nValueError: The batch received was empty, your model won't be able to train on it. Double-check that your training dataset contains keys expected by the model: input_ids,past_key_values,attention_mask,token_type_ids,position_ids,head_mask,inputs_embeds,encoder_hidden_states,encoder_attention_mask,labels,use_cache,output_attentions,output_hidden_states,return_dict,label,label_ids,labels.\r\n╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮\r\n│ /Users/sarit/study/gpt4all/gpt2_fine_tune.py:37 in <module> │\r\n│ │\r\n│ 34 ) │\r\n│ 35 │\r\n│ 36 # Step 6: Train the model │\r\n│ ❱ 37 trainer.train() │\r\n│ 38 │\r\n│ │\r\n│ /Users/sarit/miniforge3/lib/python3.10/site-packages/transformers/trainer.py:1662 in train │\r\n│ │\r\n│ 1659 │ │ inner_training_loop = find_executable_batch_size( │\r\n│ 1660 │ │ │ self._inner_training_loop, self._train_batch_size, args.auto_find_batch_size │\r\n│ 1661 │ │ ) │\r\n│ ❱ 1662 │ │ return inner_training_loop( │\r\n│ 1663 │ │ │ args=args, │\r\n│ 1664 │ │ │ resume_from_checkpoint=resume_from_checkpoint, │\r\n│ 1665 │ │ │ trial=trial, │\r\n│ │\r\n│ /Users/sarit/miniforge3/lib/python3.10/site-packages/transformers/trainer.py:1929 in │\r\n│ _inner_training_loop │\r\n│ │\r\n│ 1926 │ │ │ │ │ with model.no_sync(): │\r\n│ 1927 │ │ │ │ │ │ tr_loss_step = self.training_step(model, inputs) │\r\n│ 1928 │ │ │ │ else: │\r\n│ ❱ 1929 │ │ │ │ │ tr_loss_step = self.training_step(model, inputs) │\r\n│ 1930 │ │ │ │ │\r\n│ 1931 │ │ │ │ if ( │\r\n│ 1932 │ │ │ │ │ args.logging_nan_inf_filter │\r\n│ │\r\n│ /Users/sarit/miniforge3/lib/python3.10/site-packages/transformers/trainer.py:2692 in │\r\n│ training_step │\r\n│ │\r\n│ 2689 │ │ │ `torch.Tensor`: The tensor with training loss on this batch. │\r\n│ 2690 │ │ \"\"\" │\r\n│ 2691 │ │ model.train() │\r\n│ ❱ 2692 │ │ inputs = self._prepare_inputs(inputs) │\r\n│ 2693 │ │ │\r\n│ 2694 │ │ if is_sagemaker_mp_enabled(): │\r\n│ 2695 │ │ │ loss_mb = smp_forward_backward(model, inputs, self.args.gradient_accumulatio │\r\n│ │\r\n│ /Users/sarit/miniforge3/lib/python3.10/site-packages/transformers/trainer.py:2639 in │\r\n│ _prepare_inputs │\r\n│ │\r\n│ 2636 │ │ \"\"\" │\r\n│ 2637 │ │ inputs = self._prepare_input(inputs) │\r\n│ 2638 │ │ if len(inputs) == 0: │\r\n│ ❱ 2639 │ │ │ raise ValueError( │\r\n│ 2640 │ │ │ │ \"The batch received was empty, your model won't be able to train on it. │\r\n│ 2641 │ │ │ │ f\"training dataset contains keys expected by the model: {','.join(self._ │\r\n│ 2642 │ │ │ ) │\r\n╰──────────────────────────────────────────────────────────────────────────────────────────────────╯\r\nValueError: The batch received was empty, your model won't be able to train on it. 
Double-check that your training dataset contains keys expected by the model:\r\ninput_ids,past_key_values,attention_mask,token_type_ids,position_ids,head_mask,inputs_embeds,encoder_hidden_states,encoder_attention_mask,labels,use_cache,output_attentions,output_hidden_st\r\nates,return_dict,label,label_ids,labels.\r\n```\r\n"
] | 1,683 | 1,683 | 1,683 |
NONE
| null |
### System Info
transformers==4.28.1
M2 MBP
OSX 13.2
Python 3.10.10
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```python
from transformers import GPT2Tokenizer, GPT2LMHeadModel, Trainer, TrainingArguments, TextDataset
# Step 1: Load the pre-trained GPT-2 model
model = GPT2LMHeadModel.from_pretrained('gpt2')
# Step 2: Tokenize the training data
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
train_file_path = 'shakespeare.txt'
train_encodings = tokenizer(train_file_path)
# Step 3: Prepare the training data
train_dataset = TextDataset(train_encodings, file_path=train_file_path, block_size=512)
# Step 4: Create a TrainingArguments object
training_args = TrainingArguments(
output_dir='./results',
num_train_epochs=3,
per_device_train_batch_size=2,
per_device_eval_batch_size=2,
warmup_steps=500,
weight_decay=0.01,
logging_dir='./logs',
logging_steps=1000,
save_steps=5000,
evaluation_strategy='steps',
eval_steps=5000,
load_best_model_at_end=True
)
# Step 5: Instantiate a Trainer object
trainer = Trainer(
model=model,
args=training_args,
train_dataset=train_dataset
)
# Step 6: Train the model
trainer.train()
```
[shakespeare.txt](https://github.com/huggingface/transformers/files/11440814/shakespeare.txt)
### Expected behavior
Successfully fine-tune the model.
**As is:**<br>
```bash
warnings.warn(
╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ /Users/sarit/miniforge3/lib/python3.10/site-packages/transformers/tokenization_utils_base.py:248 │
│ in __getattr__ │
│ │
│ 245 │ │
│ 246 │ def __getattr__(self, item: str): │
│ 247 │ │ try: │
│ ❱ 248 │ │ │ return self.data[item] │
│ 249 │ │ except KeyError: │
│ 250 │ │ │ raise AttributeError │
│ 251 │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
KeyError: 'num_special_tokens_to_add'
During handling of the above exception, another exception occurred:
╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ /Users/sarit/study/gpt4all/gpt2_fine_tune.py:12 in <module> │
│ │
│ 9 train_encodings = tokenizer(train_file_path) │
│ 10 │
│ 11 # Step 3: Prepare the training data │
│ ❱ 12 train_dataset = TextDataset(train_encodings, file_path=train_file_path, block_size=512) │
│ 13 │
│ 14 # Step 4: Create a TrainingArguments object │
│ 15 training_args = TrainingArguments( │
│ │
│ /Users/sarit/miniforge3/lib/python3.10/site-packages/transformers/data/datasets/language_modelin │
│ g.py:62 in __init__ │
│ │
│ 59 │ │ if os.path.isfile(file_path) is False: │
│ 60 │ │ │ raise ValueError(f"Input file path {file_path} not found") │
│ 61 │ │ │
│ ❱ 62 │ │ block_size = block_size - tokenizer.num_special_tokens_to_add(pair=False) │
│ 63 │ │ │
│ 64 │ │ directory, filename = os.path.split(file_path) │
│ 65 │ │ cached_features_file = os.path.join( │
│ │
│ /Users/sarit/miniforge3/lib/python3.10/site-packages/transformers/tokenization_utils_base.py:250 │
│ in __getattr__ │
│ │
│ 247 │ │ try: │
│ 248 │ │ │ return self.data[item] │
│ 249 │ │ except KeyError: │
│ ❱ 250 │ │ │ raise AttributeError │
│ 251 │ │
│ 252 │ def __getstate__(self): │
│ 253 │ │ return {"data": self.data, "encodings": self._encodings} │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
```
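For reference, a minimal sketch of what I think the call should look like, based on the `TextDataset` signature (it expects the tokenizer itself rather than pre-tokenized encodings) and the legacy `run_language_modeling.py` recipe — I have not verified this end to end:

```python
# Hedged sketch, not verified: pass the tokenizer (not encodings) to TextDataset and
# add a causal-LM collator so the Trainer receives input_ids plus labels.
from transformers import (
    DataCollatorForLanguageModeling,
    GPT2LMHeadModel,
    GPT2Tokenizer,
    TextDataset,
    Trainer,
    TrainingArguments,
)

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

train_dataset = TextDataset(tokenizer=tokenizer, file_path="shakespeare.txt", block_size=512)
# with mlm=False the collator copies input_ids into labels; the model shifts them internally
data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

training_args = TrainingArguments(output_dir="./results", num_train_epochs=3, per_device_train_batch_size=2)
trainer = Trainer(model=model, args=training_args, train_dataset=train_dataset, data_collator=data_collator)
trainer.train()
```

(I dropped the evaluation-related arguments here since no eval_dataset is provided; `TextDataset` is also marked as legacy, so the `datasets`-based `run_clm.py` approach may be the better long-term route.)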
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23253/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23253/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23252
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23252/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23252/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23252/events
|
https://github.com/huggingface/transformers/pull/23252
| 1,703,529,519 |
PR_kwDOCUB6oc5QLBzS
| 23,252 |
Add document-question-answering in task_summary
|
{
"login": "y3sar",
"id": 16244698,
"node_id": "MDQ6VXNlcjE2MjQ0Njk4",
"avatar_url": "https://avatars.githubusercontent.com/u/16244698?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/y3sar",
"html_url": "https://github.com/y3sar",
"followers_url": "https://api.github.com/users/y3sar/followers",
"following_url": "https://api.github.com/users/y3sar/following{/other_user}",
"gists_url": "https://api.github.com/users/y3sar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/y3sar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/y3sar/subscriptions",
"organizations_url": "https://api.github.com/users/y3sar/orgs",
"repos_url": "https://api.github.com/users/y3sar/repos",
"events_url": "https://api.github.com/users/y3sar/events{/privacy}",
"received_events_url": "https://api.github.com/users/y3sar/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@stevhliu I only changed the docs. I don't know why the checks keep failing. It says it is having problems with ffmpeg?",
"Thank you so much for you help @stevhliu I think I should open a new PR. I made a mess here",
"I created a new pull request #23318 closing this PR"
] | 1,683 | 1,683 | 1,683 |
CONTRIBUTOR
| null |
# What does this PR do?
From issue #18926
This PR adds a Document Question Answering summary to task_summary.mdx.
It also provides an example using the pipeline API and this [model](https://huggingface.co/naver-clova-ix/donut-base-finetuned-docvqa).
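A rough sketch of the kind of snippet added (the exact example in the doc may differ; the image path below is just a placeholder):

```python
# Hedged illustration of document question answering with the pipeline API.
from transformers import pipeline

doc_qa = pipeline(task="document-question-answering", model="naver-clova-ix/donut-base-finetuned-docvqa")
# "invoice.png" is a placeholder path to a document image.
print(doc_qa(image="invoice.png", question="What is the invoice number?"))
```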
## Who can review?
@stevhliu
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23252/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23252/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23252",
"html_url": "https://github.com/huggingface/transformers/pull/23252",
"diff_url": "https://github.com/huggingface/transformers/pull/23252.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23252.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/23251
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23251/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23251/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23251/events
|
https://github.com/huggingface/transformers/issues/23251
| 1,703,456,558 |
I_kwDOCUB6oc5liK8u
| 23,251 |
Check for Bool instead of Optionals
|
{
"login": "seboslaw",
"id": 97770,
"node_id": "MDQ6VXNlcjk3Nzcw",
"avatar_url": "https://avatars.githubusercontent.com/u/97770?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/seboslaw",
"html_url": "https://github.com/seboslaw",
"followers_url": "https://api.github.com/users/seboslaw/followers",
"following_url": "https://api.github.com/users/seboslaw/following{/other_user}",
"gists_url": "https://api.github.com/users/seboslaw/gists{/gist_id}",
"starred_url": "https://api.github.com/users/seboslaw/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/seboslaw/subscriptions",
"organizations_url": "https://api.github.com/users/seboslaw/orgs",
"repos_url": "https://api.github.com/users/seboslaw/repos",
"events_url": "https://api.github.com/users/seboslaw/events{/privacy}",
"received_events_url": "https://api.github.com/users/seboslaw/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @seboslaw, thanks for raising this issue. \r\n\r\nI don't believe the logic there is checking for `None` values. In [L2850](https://github.com/huggingface/transformers/blame/3335724376319a0c453049d0cd883504f530ff52/src/transformers/generation/utils.py#L2850), `return_dict_in_generate` is set to either the bool value passed in, or defaults to the bool value in the config if unset / is None. The same happens to `output_scores` in [L2843](https://github.com/huggingface/transformers/blame/3335724376319a0c453049d0cd883504f530ff52/src/transformers/generation/utils.py#LL2843C18-L2843C18). \r\n\r\nHowever, even if it's less clear than an explicit `None` check e.g. `if x is None`, the `beam_indices` logic should only ever be executed if both `return_dict_in_generate` and `output_scores` both evaluate to `True` e.g. the following will only print out `True, True`. \r\n\r\n```\r\nfor a, b in (\r\n (None, None), (None, False), (False, None), (True, None), (None, True), (True, False), (False, True), (True, True)\r\n):\r\n if a and b:\r\n print(a, b)\r\n```\r\n\r\ni.e. if the `beam_indices` line is still executing when both values are being set to `False` there's a probably a bug somewhere. Could you follow the issue template and provide details such that we can help debug this? Specifically:\r\n\r\n* The running environment: run `transformers-cli env` in the terminal and copy-paste the output\r\n* A minimal reproducible code snippet we can copy and run to replicate the issue? ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,683 | 1,687 | 1,687 |
NONE
| null |
Hey guys,
forgive me if this note is naive (I'm not a Python professional), but while debugging the code I got thrown off by this line:
```
if return_dict_in_generate and output_scores:
beam_indices = tuple((beam_indices[beam_idx[i]] + (beam_idx[i],) for i in range(len(beam_indices))))
```
https://github.com/huggingface/transformers/blame/3335724376319a0c453049d0cd883504f530ff52/src/transformers/generation/utils.py#L2976
It seems like you're simply checking whether `return_dict_in_generate` and `output_scores` are not `None` instead of checking the underlying `Bool`s. I assume this is intended, correct? I'm asking because I passed `False` for both values and was wondering why it would still run the `beam_indices = tuple...` line.
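For context, here is a minimal sketch of the pattern I think is in play (simplified names, not the actual library code): each flag falls back to the config value only when it is `None`, and the `and` check then operates on the resolved booleans rather than on `None`-ness:

```python
# Simplified sketch of the resolve-then-check pattern (not the real generate() code).
class GenerationConfigStub:
    return_dict_in_generate = False
    output_scores = False

config = GenerationConfigStub()

def run(return_dict_in_generate=None, output_scores=None):
    # explicit None checks: an explicit False from the caller wins over the config default
    return_dict_in_generate = (
        return_dict_in_generate if return_dict_in_generate is not None else config.return_dict_in_generate
    )
    output_scores = output_scores if output_scores is not None else config.output_scores
    if return_dict_in_generate and output_scores:
        print("beam_indices bookkeeping would run")

run(False, False)  # prints nothing
run(None, None)    # prints nothing (config defaults are False)
run(True, True)    # prints the message
```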
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23251/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23251/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23250
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23250/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23250/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23250/events
|
https://github.com/huggingface/transformers/issues/23250
| 1,703,226,917 |
I_kwDOCUB6oc5lhS4l
| 23,250 |
skip_special_tokens has different behavior between slow and fast tokenizer
|
{
"login": "BuxianChen",
"id": 30834226,
"node_id": "MDQ6VXNlcjMwODM0MjI2",
"avatar_url": "https://avatars.githubusercontent.com/u/30834226?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BuxianChen",
"html_url": "https://github.com/BuxianChen",
"followers_url": "https://api.github.com/users/BuxianChen/followers",
"following_url": "https://api.github.com/users/BuxianChen/following{/other_user}",
"gists_url": "https://api.github.com/users/BuxianChen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BuxianChen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BuxianChen/subscriptions",
"organizations_url": "https://api.github.com/users/BuxianChen/orgs",
"repos_url": "https://api.github.com/users/BuxianChen/repos",
"events_url": "https://api.github.com/users/BuxianChen/events{/privacy}",
"received_events_url": "https://api.github.com/users/BuxianChen/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1834056635,
"node_id": "MDU6TGFiZWwxODM0MDU2NjM1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Tokenization",
"name": "Core: Tokenization",
"color": "FF4446",
"default": false,
"description": "Internals of the library; Tokenization."
}
] |
closed
| false |
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
] |
[
"I'd like to confirm my understandings to the concept, since the [PR 23312](https://github.com/huggingface/transformers/pull/23312) is in progressing:\r\n\r\nIn 🤗 Transformers, for both slow and fast tokenizers, there are only two types of tokens:\r\n\r\n- ***normal tokens***: these tokens can be split. These tokens cannot be add, but when `add_tokens(tokens, special_tokens=True)` be called, and the `tokens` to be added already in ***normal tokens***, in this case, they will be marked as ***special tokens*** and will not be split.\r\n- ***special tokens***: these tokens cannot be split, include:\r\n - `eos_token`, `bos_token`, ..., `additional_special_tokens`, which are defined in `SpecialTokensMixin`\r\n - user add tokens via `add_tokens(tokens)`. (1) When set the parameter `special_tokens=False`, if a token in `tokens` already in ***normal tokens***, do nothing to the token; (2) When set the parameter `special_tokens=False`, if a token in `tokens` already in ***normal tokens***, mark it as ***special tokens*** and will not be split;\r\n\r\nIn both slow and fast tokenizer, `tokenizer.decode(ids, skip_special_tokens=True)` will skip all ***special tokens***.\r\n\r\nPlease let me know if there are any misunderstandings.",
"cc @younesbelkada ",
"Hey! Thanks for reporting this! \r\n- Differences between fast and slow are sometimes bugs, sometimes features, which is what makes it a bit complicated. \r\n\r\nNow about the core of the issue, you have a good grasp of what is going on, good job! 🤗 And thanks for taking the time to dig in. T5 is a bit of a special case because it uses a hack in the `_convert_token_to_ids` method. \r\n\r\nThe core issue is that the `additional_special_tokens` list and the `added_specilal_tokens` encoder and decoder are not perfectly linked. Updating one does not update the other, which is a bug. Documentation is also rather scarce on how we use the `additional_special_tokens`, I am trying to regroup issues linked to that to create a proper fix. Will have a look at the PR!\r\n \r\n",
"One thing is that some of the added tokens can be `non special tokens`, which is why you have: \r\n- normal tokens ( from the original vocab file of the SPModel for example)\r\n- special tokens (which can be added int he additional special tokens or, control tokens which are class attributes) that behave the same \r\n- added normal tokens, which should not be split, and have their own index. These are useful when a token is missing from the spmodel, which you can never touch. ",
"Thanks for your reply, so the example for slow and fast tokenizer, which behavior is expected?\r\n\r\n> ### System Info\r\n> * `transformers` version: 4.26.1\r\n> * Platform: Linux-5.10.16.3-microsoft-standard-WSL2-x86_64-with-glibc2.31\r\n> * Python version: 3.9.16\r\n> * Huggingface_hub version: 0.12.1\r\n> * PyTorch version (GPU?): 1.12.1+cu113 (True)\r\n> * Tensorflow version (GPU?): not installed (NA)\r\n> * Flax version (CPU?/GPU?/TPU?): not installed (NA)\r\n> * Jax version: not installed\r\n> * JaxLib version: not installed\r\n> * Using GPU in script?: No\r\n> * Using distributed or parallel set-up in script?: No\r\n> \r\n> ### Who can help?\r\n> @ArthurZucker\r\n> \r\n> ### Information\r\n> * [ ] The official example scripts\r\n> * [x] My own modified scripts\r\n> \r\n> ### Tasks\r\n> * [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\r\n> * [x] My own task or dataset (give details below)\r\n> \r\n> ### Reproduction\r\n> Hi, recently, I find some subtle difference between slow tokenizer and fast tokenizer, Here is a example\r\n> \r\n> ```python\r\n> from transformers import AutoTokenizer, T5Tokenizer\r\n> path = \"t5-small\"\r\n> text = \"this is a ஐ apple\"\r\n> \r\n> fast_tokenizer = AutoTokenizer.from_pretrained(path)\r\n> num = fast_tokenizer.add_tokens([\"ஐ\"], special_tokens=True)\r\n> assert num == 1\r\n> ids = fast_tokenizer(text)[\"input_ids\"]\r\n> fast_tokenizer.decode(ids, skip_special_tokens=True) # 'this is a apple'\r\n> \r\n> slow_tokenizer = T5Tokenizer.from_pretrained(path)\r\n> num = slow_tokenizer.add_tokens([\"ஐ\"], special_tokens=True)\r\n> assert num == 1\r\n> ids = slow_tokenizer(text)[\"input_ids\"]\r\n> slow_tokenizer.decode(ids, skip_special_tokens=True) # 'this is a ஐ apple'\r\n> ```\r\n> \r\n> Here are more informations about the issue, I'm not a native English speaker, hope to be understood.\r\n> \r\n> * I know in the first situation, fast tokenizer utilizes 🤗 Tokenizer, which will invoke `tokenizers.Tokenizer.add_special_tokens(tokens)`, thus the token `ஐ` will be added to vocabulary, and be viewed as \"special token\", and [never be processed by tokenizer.model](https://huggingface.co/docs/tokenizers/api/tokenizer#tokenizers.Tokenizer.add_special_tokens).\r\n> * In the second situation, when decoding, slow tokenizer treats the added token `ஐ` as \"normal token\", so it will not be skipped. By the way, I read the related source code, when `skip_special_tokens=True`, slow tokenizer only skip `self.all_special_ids`, but `ஐ` is not stored in this, but `self.added_tokens_encoder`.\r\n> \r\n> I read some 🤗 official documents, and struggled to figure out the meaning of so called \"special token\", and realize it's a subtle concept, here is my thought: Tokens can be divided to these categories:\r\n> \r\n> * normal tokens: these tokens can be split\r\n> * control tokens (the name inspired by [SentencePiece](https://github.com/google/sentencepiece/blob/master/python/sentencepiece_python_module_example.ipynb)): `bos_token`, `eos_token`, ..., `additional_special_tokens`, the major propose of these tokens is used in encode **[post-processing](https://huggingface.co/docs/tokenizers/pipeline)** pipeline. 
When these tokens appeared in input text, in slow tokenizer situation, **in most cases**, these tokens also be included in `self.unique_no_split_tokens`, so these tokens **will not be split**, but I don't know the treatment in fast tokenizer case.\r\n> * user add tokens:\r\n> \r\n> * If the token already in vocab, but it can be marked as \"special token\", and this token will never be split now (but cannot be treated as the same as control tokens in some subtle situation).\r\n> * If the token not in vocab, it will be added (allocate a new token_id to it), this token also will never be split.\r\n> so, in both cases, these user added tokens will never be split.\r\n> \r\n> Please let me know if there are any misunderstandings.\r\n> \r\n> Several weeks ago, I summit a [issue 23001](https://github.com/huggingface/transformers/issues/23001) related to `return_overflowing_tokens` behavior, which is considered as a specific feature of fast tokenizer, so it's a feature not a bug. Generally, I want to know the differences between slow and fast tokenizer, should be viewed as features, or bugs.\r\n> \r\n> ### Expected behavior\r\n> The slow tokenizer should behave same as fast tokenizer.\r\n\r\n",
"In this case, the `fast` is correct: when we ask to skip special tokens when decoding, we expect all the special tokens to be skipped. ",
"It will be addressed in the linked PR. This is mostly due to the fact that the slow tokenizer was not properly added to the list of `additional_special_tokens` when being added using `add_tokens`. The refactoring will prevent this from happening!",
"PR will be merged this week! "
] | 1,683 | 1,695 | 1,695 |
NONE
| null |
### System Info
- `transformers` version: 4.26.1
- Platform: Linux-5.10.16.3-microsoft-standard-WSL2-x86_64-with-glibc2.31
- Python version: 3.9.16
- Huggingface_hub version: 0.12.1
- PyTorch version (GPU?): 1.12.1+cu113 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Hi, recently I found a subtle difference between the slow and fast tokenizers. Here is an example:
```python
from transformers import AutoTokenizer, T5Tokenizer
path = "t5-small"
text = "this is a ஐ apple"
fast_tokenizer = AutoTokenizer.from_pretrained(path)
num = fast_tokenizer.add_tokens(["ஐ"], special_tokens=True)
assert num == 1
ids = fast_tokenizer(text)["input_ids"]
fast_tokenizer.decode(ids, skip_special_tokens=True) # 'this is a apple'
slow_tokenizer = T5Tokenizer.from_pretrained(path)
num = slow_tokenizer.add_tokens(["ஐ"], special_tokens=True)
assert num == 1
ids = slow_tokenizer(text)["input_ids"]
slow_tokenizer.decode(ids, skip_special_tokens=True) # 'this is a ஐ apple'
```
Here is some more information about the issue (I'm not a native English speaker, so I hope this is understandable; a small sketch follows these notes).
- I know that in the first situation the fast tokenizer uses 🤗 Tokenizers, which invokes `tokenizers.Tokenizer.add_special_tokens(tokens)`, so the token `ஐ` is added to the vocabulary, treated as a "special token", and [never processed by tokenizer.model](https://huggingface.co/docs/tokenizers/api/tokenizer#tokenizers.Tokenizer.add_special_tokens).
- In the second situation, when decoding, the slow tokenizer treats the added token `ஐ` as a "normal token", so it is not skipped. Reading the related source code, when `skip_special_tokens=True` the slow tokenizer only skips ids in `self.all_special_ids`, but `ஐ` is not stored there; it lives in `self.added_tokens_encoder`.
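A small check along these lines seems consistent with that reading (just a sketch of how I inspected it; the expected values are my guess on 4.26):

```python
# Hedged sketch: see where the slow tokenizer registers a token added with special_tokens=True.
from transformers import T5Tokenizer

slow_tokenizer = T5Tokenizer.from_pretrained("t5-small")
slow_tokenizer.add_tokens(["ஐ"], special_tokens=True)
new_id = slow_tokenizer.convert_tokens_to_ids("ஐ")

print(new_id in slow_tokenizer.all_special_ids)    # I would expect False, so decode() won't skip it
print("ஐ" in slow_tokenizer.added_tokens_encoder)  # I would expect True
```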
I read some 🤗 official documents and struggled to figure out the meaning of the so-called "special token"; it turns out to be a subtle concept. Here is my understanding — tokens can be divided into these categories:
- normal tokens: these tokens can be split
- control tokens (the name is inspired by [SentencePiece](https://github.com/google/sentencepiece/blob/master/python/sentencepiece_python_module_example.ipynb)): `bos_token`, `eos_token`, ..., `additional_special_tokens`; the main purpose of these tokens is in the encode **[post-processing](https://huggingface.co/docs/tokenizers/pipeline)** pipeline. When these tokens appear in the input text, in the slow tokenizer case they are **in most cases** also included in `self.unique_no_split_tokens`, so they **will not be split**, but I don't know how the fast tokenizer handles this.
- user-added tokens:
  - If the token is already in the vocab, it can still be marked as a "special token", and from then on it will never be split (but it is not treated exactly the same as control tokens in some subtle situations).
  - If the token is not in the vocab, it is added (a new token_id is allocated for it), and this token will also never be split.

  So, in both cases, these user-added tokens will never be split.
Please let me know if there are any misunderstandings.
Several weeks ago I submitted [issue 23001](https://github.com/huggingface/transformers/issues/23001) about the `return_overflowing_tokens` behavior, which was considered a specific feature of the fast tokenizer — so a feature, not a bug. More generally, I would like to know whether differences between the slow and fast tokenizers should be viewed as features or as bugs.
### Expected behavior
The slow tokenizer should behave the same as the fast tokenizer.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23250/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23250/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23249
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23249/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23249/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23249/events
|
https://github.com/huggingface/transformers/issues/23249
| 1,703,091,175 |
I_kwDOCUB6oc5lgxvn
| 23,249 |
Every call to the generate method will repeatedly print "Generate config {config}" on the console
|
{
"login": "Silypie",
"id": 52056975,
"node_id": "MDQ6VXNlcjUyMDU2OTc1",
"avatar_url": "https://avatars.githubusercontent.com/u/52056975?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Silypie",
"html_url": "https://github.com/Silypie",
"followers_url": "https://api.github.com/users/Silypie/followers",
"following_url": "https://api.github.com/users/Silypie/following{/other_user}",
"gists_url": "https://api.github.com/users/Silypie/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Silypie/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Silypie/subscriptions",
"organizations_url": "https://api.github.com/users/Silypie/orgs",
"repos_url": "https://api.github.com/users/Silypie/repos",
"events_url": "https://api.github.com/users/Silypie/events{/privacy}",
"received_events_url": "https://api.github.com/users/Silypie/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @gante ",
"Hey @Silypie 👋 \r\n\r\nConsider the following points:\r\n1. `logger.info` messages are not printed by default\r\n2. Logging with `info` on `from_dict` configuration methods is also standard across the library\r\n3. This line should only be reached from `.generate()` in a legacy setup\r\n\r\nBecause of these 3 points, I'm biased toward not changing this behavior. Nevertheless, it may be a bug -- can you share a short stand-alone script so I can reproduce the issue?"
] | 1,683 | 1,683 | 1,683 |
NONE
| null |
https://github.com/huggingface/transformers/blob/3335724376319a0c453049d0cd883504f530ff52/src/transformers/generation/configuration_utils.py#L577
Is it feasible to delete this line of code? Or is there a better way?
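If the goal is only to silence the message without editing the library, something like this should work, assuming the line goes through the `transformers` logger at INFO level:

```python
# Raise the transformers logging threshold so INFO messages (like the generation-config dump) are hidden.
import transformers

transformers.logging.set_verbosity_warning()  # or set_verbosity_error() for even quieter output
```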
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23249/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23249/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23248
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23248/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23248/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23248/events
|
https://github.com/huggingface/transformers/issues/23248
| 1,703,030,820 |
I_kwDOCUB6oc5lgjAk
| 23,248 |
Incorrect preprocessing in run_t5_mlm_flax.py
|
{
"login": "BSharmi",
"id": 6493020,
"node_id": "MDQ6VXNlcjY0OTMwMjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6493020?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BSharmi",
"html_url": "https://github.com/BSharmi",
"followers_url": "https://api.github.com/users/BSharmi/followers",
"following_url": "https://api.github.com/users/BSharmi/following{/other_user}",
"gists_url": "https://api.github.com/users/BSharmi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BSharmi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BSharmi/subscriptions",
"organizations_url": "https://api.github.com/users/BSharmi/orgs",
"repos_url": "https://api.github.com/users/BSharmi/repos",
"events_url": "https://api.github.com/users/BSharmi/events{/privacy}",
"received_events_url": "https://api.github.com/users/BSharmi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @BSharmi, \r\n\r\nThe example code here isn't exactly the same as the `run_t5_mlm_flax.py` script and is missing some important lines. For example, [this one](https://github.com/huggingface/transformers/blob/291c5e9b256ad3ae970f8ef47d1693f3ae976a6e/examples/flax/language-modeling/run_t5_mlm_flax.py#LL660C10-L660C10), which enforces the length of the returned sequences from the tokenizer. Additionally, the model being imported is a pytorch model - `T5ForConditionalGeneration` - wheras the flax equivalent would be required for the flax script: `FlaxT5ForConditionalGeneration`. \r\n\r\nI would first make sure you can run the script with the [example snippet from the README](https://github.com/huggingface/transformers/tree/main/examples/flax/language-modeling#train-model-2) and then start to adapt to the new use case. ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,683 | 1,687 | 1,687 |
NONE
| null |
### System Info
Hi there!
I am running run_t5_mlm_flax.py as-is and noticed this error:
`ValueError: `input_ids` are incorrectly preprocessed. `input_ids` length is 922, but should be 1024.`
while running
```python
for step, batch_idx in enumerate(tqdm(train_batch_idx, desc="Training...", position=1)):
    samples = [tokenized_datasets["train"][int(idx)] for idx in batch_idx]
    model_inputs = data_collator(samples)
```
For example, I see the following output from a demo run:
`print(tokenizer.batch_decode(input_ids, skip_special_tokens=False)[0])`
gives
`<unk> file_gif></s> Stacy: OK Stacy: let's go over the list one more time! Angelica: haha, you don't have to do it... Stacy:<extra_id_0>? of course I do!<extra_id_1> acy: I'm your maid of honor<extra_id_2> OK ;* <unk> <extra_id_3> of songs for the DJ? Angelica: sent Stacy: Flower arrangements and your bouquet? Angelica: done Stacy: comfortable shoes for after midnight? Angelica: bought and packed Stacy: Make-up and hair? Angelica: scheduled Angelica: they'll be at my place at 10 am St<extra_id_4> How much time<extra_id_5> we need? Angelica: around 3 hours the both of us? Stacy: OK, that gives us enough time to go get dressed and get to the church Angelica: Yeah, my mom wants to have her hair done as well, but we'll go seperate<extra_id_6> OK Angelica: anything else? Stacy<extra_id_7> ica: Nick's got them Stacy: Remind him to BRING them :D Angelica: maybe you're right... ;)<extra_id_8> in: Have u watched<extra_id_9>? Izayah: No what's it about? Westin: I don't know yet Izayah: So why are you asking? Westin: Haha I wanna watch this but it's a fantasy movie Izayah<extra_id_10> Hmm not into such movies Westin: Neither me but this one<extra_id_11> yah: Enjoy Westin<extra_id_12> Thanks anyway I<extra_id_13> </s> Macy: when is the deadline for our project? Mac<extra_id_14> or <extra_id_15> day Sloane: next monday :) Macy: oh shit, i better start working faster Veronica: yeah, monday - please make sure you have your part ready Veronica: Monica will be mad if we don<extra_id_16> it on time</s> Francesca: girls, I<extra_id_17> your advice Blake: what's up? Vivienne: yes? Francesca<extra_id_18> wants us to go on a dancing course<extra_id_19>'s not that i don't like it but I'm stressed<extra_id_20> can't dance,<extra_id_21> can barely walk :/ Blake: you know, the courses are to go and<extra_id_22> them...s<extra_id_23> you're a perfect candidate to try it Vivienne: that'<extra_id_24>, nobody who can dance would pay to go on a dancing course Francesca: I get it, but I'm so im<extra_id_25> doesn't work out as I want :/ Francesca: and Brian is 100% sure and is<extra_id_26> to stop freaking out Blake: oh come on, maybe it'<extra_id_27> something you will love? you won't know until you try Blake: sometimes you just have<extra_id_28> the deep end and see<extra_id_29> happens Vivienne: Blake's right Vivienne: c'mon Francesca: I'll reconsider it...maybe <extra_id_30> the parties where you can be the couple of the night Blake: exactly!!!!! Blake:<extra_id_31> re scared then Viv and I<extra_id_32> a couple<extra_id_33> hahaha<extra_id_34> this is absolutely fantastic, I'm in XDDDD Francesca:<extra_id_35> mg, really? xddd Blake: why not Blake: Viv, will you<extra_id_36> gf during that course? xd Viv: of course DARLING hahahahahaah Francesca: I can't believe it X<extra_id_37> already see myself introducing<extra_id_38> as the lesbian couple<extra_id_39> </s> Elizabeth: How about the cathedral? Kathleen: Eh probably there’<extra_id_40> <extra_id_41> tower... Elizabeth: Yes, there<extra_id_42> ;] Kathleen: No way, I’m not climbing some stupid stairs Elizabeth: You<extra_id_43>, it’ll not take long... Kathleen: Great, standing there alone<extra_id_44> organization! Elizabeth: How on earth am I<extra_id_45> are against anything I come up with!! 
Kathleen: Maybe you just have bad ideas ;/ Elizabeth: The rest of the group is not complaining, only you Kathleen:<extra_id_46> you<extra_id_47> about it Elizabeth: Listen, I’m done, I<extra_id_48> ask you about anything, you’ll see the program in a few days and tell me if you want to go<extra_id_49> It’s even worse, you promised everyone will have a chance to express their opinions! Elizabeth: But<extra_id_50> </s>`
and
`print(tokenizer.batch_decode(labels, skip_special_tokens=False)[0])`
gives
`<extra_id_0> are you kidding me<extra_id_1> St<extra_id_2>! Angelica:<extra_id_3> 3 Stacy: List<extra_id_4> acy:<extra_id_5> do<extra_id_6> ly Stacy:<extra_id_7> : the rings? Angel<extra_id_8> </s> West<extra_id_9> beasts of the southern wild<extra_id_10> :<extra_id_11> seems to be interesting Iza<extra_id_12> :<extra_id_13> zayah: Haha ok<extra_id_14> y: next monday<extra_id_15> thurs<extra_id_16> 't deliver<extra_id_17> need<extra_id_18> : Brian<extra_id_19>, it<extra_id_20> out...i<extra_id_21> I<extra_id_22> learn from<extra_id_23> o<extra_id_24> s right<extra_id_25> patient when something<extra_id_26> almost bullying me<extra_id_27> s<extra_id_28> to jump in<extra_id_29> what<extra_id_30> Vivienne: just think about all<extra_id_31> If you'<extra_id_32> can go as<extra_id_33> on that course ha<extra_id_34> Vivienne: Blake....<extra_id_35> o<extra_id_36> be my <extra_id_37> DDDD I<extra_id_38> you to my boyfriend<extra_id_39> I know and respect XD<extra_id_40> s<extra_id_41> a<extra_id_42> is <extra_id_43> can wait outside<extra_id_44>, nice<extra_id_45> supposed to organize anything when you<extra_id_46> Maybe<extra_id_47> just don’t know<extra_id_48> will not<extra_id_49> or not Kathleen:<extra_id_50> I</s>`
What am I missing here? Is there any helper script to run the data preprocessing ad hoc?
Thank you!
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Occurs while running this code:
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
import torch
from datasets import load_dataset
from itertools import chain
from dataclasses import dataclass
from transformers import (
BatchEncoding,
PreTrainedTokenizerBase
)
from typing import Dict, List, Optional
import numpy as np
from tqdm import tqdm
tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")
# Load dataset from the hub
datasets = load_dataset("samsum")
print(f"Train dataset size: {len(dataset['train'])}")
print(f"Test dataset size: {len(dataset['test'])}")
def tokenize_function(examples):
return tokenizer(examples["dialogue"], return_attention_mask=False)
tokenized_datasets = datasets.map(
tokenize_function,
batched=True,
num_proc=4,
load_from_cache_file=False,
)
def compute_input_and_target_lengths(inputs_length, noise_density=0.15, mean_noise_span_length=1.0):
"""This function is copy of `random_spans_helper <https://github.com/google-research/text-to-text-transfer-transformer/blob/84f8bcc14b5f2c03de51bd3587609ba8f6bbd1cd/t5/data/preprocessors.py#L2466>`__ .
Training parameters to avoid padding with random_spans_noise_mask.
When training a model with random_spans_noise_mask, we would like to set the other
training hyperparmeters in a way that avoids padding.
This function helps us compute these hyperparameters.
We assume that each noise span in the input is replaced by extra_tokens_per_span_inputs sentinel tokens,
and each non-noise span in the targets is replaced by extra_tokens_per_span_targets sentinel tokens.
This function tells us the required number of tokens in the raw example (for split_tokens())
as well as the length of the encoded targets. Note that this function assumes
the inputs and targets will have EOS appended and includes that in the reported length.
Args:
inputs_length: an integer - desired length of the tokenized inputs sequence
noise_density: a float
mean_noise_span_length: a float
Returns:
tokens_length: length of original text in tokens
targets_length: an integer - length in tokens of encoded targets sequence
"""
def _tokens_length_to_inputs_length_targets_length(tokens_length):
num_noise_tokens = int(round(tokens_length * noise_density))
num_nonnoise_tokens = tokens_length - num_noise_tokens
num_noise_spans = int(round(num_noise_tokens / mean_noise_span_length))
# inputs contain all nonnoise tokens, sentinels for all noise spans
# and one EOS token.
_input_length = num_nonnoise_tokens + num_noise_spans + 1
_output_length = num_noise_tokens + num_noise_spans + 1
return _input_length, _output_length
tokens_length = inputs_length
while _tokens_length_to_inputs_length_targets_length(tokens_length + 1)[0] <= inputs_length:
tokens_length += 1
inputs_length, targets_length = _tokens_length_to_inputs_length_targets_length(tokens_length)
# minor hack to get the targets length to be equal to inputs length
# which is more likely to have been set to a nice round number.
if noise_density == 0.5 and targets_length > inputs_length:
tokens_length -= 1
targets_length -= 1
return tokens_length, targets_length
expanded_inputs_length, targets_length = compute_input_and_target_lengths(
inputs_length=1024,
noise_density=0.15,
mean_noise_span_length=1.0,
)
def group_texts(examples):
# Concatenate all texts.
concatenated_examples = {k: list(chain(*examples[k])) for k in examples.keys()}
total_length = len(concatenated_examples[list(examples.keys())[0]])
# We drop the small remainder, we could add padding if the model supported it instead of this drop, you can
# customize this part to your needs.
if total_length >= expanded_inputs_length:
total_length = (total_length // expanded_inputs_length) * expanded_inputs_length
# Split by chunks of max_len.
result = {
k: [t[i : i + expanded_inputs_length] for i in range(0, total_length, expanded_inputs_length)]
for k, t in concatenated_examples.items()
}
return result
tokenized_datasets = tokenized_datasets.map(
group_texts,
batched=True,
num_proc=4,
load_from_cache_file=False,
)
@dataclass
class FlaxDataCollatorForT5MLM:
"""
Data collator used for T5 span-masked language modeling.
It is made sure that after masking the inputs are of length `data_args.max_seq_length` and targets are also of fixed length.
For more information on how T5 span-masked language modeling works, one can take a look
at the `official paper <https://arxiv.org/pdf/1910.10683.pdf>`__
or the `official code for preprocessing <https://github.com/google-research/text-to-text-transfer-transformer/blob/master/t5/data/preprocessors.py>`__ .
Args:
tokenizer (:class:`~transformers.PreTrainedTokenizer` or :class:`~transformers.PreTrainedTokenizerFast`):
The tokenizer used for encoding the data.
noise_density (:obj:`float`):
The probability with which to (randomly) mask tokens in the input.
mean_noise_span_length (:obj:`float`):
The average span length of the masked tokens.
input_length (:obj:`int`):
The expected input length after masking.
target_length (:obj:`int`):
The expected target length after masking.
pad_token_id: (:obj:`int`):
The pad token id of the model
decoder_start_token_id: (:obj:`int):
The decoder start token id of the model
"""
tokenizer: PreTrainedTokenizerBase
noise_density: float
mean_noise_span_length: float
input_length: int
target_length: int
pad_token_id: int
decoder_start_token_id: int
def __call__(self, examples: List[Dict[str, np.ndarray]]) -> BatchEncoding:
# convert list to dict and tensorize input
batch = BatchEncoding(
{k: np.array([examples[i][k] for i in range(len(examples))]) for k, v in examples[0].items()}
)
input_ids = batch["input_ids"]
batch_size, expandend_input_length = input_ids.shape
mask_indices = np.asarray([self.random_spans_noise_mask(expandend_input_length) for i in range(batch_size)])
labels_mask = ~mask_indices
input_ids_sentinel = self.create_sentinel_ids(mask_indices.astype(np.int8))
labels_sentinel = self.create_sentinel_ids(labels_mask.astype(np.int8))
batch["input_ids"] = self.filter_input_ids(input_ids, input_ids_sentinel)
batch["labels"] = self.filter_input_ids(input_ids, labels_sentinel)
print(">>>>> sanity check <<<<<<<<<<")
print("\n")
print(">>>>> inputs <<<<<<<<<<")
print(self.tokenizer.batch_decode(batch["input_ids"])[0])
print("\n")
print(">>>>> masks <<<<<<<<<<")
print(self.tokenizer.batch_decode(batch["labels"])[0])
print("\n")
>>>> snity check">
print(">>>>> sanity check <<<<<<<<<<")
if batch["input_ids"].shape[-1] != self.input_length:
raise ValueError(
f"`input_ids` are incorrectly preprocessed. `input_ids` length is {batch['input_ids'].shape[-1]}, but"
f" should be {self.input_length}."
)
if batch["labels"].shape[-1] != self.target_length:
raise ValueError(
f"`labels` are incorrectly preprocessed. `labels` length is {batch['labels'].shape[-1]}, but should be"
f" {self.target_length}."
)
# to check that tokens are correctly preprocessed, one can run `self.tokenizer.batch_decode(input_ids)` and `self.tokenizer.batch_decode(labels)` here...
return batch
def create_sentinel_ids(self, mask_indices):
"""
Sentinel ids creation given the indices that should be masked.
The start indices of each mask are replaced by the sentinel ids in increasing
order. Consecutive mask indices to be deleted are replaced with `-1`.
"""
start_indices = mask_indices - np.roll(mask_indices, 1, axis=-1) * mask_indices
start_indices[:, 0] = mask_indices[:, 0]
sentinel_ids = np.where(start_indices != 0, np.cumsum(start_indices, axis=-1), start_indices)
sentinel_ids = np.where(sentinel_ids != 0, (len(self.tokenizer) - sentinel_ids), 0)
sentinel_ids -= mask_indices - start_indices
return sentinel_ids
def filter_input_ids(self, input_ids, sentinel_ids):
"""
Puts sentinel mask on `input_ids` and fuse consecutive mask tokens into a single mask token by deleting.
This will reduce the sequence length from `expanded_inputs_length` to `input_length`.
"""
batch_size = input_ids.shape[0]
input_ids_full = np.where(sentinel_ids != 0, sentinel_ids, input_ids)
# input_ids tokens and sentinel tokens are >= 0, tokens < 0 are
# masked tokens coming after sentinel tokens and should be removed
input_ids = input_ids_full[input_ids_full >= 0].reshape((batch_size, -1))
input_ids = np.concatenate(
[input_ids, np.full((batch_size, 1), self.tokenizer.eos_token_id, dtype=np.int32)], axis=-1
)
return input_ids
def random_spans_noise_mask(self, length):
"""This function is copy of `random_spans_helper <https://github.com/google-research/text-to-text-transfer-transformer/blob/84f8bcc14b5f2c03de51bd3587609ba8f6bbd1cd/t5/data/preprocessors.py#L2682>`__ .
Noise mask consisting of random spans of noise tokens.
The number of noise tokens and the number of noise spans and non-noise spans
are determined deterministically as follows:
num_noise_tokens = round(length * noise_density)
num_nonnoise_spans = num_noise_spans = round(num_noise_tokens / mean_noise_span_length)
Spans alternate between non-noise and noise, beginning with non-noise.
Subject to the above restrictions, all masks are equally likely.
Args:
length: an int32 scalar (length of the incoming token sequence)
noise_density: a float - approximate density of output mask
mean_noise_span_length: a number
Returns:
a boolean tensor with shape [length]
"""
orig_length = length
num_noise_tokens = int(np.round(length * self.noise_density))
num_nonnoise_tokens = length - num_noise_tokens
# avoid degeneracy by ensuring positive numbers of noise and nonnoise tokens.
num_noise_tokens = min(max(num_noise_tokens, 1), length - 1)
# num_noise_tokens should be less than num_noise_tokens and num_nonnoise_tokens
num_noise_spans = int(np.round(min(num_noise_tokens, num_nonnoise_tokens) / self.mean_noise_span_length))
# avoid degeneracy by ensuring positive number of noise spans
num_noise_spans = max(num_noise_spans, 1)
# pick the lengths of the noise spans and the non-noise spans
def _random_segmentation(num_items, num_segments):
"""Partition a sequence of items randomly into non-empty segments.
Args:
num_items: an integer scalar > 0
num_segments: an integer scalar in [1, num_items]
Returns:
a Tensor with shape [num_segments] containing positive integers that add
up to num_items
"""
mask_indices = np.arange(num_items - 1) < (num_segments - 1)
np.random.shuffle(mask_indices)
first_in_segment = np.pad(mask_indices, [[1, 0]])
segment_id = np.cumsum(first_in_segment)
# count length of sub segments assuming that list is sorted
_, segment_length = np.unique(segment_id, return_counts=True)
return segment_length
noise_span_lengths = _random_segmentation(num_noise_tokens, num_noise_spans)
nonnoise_span_lengths = _random_segmentation(num_nonnoise_tokens, num_noise_spans)
interleaved_span_lengths = np.reshape(
np.stack([nonnoise_span_lengths, noise_span_lengths], axis=1), [num_noise_spans * 2]
)
span_starts = np.cumsum(interleaved_span_lengths)[:-1]
span_start_indicator = np.zeros((length,), dtype=np.int8)
span_start_indicator[span_starts] = True
span_num = np.cumsum(span_start_indicator)
is_noise = np.equal(span_num % 2, 1)
return is_noise[:orig_length]
data_collator = FlaxDataCollatorForT5MLM(
tokenizer=tokenizer,
noise_density=0.15,
mean_noise_span_length=3.0,
input_length=1024,
target_length=309,
pad_token_id=0,
decoder_start_token_id=0,
)
def generate_batch_splits(samples_idx: np.ndarray, batch_size: int, drop_last=True) -> np.ndarray:
"""Generate batches of data for a specified batch size from sample indices. If the dataset size is not divisible by
the batch size and `drop_last` is `True`, the last incomplete batch is dropped. Else, it is returned."""
num_samples = len(samples_idx)
if drop_last:
samples_to_remove = num_samples % batch_size
if samples_to_remove != 0:
samples_idx = samples_idx[:-samples_to_remove]
sections_split = num_samples // batch_size
samples_idx = samples_idx.reshape((sections_split, batch_size))
else:
sections_split = math.ceil(num_samples / batch_size)
samples_idx = np.array_split(samples_idx, sections_split)
return samples_idx
# Generate an epoch by shuffling sampling indices from the train dataset
num_train_samples = len(tokenized_datasets["train"])
# Avoid using jax.numpy here in case of TPU training
train_samples_idx = np.random.permutation(np.arange(num_train_samples))
train_batch_idx = generate_batch_splits(train_samples_idx, 4)
for step, batch_idx in enumerate(tqdm(train_batch_idx, desc="Training...", position=1)):
samples = [tokenized_datasets["train"][int(idx)] for idx in batch_idx]
model_inputs = data_collator(samples)
```
(from `run_t5_mlm_flax.py`)
### Expected behavior
I expected to run the run_t5_mlm_flax.py script without an error.
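One thing I noticed while writing this up (quite possibly the cause, though I have not confirmed it): `expanded_inputs_length`/`targets_length` are computed with `mean_noise_span_length=1.0`, but the collator is built with `mean_noise_span_length=3.0`. With density 0.15 on a 1024-token chunk, the collator keeps 870 non-noise tokens + 51 sentinels + 1 EOS = 922 tokens, which is exactly the reported length; in `run_t5_mlm_flax.py` both values come from the same `data_args.mean_noise_span_length`. A sketch of a consistent setup, reusing the helpers defined above:

```python
# Hedged sketch: derive the expanded length, target length, and collator from the
# same noise hyperparameters, as the original script does via data_args.
noise_density = 0.15
mean_noise_span_length = 3.0
max_seq_length = 1024

expanded_inputs_length, targets_length = compute_input_and_target_lengths(
    inputs_length=max_seq_length,
    noise_density=noise_density,
    mean_noise_span_length=mean_noise_span_length,
)
# group_texts should then chunk to this expanded_inputs_length before collation.

data_collator = FlaxDataCollatorForT5MLM(
    tokenizer=tokenizer,
    noise_density=noise_density,
    mean_noise_span_length=mean_noise_span_length,
    input_length=max_seq_length,
    target_length=targets_length,
    pad_token_id=tokenizer.pad_token_id,
    decoder_start_token_id=model.config.decoder_start_token_id,
)
```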
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23248/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23248/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23246
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23246/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23246/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23246/events
|
https://github.com/huggingface/transformers/pull/23246
| 1,702,638,643 |
PR_kwDOCUB6oc5QIEQC
| 23,246 |
Fix `from_config`
|
{
"login": "DyeKuu",
"id": 39208702,
"node_id": "MDQ6VXNlcjM5MjA4NzAy",
"avatar_url": "https://avatars.githubusercontent.com/u/39208702?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DyeKuu",
"html_url": "https://github.com/DyeKuu",
"followers_url": "https://api.github.com/users/DyeKuu/followers",
"following_url": "https://api.github.com/users/DyeKuu/following{/other_user}",
"gists_url": "https://api.github.com/users/DyeKuu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DyeKuu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DyeKuu/subscriptions",
"organizations_url": "https://api.github.com/users/DyeKuu/orgs",
"repos_url": "https://api.github.com/users/DyeKuu/repos",
"events_url": "https://api.github.com/users/DyeKuu/events{/privacy}",
"received_events_url": "https://api.github.com/users/DyeKuu/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,683 | 1,683 | 1,683 |
CONTRIBUTOR
| null |
# What does this PR do?
Resolves https://github.com/huggingface/transformers/issues/23241
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
@sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23246/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23246/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23246",
"html_url": "https://github.com/huggingface/transformers/pull/23246",
"diff_url": "https://github.com/huggingface/transformers/pull/23246.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23246.patch",
"merged_at": 1683665920000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23245
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23245/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23245/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23245/events
|
https://github.com/huggingface/transformers/pull/23245
| 1,702,635,244 |
PR_kwDOCUB6oc5QIDhw
| 23,245 |
Revert "[Doctests] Refactor doctests + add CI"
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23245). All of your documentation changes will be reflected on that endpoint."
] | 1,683 | 1,683 | 1,683 |
COLLABORATOR
| null |
Reverts huggingface/transformers#22987
This PR created a hard dependency on `pytest`, which we don't want in Transformers. Looking a bit more, it would be better if the whole `doctest_utils.py` module lived outside of the Transformers library, so it should be structured differently.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23245/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23245/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23245",
"html_url": "https://github.com/huggingface/transformers/pull/23245",
"diff_url": "https://github.com/huggingface/transformers/pull/23245.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23245.patch",
"merged_at": 1683660375000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23244
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23244/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23244/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23244/events
|
https://github.com/huggingface/transformers/pull/23244
| 1,702,605,475 |
PR_kwDOCUB6oc5QH9K7
| 23,244 |
Hot Fix
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"No need to have this PR anymore after #23245 and the decision made there."
] | 1,683 | 1,683 | 1,683 |
COLLABORATOR
| null |
# What does this PR do?
Fix the failing GitHub Action `Update Transformers metadata` due to the missing `pytest` after PR #22987. But it's kind of strange that a simple `from transformers.utils import direct_transformers_import` would need `pytest`. Maybe we need to rethink whether to have `from .doctest_utils import HfDocTestParser` inside the file `transformers/utils/__init__.py`.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23244/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23244/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23244",
"html_url": "https://github.com/huggingface/transformers/pull/23244",
"diff_url": "https://github.com/huggingface/transformers/pull/23244.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23244.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/23243
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23243/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23243/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23243/events
|
https://github.com/huggingface/transformers/pull/23243
| 1,702,584,480 |
PR_kwDOCUB6oc5QH4hW
| 23,243 |
CTC example: updated trainer parameters to save tokenizer
|
{
"login": "MKhalusova",
"id": 1065417,
"node_id": "MDQ6VXNlcjEwNjU0MTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/1065417?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MKhalusova",
"html_url": "https://github.com/MKhalusova",
"followers_url": "https://api.github.com/users/MKhalusova/followers",
"following_url": "https://api.github.com/users/MKhalusova/following{/other_user}",
"gists_url": "https://api.github.com/users/MKhalusova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MKhalusova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MKhalusova/subscriptions",
"organizations_url": "https://api.github.com/users/MKhalusova/orgs",
"repos_url": "https://api.github.com/users/MKhalusova/repos",
"events_url": "https://api.github.com/users/MKhalusova/events{/privacy}",
"received_events_url": "https://api.github.com/users/MKhalusova/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks @MKhalusova!"
] | 1,683 | 1,683 | 1,683 |
CONTRIBUTOR
| null |
The current example only passes `feature_extractor` to `Trainer` and thus `tokenizer` is not saved and won't be pushed to Hub. This PR fixes this by passing the `processor` to `Trainer`. It can probably be refactored further to get the tokenizer and feature_extractor from the instantiated processor, but with regard to behavior, this small fix seems to address the problem.
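In case it helps to picture the change, here is a minimal sketch (variable names such as `model`, `training_args`, and the datasets are illustrative placeholders from the typical CTC example setup, not the exact diff); since the processor bundles both the feature extractor and the tokenizer, saving and pushing it covers both:
```python
from transformers import Trainer

# Assumes model, training_args, train_dataset, eval_dataset, data_collator and
# processor are already set up as in the CTC example script.
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    data_collator=data_collator,
    tokenizer=processor,  # was: tokenizer=feature_extractor
)
```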
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23243/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23243/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23243",
"html_url": "https://github.com/huggingface/transformers/pull/23243",
"diff_url": "https://github.com/huggingface/transformers/pull/23243.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23243.patch",
"merged_at": 1683719111000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23242
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23242/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23242/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23242/events
|
https://github.com/huggingface/transformers/pull/23242
| 1,702,576,690 |
PR_kwDOCUB6oc5QH2xN
| 23,242 |
CTC example: updated trainer parameters to save tokenizer
|
{
"login": "MKhalusova",
"id": 1065417,
"node_id": "MDQ6VXNlcjEwNjU0MTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/1065417?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MKhalusova",
"html_url": "https://github.com/MKhalusova",
"followers_url": "https://api.github.com/users/MKhalusova/followers",
"following_url": "https://api.github.com/users/MKhalusova/following{/other_user}",
"gists_url": "https://api.github.com/users/MKhalusova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MKhalusova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MKhalusova/subscriptions",
"organizations_url": "https://api.github.com/users/MKhalusova/orgs",
"repos_url": "https://api.github.com/users/MKhalusova/repos",
"events_url": "https://api.github.com/users/MKhalusova/events{/privacy}",
"received_events_url": "https://api.github.com/users/MKhalusova/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Sorry, wrong branch, will open a new PR",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23242). All of your documentation changes will be reflected on that endpoint."
] | 1,683 | 1,683 | 1,683 |
CONTRIBUTOR
| null |
The current example only passes `feature_extractor` to `Trainer` and thus `tokenizer` is not saved and won't be pushed to Hub. This PR fixes this by passing the `processor` to `Trainer`. It can probably be refactored further to get the tokenizer and feature_extractor from the instantiated processor, but with regard to behavior, this small fix seems to address the problem.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23242/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23242/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23242",
"html_url": "https://github.com/huggingface/transformers/pull/23242",
"diff_url": "https://github.com/huggingface/transformers/pull/23242.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23242.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/23241
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23241/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23241/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23241/events
|
https://github.com/huggingface/transformers/issues/23241
| 1,702,480,561 |
I_kwDOCUB6oc5lecqx
| 23,241 |
`from_config` errors for `bigcode/santacoder`
|
{
"login": "DyeKuu",
"id": 39208702,
"node_id": "MDQ6VXNlcjM5MjA4NzAy",
"avatar_url": "https://avatars.githubusercontent.com/u/39208702?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DyeKuu",
"html_url": "https://github.com/DyeKuu",
"followers_url": "https://api.github.com/users/DyeKuu/followers",
"following_url": "https://api.github.com/users/DyeKuu/following{/other_user}",
"gists_url": "https://api.github.com/users/DyeKuu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DyeKuu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DyeKuu/subscriptions",
"organizations_url": "https://api.github.com/users/DyeKuu/orgs",
"repos_url": "https://api.github.com/users/DyeKuu/repos",
"events_url": "https://api.github.com/users/DyeKuu/events{/privacy}",
"received_events_url": "https://api.github.com/users/DyeKuu/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Not an expert but I feel like we could do something here https://github.com/huggingface/transformers/blob/main/src/transformers/models/auto/auto_factory.py#L410-L411\r\n\r\n```diff\r\n class_ref = config.auto_map[cls.__name__]\r\n if \"--\" in class_ref:\r\n repo_id, class_ref = class_ref.split(\"--\")\r\n else:\r\n repo_id = config.name_or_path\r\n- module_file, class_name = class_ref.split(\".\")\r\n- model_class = get_class_from_dynamic_module(repo_id, module_file + \".py\", class_name, **kwargs)\r\n+ model_class = get_class_from_dynamic_module(class_ref, repo_id, **kwargs)\r\n```",
"Sounds like the right fix if you want to make a quick PR!"
] | 1,683 | 1,683 | 1,683 |
CONTRIBUTOR
| null |
### System Info
transformers commit (current main branch): c34a525d2
### Who can help?
@sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```
from transformers import AutoConfig, AutoModelForCausalLM
model_name = "bigcode/santacoder"
config = AutoConfig.from_pretrained(model_name, trust_remote_code=True)
model = AutoModelForCausalLM.from_config(config, trust_remote_code=True)
```
gives error:
```
Exception has occurred: ValueError
not enough values to unpack (expected 2, got 1)
File "/fsx/kunhao/transformers/src/transformers/dynamic_module_utils.py", line 408, in get_class_from_dynamic_module
module_file, class_name = class_reference.split(".")
File "/fsx/kunhao/transformers/src/transformers/models/auto/auto_factory.py", line 411, in from_config
model_class = get_class_from_dynamic_module(repo_id, module_file + ".py", class_name, **kwargs)
ValueError: not enough values to unpack (expected 2, got 1)
```
However, directly calling `model = AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True)` works fine.
### Expected behavior
No error happens
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23241/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23241/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23240
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23240/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23240/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23240/events
|
https://github.com/huggingface/transformers/issues/23240
| 1,702,454,132 |
I_kwDOCUB6oc5leWN0
| 23,240 |
[New model] ImageBind: One Embedding Space To Bind Them All
|
{
"login": "xenova",
"id": 26504141,
"node_id": "MDQ6VXNlcjI2NTA0MTQx",
"avatar_url": "https://avatars.githubusercontent.com/u/26504141?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xenova",
"html_url": "https://github.com/xenova",
"followers_url": "https://api.github.com/users/xenova/followers",
"following_url": "https://api.github.com/users/xenova/following{/other_user}",
"gists_url": "https://api.github.com/users/xenova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xenova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xenova/subscriptions",
"organizations_url": "https://api.github.com/users/xenova/orgs",
"repos_url": "https://api.github.com/users/xenova/repos",
"events_url": "https://api.github.com/users/xenova/events{/privacy}",
"received_events_url": "https://api.github.com/users/xenova/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] |
open
| false | null |
[] |
[
"Hi @xenova , I would like to work on implementing this model.",
"> Hi @xenova , I would like to work on implementing this model.\n\nSweet!",
"Hi, since it looks like the PR for this model (#23284) has been closed, I would be interested in working on a new PR to implement the ImageBind model :)",
"I have opened a new PR to implement the ImageBind model: #26310."
] | 1,683 | 1,695 | null |
CONTRIBUTOR
| null |
### Model description
As stated in their [blog post](https://ai.facebook.com/blog/imagebind-six-modalities-binding-ai/),
> "[ImageBind is] the first AI model capable of binding information from six modalities. The [model](https://github.com/facebookresearch/ImageBind) learns a single embedding, or shared representation space, not just for text, image/video, and audio, but also for sensors that record depth (3D), thermal (infrared radiation), and inertial measurement units (IMU), which calculate motion and position."
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
GitHub repo: https://github.com/facebookresearch/ImageBind
Paper: https://facebookresearch.github.io/ImageBind/paper
Blog: https://ai.facebook.com/blog/imagebind-six-modalities-binding-ai/
Demo: https://imagebind.metademolab.com/
Video: https://dl.fbaipublicfiles.com/imagebind/imagebind_video.mp4
Weights: https://dl.fbaipublicfiles.com/imagebind/imagebind_huge.pth (currently only 1 that I can see)
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23240/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23240/timeline
| null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/23239
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23239/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23239/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23239/events
|
https://github.com/huggingface/transformers/pull/23239
| 1,702,430,904 |
PR_kwDOCUB6oc5QHWoy
| 23,239 |
[docs] Audio task guides fixes
|
{
"login": "MKhalusova",
"id": 1065417,
"node_id": "MDQ6VXNlcjEwNjU0MTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/1065417?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MKhalusova",
"html_url": "https://github.com/MKhalusova",
"followers_url": "https://api.github.com/users/MKhalusova/followers",
"following_url": "https://api.github.com/users/MKhalusova/following{/other_user}",
"gists_url": "https://api.github.com/users/MKhalusova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MKhalusova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MKhalusova/subscriptions",
"organizations_url": "https://api.github.com/users/MKhalusova/orgs",
"repos_url": "https://api.github.com/users/MKhalusova/repos",
"events_url": "https://api.github.com/users/MKhalusova/events{/privacy}",
"received_events_url": "https://api.github.com/users/MKhalusova/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,683 | 1,683 | 1,683 |
CONTRIBUTOR
| null |
Related to https://github.com/huggingface/transformers/issues/23188 and https://github.com/huggingface/transformers/issues/23222
In the guide examples, only `feature_extractor` is passed to `Trainer`, so that's the only part of the processor that gets pushed to Hub. This PR fixes the docs to pass `processor` to Trainer as the `tokenizer` parameter, so both `feature_extractor` and `tokenizer` are saved.
The behavior is confirmed with the ASR task guide example. We may also need to fix the example scripts. I'll look into it, and if a fix is needed, I'll create a separate PR.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23239/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23239/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23239",
"html_url": "https://github.com/huggingface/transformers/pull/23239",
"diff_url": "https://github.com/huggingface/transformers/pull/23239.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23239.patch",
"merged_at": 1683719134000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23238
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23238/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23238/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23238/events
|
https://github.com/huggingface/transformers/issues/23238
| 1,702,425,196 |
I_kwDOCUB6oc5lePJs
| 23,238 |
[Bug] Failure to generate Diffusion images / AI language responses when upgrading past 4.19.2
|
{
"login": "BlackWyvern",
"id": 7834910,
"node_id": "MDQ6VXNlcjc4MzQ5MTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/7834910?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BlackWyvern",
"html_url": "https://github.com/BlackWyvern",
"followers_url": "https://api.github.com/users/BlackWyvern/followers",
"following_url": "https://api.github.com/users/BlackWyvern/following{/other_user}",
"gists_url": "https://api.github.com/users/BlackWyvern/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BlackWyvern/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BlackWyvern/subscriptions",
"organizations_url": "https://api.github.com/users/BlackWyvern/orgs",
"repos_url": "https://api.github.com/users/BlackWyvern/repos",
"events_url": "https://api.github.com/users/BlackWyvern/events{/privacy}",
"received_events_url": "https://api.github.com/users/BlackWyvern/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false |
{
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
}
] |
[
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,683 | 1,693 | 1,693 |
NONE
| null |
### System Info
I've posted this to both Auto1x4 and Opinionated, but I don't think it's an issue on their end. So here I am.
For some reason, I am completely unable to generate images or use AI language models if I upgrade my transformers past 4.19.2.
This ticket details my entire [install/diagnostic workflow](https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/7351) and other examples of the issue.
In a nutshell, I can git-pull any of my AI-based generators, and they will all exhibit this issue until I change requirements.txt to transformers = 4.19.2.
My Auto1x4 & Opinionated environment and prompts are as follows:
1.5 ema-only safetensor from [runwayml](https://huggingface.co/runwayml/stable-diffusion-v1-5/tree/main)
On vae-ft-mse-840000-ema-pruned.ckpt from [stabilityai](https://huggingface.co/stabilityai/sd-vae-ft-mse-original/tree/main)
ETA weight: 31337, CFG: 7
30 Step Euler A
"3d rendering of a small metal cube sitting on a glass table"
4.19.2

4.25.1

4.26.1

4.28.1

The strangest thing is that after 4.19.2 all versions are wrong, but they're all CONSISTENTLY wrong. I don't really know where else to turn.
```
- `transformers` version: 4.29.0.dev0
- Platform: Windows-10-10.0.19045-SP0
- Python version: 3.10.6
- Huggingface_hub version: 0.13.3
- Safetensors version: 0.3.0
- PyTorch version (GPU?): 2.0.0+cu118 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Presumably
- Using distributed or parallel set-up in script?: No idea
```
Launch details from Opinionated
```
13:48:03-071493 INFO Starting SD.Next
13:48:03-092427 INFO Python 3.10.6 on Windows
13:48:03-350737 INFO Version: f6898c9a Fri May 5 13:40:53 2023 -0400
13:48:03-741693 INFO Setting environment tuning
13:48:03-745682 INFO nVidia CUDA toolkit detected
13:48:05-379334 INFO Torch 2.0.0+cu118
13:48:05-397286 INFO Torch backend: nVidia CUDA 11.8 cuDNN 8700
13:48:05-400261 INFO Torch detected GPU: NVIDIA GeForce RTX 3080 VRAM 10240 Arch (8, 6) Cores 68
```
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Personally reproducible by changing requirements.txt in any project to any version of transformers higher than 4.19.2
I have not gotten confirmation of anyone else having or being able to reproduce this issue.
### Expected behavior
Parity or near parity of model generation.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23238/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23238/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23237
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23237/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23237/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23237/events
|
https://github.com/huggingface/transformers/issues/23237
| 1,702,265,869 |
I_kwDOCUB6oc5ldoQN
| 23,237 |
Cannot Convert CLIP to TensorRT
|
{
"login": "junwang-wish",
"id": 112650299,
"node_id": "U_kgDOBrboOw",
"avatar_url": "https://avatars.githubusercontent.com/u/112650299?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/junwang-wish",
"html_url": "https://github.com/junwang-wish",
"followers_url": "https://api.github.com/users/junwang-wish/followers",
"following_url": "https://api.github.com/users/junwang-wish/following{/other_user}",
"gists_url": "https://api.github.com/users/junwang-wish/gists{/gist_id}",
"starred_url": "https://api.github.com/users/junwang-wish/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/junwang-wish/subscriptions",
"organizations_url": "https://api.github.com/users/junwang-wish/orgs",
"repos_url": "https://api.github.com/users/junwang-wish/repos",
"events_url": "https://api.github.com/users/junwang-wish/events{/privacy}",
"received_events_url": "https://api.github.com/users/junwang-wish/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"When it comes to production and deployment, you should use TensorFlow. This repo already supports TFCLIPModel and Triton Inference Server supports Tensorflow as well. I was able to convert some TF models in this repo into TensorRT without any bugs (including CLIP), and the success rate is 100%. For CLIP model, I recommend you use TF or `torch_tensorrt` (https://github.com/pytorch/TensorRT) to convert the model rather than ONNX path.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,683 | 1,687 | 1,687 |
NONE
| null |
### System Info
- `transformers` version: 4.26.1
- Platform: Linux-4.14.301-224.520.amzn2.x86_64-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.14.1
- PyTorch version (GPU?): 1.13.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
@amyeroberts
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
This is my code for exporting the CLIP image encoder (openai/clip-vit-large-patch14-336) as ONNX:
```
import os
from transformers import AutoConfig, AutoProcessor, CLIPModel, CLIPVisionModel
from transformers.modeling_outputs import BaseModelOutputWithPooling
from pathlib import Path
from transformers.onnx import export
from typing import Mapping, OrderedDict
from transformers.onnx import OnnxConfig, validate_model_outputs
from functools import partial
import torch
class CLIPImageEncoder(CLIPModel):
    def forward(self, pixel_values: torch.FloatTensor):
        outputs = self.get_image_features(pixel_values=pixel_values)
        return BaseModelOutputWithPooling(pooler_output=outputs.reshape(-1, 768))


class EncoderOnnxConfig(OnnxConfig):
    @property
    def inputs(self) -> Mapping[str, Mapping[int, str]]:
        return OrderedDict(
            [("pixel_values", {0: "batch", 1: "num_channels", 2: "height", 3: "width"})]
        )

    @property
    def outputs(self) -> Mapping[str, Mapping[int, str]]:
        return OrderedDict([("pooler_output", {0: "batch", 1: "dim"})])


config = AutoConfig.from_pretrained("openai/clip-vit-large-patch14-336")
onnx_config = EncoderOnnxConfig(config)
model = CLIPImageEncoder.from_pretrained("openai/clip-vit-large-patch14-336")
del model.text_model
del model.text_projection
processor = AutoProcessor.from_pretrained("openai/clip-vit-large-patch14-336")
onnx_path = Path("tmp_onnx/model.onnx")
onnx_inputs, onnx_outputs = export(processor.image_processor, model, onnx_config, onnx_config.default_onnx_opset, onnx_path)
validate_model_outputs(
    onnx_config, processor.image_processor, model, onnx_path, onnx_outputs, 1e-4
)
```
It successfully outputs `model.onnx`, I then try to convert it to TensorRT within `triton inference server` by adding the following to `config.pbtxt`
```
optimization {
  graph { level: 3 }
  execution_accelerators {
    gpu_execution_accelerator : [ {
      name : "tensorrt"
      parameters { key: "precision_mode" value: "FP16" }
      parameters { key: "max_workspace_size_bytes" value: "1073741824" }
    }]
  }
}
```
but it outputs the following error
```
2023-05-09 15:12:36.763934678 [E:onnxruntime:log, tensorrt_execution_provider.h:58 log] [2023-05-09 15:12:36 ERROR] 10: [optimizer.cpp::computeCosts::3728] Error Code 10: Internal Error (Could not find any implementation for node {ForeignNode[onnx::MatMul_3130 + (Unnamed Layer* 135) [Shuffle].../visual_projection/MatMul]}.)
Segmentation fault (core dumped)
```
Given the large number of users of CLIP, Hugging Face has already made the ONNX conversion step really smooth; if the exported ONNX could also be easily converted to TensorRT, that would add a lot of value.
I wonder if the error is due to the specific implementation of CLIP in the Hugging Face repo, e.g. the use of one operator instead of another even though the outcome is the same.
### Expected behavior
I follow the same ONNX conversion script for many other models such as MiniLM, T5, and DistilBERT, and the resulting ONNX can be easily converted to TensorRT inside Triton Inference Server. This is not the case for the CLIP (ViT) model. Ideally, all ONNX models exported by Hugging Face could be easily converted to TensorRT inside Triton Inference Server.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23237/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23237/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23236
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23236/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23236/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23236/events
|
https://github.com/huggingface/transformers/pull/23236
| 1,702,245,386 |
PR_kwDOCUB6oc5QGuug
| 23,236 |
accelerate deepspeed and gradient accumulation integrate
|
{
"login": "pacman100",
"id": 13534540,
"node_id": "MDQ6VXNlcjEzNTM0NTQw",
"avatar_url": "https://avatars.githubusercontent.com/u/13534540?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pacman100",
"html_url": "https://github.com/pacman100",
"followers_url": "https://api.github.com/users/pacman100/followers",
"following_url": "https://api.github.com/users/pacman100/following{/other_user}",
"gists_url": "https://api.github.com/users/pacman100/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pacman100/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pacman100/subscriptions",
"organizations_url": "https://api.github.com/users/pacman100/orgs",
"repos_url": "https://api.github.com/users/pacman100/repos",
"events_url": "https://api.github.com/users/pacman100/events{/privacy}",
"received_events_url": "https://api.github.com/users/pacman100/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"> Thanks for working on this. Is the diff longer than expected because of other PRs to be merged before?\r\n\r\nDue to updating from main, it is not showing the diff wrt previous branches. Weird. \r\n\r\n> Might be cool to Have Stas have a look (not pinging him here too early) once this is ready to merge and tests are confirmed to all pass.\r\n\r\nYes, definitely. All tests are passing already. Checked the slow tests offline.\r\n",
"@sgugger, now the diff is only specific to DeepSpeed changes + gradient accumulation changes + saving/loading changes wrt previous PR.",
"Hello @stas00, please review this PR which aims to shift the accelerate handling in Trainer to Accelerate. Thank you!"
] | 1,683 | 1,685 | 1,685 |
CONTRIBUTOR
| null |
### What does this PR do?
1. Shift deepspeed integration to accelerate
2. Shift Gradient Accumulation to Accelerate
3. Merge after #23168
4. No user-facing change. Users can now use `accelerate launch` with the Trainer for DeepSpeed, e.g.:
```
accelerate launch --num_processes=2 --mixed_precision=bf16 --use_deepspeed --gradient_accumulation_steps=1 --gradient_clipping=1 --zero3_init_flag=True --zero3_save_16bit_model=False --zero_stage=3 --offload_optimizer_device=none --offload_param_device=none ./examples/pytorch/text-classification/run_glue.py --model_name_or_path bert-base-cased --task_name $TASK_NAME --do_train --do_eval --max_seq_length 128 --per_device_train_batch_size 16 --learning_rate 5e-5 --num_train_epochs 3 --output_dir /tmp/$TASK_NAME/ --overwrite_output_dir --bf16
```
The usual run using `torchrun` and Trainer args is unaffected:
```
torchrun --nnodes 1 --nproc-per-node 2 ./examples/pytorch/text-classification/run_glue.py --model_name_or_path bert-base-cased --task_name $TASK_NAME --do_train --do_eval --max_seq_length 128 --per_device_train_batch_size 16 --learning_rate 5e-5 --num_train_epochs 3 --output_dir /tmp/$TASK_NAME/ --overwrite_output_dir --deepspeed ~/transformers/tests/deepspeed/ds_config_zero2.json
```
5. Save and load utils are changed accordingly
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23236/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23236/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23236",
"html_url": "https://github.com/huggingface/transformers/pull/23236",
"diff_url": "https://github.com/huggingface/transformers/pull/23236.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23236.patch",
"merged_at": 1685526383000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23235
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23235/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23235/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23235/events
|
https://github.com/huggingface/transformers/pull/23235
| 1,702,226,752 |
PR_kwDOCUB6oc5QGqlY
| 23,235 |
Support ratios for `logging_steps`, `eval_steps`, and `save_steps`
|
{
"login": "konstantinjdobler",
"id": 28780372,
"node_id": "MDQ6VXNlcjI4NzgwMzcy",
"avatar_url": "https://avatars.githubusercontent.com/u/28780372?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/konstantinjdobler",
"html_url": "https://github.com/konstantinjdobler",
"followers_url": "https://api.github.com/users/konstantinjdobler/followers",
"following_url": "https://api.github.com/users/konstantinjdobler/following{/other_user}",
"gists_url": "https://api.github.com/users/konstantinjdobler/gists{/gist_id}",
"starred_url": "https://api.github.com/users/konstantinjdobler/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/konstantinjdobler/subscriptions",
"organizations_url": "https://api.github.com/users/konstantinjdobler/orgs",
"repos_url": "https://api.github.com/users/konstantinjdobler/repos",
"events_url": "https://api.github.com/users/konstantinjdobler/events{/privacy}",
"received_events_url": "https://api.github.com/users/konstantinjdobler/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,683 | 1,683 | 1,683 |
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #23171
Adds support for *ratios* to the `logging_steps`, `eval_steps`, and `save_steps` arguments, i.e. if they are a float in range `[0,1)`, the steps are calculated as a ratio of total training steps.
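A rough sketch of how such a value could be resolved once the total number of training steps is known (illustrative only, with a hypothetical helper name; not necessarily the exact implementation):
```python
import math

def resolve_step_interval(value: float, max_steps: int) -> int:
    """Interpret a float in [0, 1) as a ratio of max_steps, otherwise as an absolute step count."""
    if 0 < value < 1:
        return math.ceil(value * max_steps)
    return int(value)

# e.g. with 10_000 total steps, logging_steps=0.05 would log every 500 steps
assert resolve_step_interval(0.05, 10_000) == 500
assert resolve_step_interval(500, 10_000) == 500
```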
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23235/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23235/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23235",
"html_url": "https://github.com/huggingface/transformers/pull/23235",
"diff_url": "https://github.com/huggingface/transformers/pull/23235.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23235.patch",
"merged_at": 1683651914000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23234
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23234/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23234/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23234/events
|
https://github.com/huggingface/transformers/pull/23234
| 1,702,169,357 |
PR_kwDOCUB6oc5QGeK6
| 23,234 |
Overhaul TF serving signatures + dummy inputs
|
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"This should be ready to review now - it shouldn't affect any existing models, as all our existing models override `serving`, `serving_output` and `dummy_inputs`. However, it should hopefully be a default that \"just works\" for a lot of future models, and means we can stop specifying the same information in three different places.",
"@gante Agreed! I can make that cleanup part of this PR, WDYT?",
"@Rocketknight1 Sounds good!",
"@Rocketknight1 plz ping when this PR is ready for a re-review 🔥 ",
"@gante @amyeroberts This should be ready for re-review now, but no rush because it's late on a Friday! The core idea is that I collapsed all of the redundant information we had down to a single source of truth. That source of truth is a new property on our models: `input_signature`. **All** serving methods and decorators have been removed from our models - serving on all models is now set in the `__init__` with \r\n`self.serving = tf.function(self.eager_serving, input_signature=[self.input_signature]`.\r\n\r\nThis fixes a major problem we had: As well as being a huge source of repetitive boilerplate, the serving signatures were incorrect in several places, and because they were compiled with a decorator, the decorator could not access `self.config`, which meant the serving signature could not include shape constraints that are defined in the config (such as `config.image_size`). This meant we just used `None` dimensions for dimensions that were actually not variable!\r\n\r\nAdditionally, `dummy_inputs` is now inferred from `self.input_signature` as well. Specifically, the `dummy_inputs` property fills in `None` dimensions in the `input_signature` with `2` and then just generates tensors with that shape and dtype, then builds the model with those.\r\n\r\n`dummy_inputs` can still be overridden, and this is used in a few models when they need particular dummy inputs to build without errors. The vast majority of `dummy_inputs` have been removed, though. `serving` can in theory be overridden too, but there was no need to do this in any of our models.\r\n\r\nFinally, the new base `serving_output` code covers most cases, and I'd estimate about 75% of `serving_outputs` in the codebase are gone. I expect there are going to be a few issues, but I'll keep an eye on the tests and make sure it's all okay!",
"It seems like there's a few issues caused by the default dummy input values triggering assertions or issues - I'll add dummy_inputs overrides to those models.",
"@gante @amyeroberts I think everything should pass now - ready for final review!",
"This PR reduces the size of our TF codebase (files matching `*_tf_*.py`) by a little under 5%, lol",
"I believe my question is related to this PR, but let me know if I should redirect this elsewhere! I'm trying to convert a pretrained model to a TFLite model but am running into issues due to its incorrect input_signature. If I understand correctly, this PR may have fixed that issue, but maybe only for future models? I'm using https://huggingface.co/microsoft/layoutlm-base-cased which was uploaded over 2 years ago.\r\n\r\nMy question - is there a way to override the pre-existing (incorrect) input_signature of a pretrained model that was presumably trained before this fix went in? I already tried reassigning but it errors due to it being a read-only property.\r\n\r\nReproducible code below:\r\nNote that `model.input_signature` includes `input_ids`, `attention_mask`, and `token_type_ids` only and that it's missing `bbox` (call definition [here](https://github.com/huggingface/transformers/blob/main/src/transformers/models/layoutlm/modeling_tf_layoutlm.py#L1433))\r\nI'm on transformers == 4.36.2\r\n```\r\nimport tensorflow as tf\r\nfrom transformers import AutoTokenizer, TFLayoutLMForTokenClassification\r\n\r\ndef get_sample_input():\r\n\r\n\ttokenizer = AutoTokenizer.from_pretrained(\"microsoft/layoutlm-base-uncased\")\r\n\r\n\twords = [\"Hello\", \"world\"]\r\n\tnormalized_word_boxes = [637, 773, 693, 782], [698, 773, 733, 782]\r\n\r\n\ttoken_boxes = []\r\n\tfor word, box in zip(words, normalized_word_boxes):\r\n\t word_tokens = tokenizer.tokenize(word)\r\n\t token_boxes.extend([box] * len(word_tokens))\r\n\r\n\t# add bounding boxes of cls + sep tokens\r\n\ttoken_boxes = [[0, 0, 0, 0]] + token_boxes + [[1000, 1000, 1000, 1000]]\r\n\r\n\tencoding = tokenizer(\" \".join(words), return_tensors=\"tf\")\r\n\tinput_ids = encoding[\"input_ids\"]\r\n\tbbox = tf.convert_to_tensor([token_boxes])\r\n\r\n\treturn input_ids, bbox\r\n\r\n\r\nmodel = TFLayoutLMForTokenClassification.from_pretrained(\"microsoft/layoutlm-base-uncased\")\r\n\r\n# has only input_ids, attention_mask, and token_type_ids\r\nprint(model.input_signature)\r\n\r\ninput_ids, bbox = get_sample_input()\r\n\r\n# note that the below works\r\noutputs = model(input_ids=input_ids, bbox=bbox)\r\n\r\n# now, convert to TFLite model\r\nconverter = tf.lite.TFLiteConverter.from_keras_model(model)\r\nconverter.optimizations = [tf.lite.Optimize.DEFAULT]\r\ntflite_model = converter.convert()\r\n\r\ninterpreter = tf.lite.Interpreter(model_content=tflite_model)\r\ninterpreter.allocate_tensors()\r\n\r\nmy_signature = interpreter.get_signature_runner()\r\n\r\n# below errors with \"ValueError: Invalid number of inputs provided for running a SignatureDef, expected 3 vs provided 2\"\r\n# because of the incorrect input_signature I believe\r\nmy_signature(input_ids=input_ids, bbox=bbox)\r\n```\r\n\r\n",
"Hi @echan5, the issue here is actually that our input signature for LayoutLM is missing `bbox`! This doesn't affect the standard model, but it does mean the export signature for TFLite is incorrect. I'll open a PR to fix it.\r\n\r\nIf you want to get your code working in the meantime, though, here's some background info:\r\n\r\nWhen you use `tf.lite.TFLiteConverter.from_keras_model`, what TFLite does internally is it first saves the model as a SavedModel export, then uses that as the base for the TFLite conversion (you can verify this in TF's [source code](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/python/lite.py#L1474))\r\n\r\nWhen a model is saved as SavedModel, you have to specify the **signatures** it will be saved with. Because a SavedModel is essentially a compiled version of the model, it doesn't have the same flexibility as the original - you have to specify the inputs it's going to receive and their dtypes.\r\n\r\nBy default, our models use the input names, shapes and dtypes specified in the model's `input_signature`. However, you can overrule that by passing your own `signatures`. You can see how we do this in the source code for `save_pretrained` [here](https://github.com/huggingface/transformers/blob/main/src/transformers/modeling_tf_utils.py#L2404-L2423).\r\n\r\nSo, if you want to convert a model to TFLite with a custom signature, what I'd do is save the model as SavedModel with a custom signature, and then do the TFLite conversion from the saved model so you have control of that attribute.\r\n\r\nIf that seems a bit complex, though, hold tight - I'll try to have the PR in soon!\r\n\r\n",
"@echan5 quick update - the newer LayoutLMv3 has the correct signature already. I'll still fix the issue for LayoutLM v1, but you should be able to switch to `microsoft/layoutlmv3-base` to fix your issue immediately and get improved performance too!",
"PR is open at #28640",
"PR is merged - @echan5 if you want to continue using LayoutLMv1 instead of switching to v3, you can get the updated signature by installing from main with `pip install --upgrade git+https://github.com/huggingface/transformers.git`",
"@Rocketknight1 - thank you so much for the quick fix/PR! I do have to use v1 for now, unfortunately, so this has been helpful. \r\n\r\nA note in case this helps others looking at my original code snippet `my_signature(input_ids=input_ids, bbox=bbox)` will still error with \"ValueError: Invalid number of inputs provided for running a SignatureDef, expected 4 vs provided 2\" because all 4 args are expected (no default values; `model(input_ids=input_ids, bbox=bbox)` does handle default values).\r\n\r\nHowever, the the below will make it work (I'm essentially copy and pasting the code for setting default values for `attention_mask` and `token_type_ids` from the original layoutlm `call` definition [here](https://github.com/huggingface/transformers/blob/main/src/transformers/models/layoutlm/modeling_tf_layoutlm.py#L840-L853)):\r\n```\r\nfrom transformers.tf_utils import shape_list\r\ninput_shape = shape_list(input_ids)\r\n\r\n# set default values\r\nattention_mask = tf.fill(dims=input_shape, value=1)\r\ntoken_type_ids = tf.fill(dims=input_shape, value=0)\r\n\r\n# below now works\r\ntflite_outputs = my_signature(input_ids=input_ids, bbox=bbox, attention_mask=attention_mask, token_type_ids=token_type_ids)\r\n```\r\n\r\nI'm not sure if the team might want to consider handling default values too, or if that's a more complicated implementation, but this works for me for now - thank you for your help! (Also thank you for pointing out how to override the signatures in save_pretrained, as I had missed that)\r\n\r\nedit: link was pointing to the incorrect lines",
"Hi @echan5 - we can add default values in the Python code for the model, but not to the serving signature, unfortunately! We can exclude an input entirely, but in general we prefer to leave important inputs available in the default signature.\r\n\r\nYou can, however, do what you did there and just pass dummy values, or alternatively you can save the model as SavedModel with a custom signature that excludes those inputs, with something like this:\r\n\r\n```python\r\nsignature = model.input_signature\r\ndel signature['attention_mask'] # Remove/edit keys that you don't want\r\nserving_fn = model.serving.get_concrete_function(signature)\r\nmodel.save('my_save_dir', include_optimizer=False, signatures=serving_fn)\r\n\r\ntflite_model = tf.lite.TFLiteConverter.from_saved_model('my_save_dir')\r\n```",
"That's a much cleaner approach than the dummy values - thank you!"
] | 1,683 | 1,706 | 1,684 |
MEMBER
| null |
Right now, our default TF serving signature is only really appropriate for BERT-like models, which means it needs to be overridden in most cases. This PR inspects `self.call` to figure out what to actually use, but the signature can still be overridden if required. It also moves the definition of the serving signature to the `__init__()`, which allows it to use values from the `config` to set parts of the shape (e.g. `num_channels`).
I might also explore doing something similar with `dummy_inputs` in this PR and build models via the serving signature, without needing to explicitly define `dummy_inputs`. Ideally, we could eliminate a lot of that boilerplate, which would make it much easier for users to contribute models and reduce the amount of work needed to turn an LLM translation from PyTorch into a working TF model.
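As a rough sketch of the idea (simplified stand-ins such as `DummyVisionConfig` and `eager_serving`, not the actual modelling code): the input signature can be built from config values and then used to compile the serving function, instead of hard-coding an all-`None` signature in a decorator:
```python
import tensorflow as tf

class DummyVisionConfig:  # stand-in for a real model config
    num_channels = 3
    image_size = 224

config = DummyVisionConfig()

# Shape constraints come from the config instead of being left as None
input_signature = {
    "pixel_values": tf.TensorSpec(
        shape=(None, config.num_channels, config.image_size, config.image_size),
        dtype=tf.float32,
        name="pixel_values",
    )
}

def eager_serving(inputs):
    # stand-in for the model's call() / serving_output() pair
    return {"pooled_output": tf.reduce_mean(inputs["pixel_values"], axis=[1, 2, 3])}

# In the real models this happens at __init__ time, roughly:
# self.serving = tf.function(self.eager_serving, input_signature=[self.input_signature])
serving = tf.function(eager_serving, input_signature=[input_signature])
```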
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23234/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23234/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23234",
"html_url": "https://github.com/huggingface/transformers/pull/23234",
"diff_url": "https://github.com/huggingface/transformers/pull/23234.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23234.patch",
"merged_at": 1684944205000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23233
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23233/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23233/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23233/events
|
https://github.com/huggingface/transformers/issues/23233
| 1,702,152,722 |
I_kwDOCUB6oc5ldMoS
| 23,233 |
404 Client Error: Not Found for url: https://huggingface.co/api/models/bert-large-cased
|
{
"login": "varungupta31",
"id": 51288316,
"node_id": "MDQ6VXNlcjUxMjg4MzE2",
"avatar_url": "https://avatars.githubusercontent.com/u/51288316?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/varungupta31",
"html_url": "https://github.com/varungupta31",
"followers_url": "https://api.github.com/users/varungupta31/followers",
"following_url": "https://api.github.com/users/varungupta31/following{/other_user}",
"gists_url": "https://api.github.com/users/varungupta31/gists{/gist_id}",
"starred_url": "https://api.github.com/users/varungupta31/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/varungupta31/subscriptions",
"organizations_url": "https://api.github.com/users/varungupta31/orgs",
"repos_url": "https://api.github.com/users/varungupta31/repos",
"events_url": "https://api.github.com/users/varungupta31/events{/privacy}",
"received_events_url": "https://api.github.com/users/varungupta31/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Same here.\r\nIt seems to have something to do with [this](https://twitter.com/huggingface/status/1655760648926642178)",
"This is a duplicate of #23228 and #23229. The HuggingFace Hub is undergoing some problems, you can follow progress on resolution on the [HF status twitter](https://twitter.com/hf_status) account or the [status page](https://status.huggingface.co/)."
] | 1,683 | 1,683 | 1,683 |
NONE
| null |
### System Info
- `transformers` version: 4.9.1
- Platform: Linux-4.15.0-210-generic-x86_64-with-debian-buster-sid
- Python version: 3.7.10
- PyTorch version (GPU?): 1.9.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
@ArthurZucker @younesbelkada @Narsil @sgugger
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. `from transformers import BertTokenizer, BertModel`
2. `tokenizer = BertTokenizer.from_pretrained('bert-large-cased')`
As discussed [here](https://huggingface.co/bert-large-cased#:~:text=from%20transformers%20import%20BertTokenizer%2C%20BertModel%0Atokenizer%20%3D%20BertTokenizer.from_pretrained(%27bert%2Dlarge%2Dcased%27))
Leads to the following `HTTPError`
```
HTTPError Traceback (most recent call last)
<ipython-input-6-5c580443a1ad> in <module>
----> 1 tokenizer = BertTokenizer.from_pretrained('bert-large-cased')
~/miniconda3/envs/cmd-chall/lib/python3.7/site-packages/transformers/tokenization_utils_base.py in from_pretrained(cls, pretrained_model_name_or_path, *init_inputs, **kwargs)
1646 # At this point pretrained_model_name_or_path is either a directory or a model identifier name
1647 fast_tokenizer_file = get_fast_tokenizer_file(
-> 1648 pretrained_model_name_or_path, revision=revision, use_auth_token=use_auth_token
1649 )
1650 additional_files_names = {
~/miniconda3/envs/cmd-chall/lib/python3.7/site-packages/transformers/tokenization_utils_base.py in get_fast_tokenizer_file(path_or_repo, revision, use_auth_token)
3406 """
3407 # Inspect all files from the repo/folder.
-> 3408 all_files = get_list_of_files(path_or_repo, revision=revision, use_auth_token=use_auth_token)
3409 tokenizer_files_map = {}
3410 for file_name in all_files:
~/miniconda3/envs/cmd-chall/lib/python3.7/site-packages/transformers/file_utils.py in get_list_of_files(path_or_repo, revision, use_auth_token)
1685 token = None
1686 model_info = HfApi(endpoint=HUGGINGFACE_CO_RESOLVE_ENDPOINT).model_info(
-> 1687 path_or_repo, revision=revision, token=token
1688 )
1689 return [f.rfilename for f in model_info.siblings]
~/miniconda3/envs/cmd-chall/lib/python3.7/site-packages/huggingface_hub/hf_api.py in model_info(self, repo_id, revision, token)
246 )
247 r = requests.get(path, headers=headers)
--> 248 r.raise_for_status()
249 d = r.json()
250 return ModelInfo(**d)
~/miniconda3/envs/cmd-chall/lib/python3.7/site-packages/requests/models.py in raise_for_status(self)
951
952 if http_error_msg:
--> 953 raise HTTPError(http_error_msg, response=self)
954
955 def close(self):
HTTPError: 404 Client Error: Not Found for url: https://huggingface.co/api/models/bert-large-cased
```
### Expected behavior
Should run without `HTTPError`
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23233/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23233/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23232
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23232/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23232/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23232/events
|
https://github.com/huggingface/transformers/pull/23232
| 1,702,146,661 |
PR_kwDOCUB6oc5QGZWM
| 23,232 |
Add Japanese translation to accelerate.mdx
|
{
"login": "rustinwelter",
"id": 131769788,
"node_id": "U_kgDOB9qlvA",
"avatar_url": "https://avatars.githubusercontent.com/u/131769788?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rustinwelter",
"html_url": "https://github.com/rustinwelter",
"followers_url": "https://api.github.com/users/rustinwelter/followers",
"following_url": "https://api.github.com/users/rustinwelter/following{/other_user}",
"gists_url": "https://api.github.com/users/rustinwelter/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rustinwelter/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rustinwelter/subscriptions",
"organizations_url": "https://api.github.com/users/rustinwelter/orgs",
"repos_url": "https://api.github.com/users/rustinwelter/repos",
"events_url": "https://api.github.com/users/rustinwelter/events{/privacy}",
"received_events_url": "https://api.github.com/users/rustinwelter/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks a lot!"
] | 1,683 | 1,683 | 1,683 |
CONTRIBUTOR
| null |
# What does this PR do?
Adds Japanese translation to accelerate.mdx
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #18413
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@omarespejel @sgugger
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23232/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23232/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23232",
"html_url": "https://github.com/huggingface/transformers/pull/23232",
"diff_url": "https://github.com/huggingface/transformers/pull/23232.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23232.patch",
"merged_at": 1683643903000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23231
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23231/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23231/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23231/events
|
https://github.com/huggingface/transformers/issues/23231
| 1,702,011,003 |
I_kwDOCUB6oc5lcqB7
| 23,231 |
Whisper is inconsistent with returning last segment
|
{
"login": "hollance",
"id": 346853,
"node_id": "MDQ6VXNlcjM0Njg1Mw==",
"avatar_url": "https://avatars.githubusercontent.com/u/346853?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hollance",
"html_url": "https://github.com/hollance",
"followers_url": "https://api.github.com/users/hollance/followers",
"following_url": "https://api.github.com/users/hollance/following{/other_user}",
"gists_url": "https://api.github.com/users/hollance/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hollance/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hollance/subscriptions",
"organizations_url": "https://api.github.com/users/hollance/orgs",
"repos_url": "https://api.github.com/users/hollance/repos",
"events_url": "https://api.github.com/users/hollance/events{/privacy}",
"received_events_url": "https://api.github.com/users/hollance/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1260952223,
"node_id": "MDU6TGFiZWwxMjYwOTUyMjIz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Discussion",
"name": "Discussion",
"color": "22870e",
"default": false,
"description": "Discussion on a topic (keep it focused or open a new issue though)"
}
] |
closed
| false |
{
"login": "hollance",
"id": 346853,
"node_id": "MDQ6VXNlcjM0Njg1Mw==",
"avatar_url": "https://avatars.githubusercontent.com/u/346853?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hollance",
"html_url": "https://github.com/hollance",
"followers_url": "https://api.github.com/users/hollance/followers",
"following_url": "https://api.github.com/users/hollance/following{/other_user}",
"gists_url": "https://api.github.com/users/hollance/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hollance/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hollance/subscriptions",
"organizations_url": "https://api.github.com/users/hollance/orgs",
"repos_url": "https://api.github.com/users/hollance/repos",
"events_url": "https://api.github.com/users/hollance/events{/privacy}",
"received_events_url": "https://api.github.com/users/hollance/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "hollance",
"id": 346853,
"node_id": "MDQ6VXNlcjM0Njg1Mw==",
"avatar_url": "https://avatars.githubusercontent.com/u/346853?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hollance",
"html_url": "https://github.com/hollance",
"followers_url": "https://api.github.com/users/hollance/followers",
"following_url": "https://api.github.com/users/hollance/following{/other_user}",
"gists_url": "https://api.github.com/users/hollance/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hollance/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hollance/subscriptions",
"organizations_url": "https://api.github.com/users/hollance/orgs",
"repos_url": "https://api.github.com/users/hollance/repos",
"events_url": "https://api.github.com/users/hollance/events{/privacy}",
"received_events_url": "https://api.github.com/users/hollance/received_events",
"type": "User",
"site_admin": false
}
] |
[
"Thanks for the super comprehensive write-up! For sure, let's discuss here what the best approach to fixing this issue is. Here are my thoughts:\r\n1. It's better to return the last segment with an incomplete timestamp rather than dropping it completely - when decoding with timestamps, many users will look just at `\"offsets\"` to get the transcription + timestamps bundled together, and ignore the overall `\"text\"` (since they assume the text in `\"offsets\"` will be the same as in `\"text\"`). If we drop the last segment because it doesn't have a timestamp, we effectively miss a transcription chunk (as well as a timestamp). IMO it's better to return the transcription for this last offset even if it has a missing timestamp, so at least we return the correct transcription overall. So here I do agree that there is a 'bug' in the tokenizer and we should change its behaviour to mirror that of the processor. Happy to remove `tokenizer. _compute_offsets` in favour of `tokenizer._decode_asr` (and if this is not allowed because of breaking changes, then have it call `tokenizer._decode_asr` under the hood).\r\n2. I'm not sure there's a clean way of doing the 'rewind' trick that OpenAI do: `transformers` is very distinct in it's three sequential stages of inference: feature extractor -> model -> tokenizer. OpenAI are a bit more liberal in going between model and tokenizer (e.g. with this 'rewind' trick and their decode with temperature fallback algorithm). Adding the 'rewind' trick to the pipeline method would add a lot of custom complexity that we probably want to avoid. What we can quite easily do is update the warning message to say something along the lines of `\"audio is cut off in the middle of a word, Whisper did not predict an ending timestamp\"` to at least inform users of why the last timestamp is missing.\r\n\r\nAlso cc @Narsil - would be interested in hearing your thoughts on this!",
"Agreed on uniformity in handling those \"incomplete\" things\r\n\r\n1. `(start, None)` is the easiest imo. (I agree with @sanchit-gandhi basically).\r\n2. `rewind` trick is dirty and cannot be done in pipelines. `pipelines` are stateless, and this is what enables orthogonal batching. OpenAI cannot do batching. We could have exactly their code too somewhere else, but not in the aforementionned locations.\r\n\r\nHaving predictable runtime is really important imo, and rewind trick is killing that. Also if the model is super bad (which can happen on random and badly finetuned models) then you'll still have incomplete chunks.\r\n\r\nFor `chunks` we cannot change the output things because of backward compatibity.\r\n",
"Agreed that we should fix this to keep the final segment. I can add it to my list. \r\n\r\nAlso agree that the rewinding trick isn't something we should do, as it interferes with the batching approach. Plus it's kind of a hack anyway.\r\n\r\nKeeping a timestamp of `None` to mean \"end of the input\" is simple on our end, but it might be less convenient for users to interpret what time this actually corresponds to (since the input may have padding and so it's not necessarily the length of the input, and the user may not be able to easily figure out where the padding occurs).\r\n",
"> it might be less convenient for users to interpret\r\n\r\nIndeed, but the timestamp is supposed to be emitted by the model and correspond the actual end of speech.\r\n\r\nThere can be padding, but also just silent audio. Ideally it would be nice to output a sensible default, but here if the model doesn't give us a timestamp, then... well we cannot really do anything about it, and we just don't have the information, trying to recreate something is IMHO lying to the user.\r\n\r\nFor instance, there is nothing preventing the model from outputting timestamps that are out of order even though it wouldn't make sense. but if the model is doing it I think we should just translate what the model is saying, even if nonsensical (gladly this doesn't seem to be actually occurring in the real world)",
"> if the model doesn't give us a timestamp, then... well we cannot really do anything about it, and we just don't have the information, trying to recreate something is IMHO lying to the user.\r\n\r\nFair point. 😄 ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,683 | 1,687 | 1,687 |
CONTRIBUTOR
| null |
### Feature request
When the input audio is cut off in the middle of a word, Whisper may not predict an ending timestamp. How we handle this differs between decoding using the tokenizer or using a pipeline. It's also different from how OpenAI handles this.
To see what happens, let's run Whisper:
```python
# load model
from transformers import AutoProcessor, WhisperForConditionalGeneration
processor = AutoProcessor.from_pretrained("openai/whisper-tiny")
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny")
tokenizer = processor.tokenizer
# load data
from datasets import load_dataset
dataset = load_dataset(
"hf-internal-testing/librispeech_asr_demo", "clean", split="validation"
)
dataset = dataset.sort("id")
# create example that reproduces the issue
import numpy as np
example1 = dataset[0]["audio"]["array"]
example2 = dataset[1]["audio"]["array"]
example3 = dataset[1]["audio"]["array"]
example = np.concatenate([example1, example2, example3]).astype(np.float32)
example = example[:200000]
# get input spectrogram
inputs = processor(example, sampling_rate=16000, return_tensors="pt")
input_features = inputs.input_features
# make prediction including timestamps
predicted_ids = model.generate(input_features, return_timestamps=True)
processor.decode(predicted_ids[0], decode_with_timestamps=True, output_offsets=True)
```
This outputs:
```python
{'text': "<|startoftranscript|><|en|><|transcribe|><|0.00|> Mr. Quilter is the apostle of the middle classes, and we are glad to welcome his gospel.<|6.00|><|6.00|> Nor is Mr. Quilter's manner less interesting than his matter.<|11.00|><|11.00|> Nor is Mr. Quilter's<|endoftext|>",
'offsets': [{'text': ' Mr. Quilter is the apostle of the middle classes, and we are glad to welcome his gospel.',
'timestamp': (0.0, 6.0)},
{'text': " Nor is Mr. Quilter's manner less interesting than his matter.",
'timestamp': (6.0, 11.0)}]}
```
Notice that the last segment is: ` Nor is Mr. Quilter's<|endoftext|>`. However, there is no entry for it in the `"offsets"` array. This happens because `_compute_offsets` in `tokenization_whisper.py` skips the last segment if it does not end with a timestamp token. The `"text"` output, however, does include that last segment.
OpenAI does the following:
```python
# load model
import whisper
model = whisper.load_model("tiny")
# load example as above...
# make prediction
result = model.transcribe(
example,
verbose=True,
condition_on_previous_text=False,
)
```
This does include the final segment:
```text
[00:00.000 --> 00:06.000] Mr. Quilter is the apostle of the middle classes, and we are glad to welcome his gospel.
[00:06.000 --> 00:11.000] Nor is Mr. Quilter's manner less interesting than his matter.
[00:11.000 --> 00:13.000] Norris, Mr. Quilters.
```
The reason the text differs from our Whisper output (`Nor is Mr. Quilter's` vs `Norris, Mr. Quilters.`) is that OpenAI detects that the last segment does not end with a timestamp and is therefore incomplete. It then "rewinds" to the last timestamp token and makes a new prediction from there. This new prediction can be different since the input spectrogram has essentially been shifted in time. Since only one segment is now returned, the OpenAI logic uses the start and end time of the audio as the timestamps for this final segment.
We can also use a `pipeline` to run Whisper:
```python
from transformers import pipeline
pipe = pipeline(task="automatic-speech-recognition", model="openai/whisper-tiny")
pipe(example, return_timestamps=True)
```
This outputs the following:
```python
{'text': " Mr. Quilter is the apostle of the middle classes, and we are glad to welcome his gospel. Nor is Mr. Quilter's manner less interesting than his matter. Nor is Mr. Quilter's",
'chunks': [{'timestamp': (0.0, 6.0),
'text': ' Mr. Quilter is the apostle of the middle classes, and we are glad to welcome his gospel.'},
{'timestamp': (6.0, 11.0),
'text': " Nor is Mr. Quilter's manner less interesting than his matter."},
{'timestamp': (11.0, None), 'text': " Nor is Mr. Quilter's"}]}
```
Here the list containing the timestamps is named `"chunks"` instead of `"offsets"` but otherwise contains the same information. But now it does include the final segment. The ending timestamp is None.
It also outputs a warning because the last segment doesn't have an ending timestamp (yes, WhisperTimeStampLogitsProcessor was used):
> "There was an error while processing timestamps, we haven't found a timestamp as last token. Was WhisperTimeStampLogitsProcessor used?"
Long story short:
- The behavior of `tokenizer.decode(..., with_offsets=True)` is different from the pipeline with `return_timestamps=True`, and both are different from what OpenAI does (`None` instead of the actual ending timestamp, no rewinding). In addition, the pipeline does not include the timestamps in the text but `tokenizer.decode()` does.
- Is the `tokenizer.decode(..., with_offsets=True)` behavior a bug?
- The implementation of how the pipeline implements this "split up segments by timestamps" (`tokenizer._decode_asr`) is different from how the tokenizer implements it (`tokenizer._compute_offsets`). So we have two different implementations doing the same thing but with different results.
- Could we perhaps make this a bit more consistent? The pipeline calls the returned segments "chunks", but also uses this same term for splitting up the audio into partially overlapping 30-second slices. Very confusing.
### Motivation
I'm currently adding word-level timestamps to Whisper: https://github.com/huggingface/transformers/pull/23205
In the OpenAI implementation these timestamps are added to the returned segments. Obviously if the last segment isn't being included, we can't add the word-level timestamps there. The word-level timestamps should also work in the pipeline.
### Your contribution
Rather than just submitting a PR to fix this issue, I'm opening this up for discussion to decide how we want to handle this, as it affects multiple pieces of an already complex system.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23231/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23231/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23230
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23230/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23230/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23230/events
|
https://github.com/huggingface/transformers/issues/23230
| 1,701,816,490 |
I_kwDOCUB6oc5lb6iq
| 23,230 |
llama model can't generate EOS
|
{
"login": "ZhangMaoTai",
"id": 102578441,
"node_id": "U_kgDOBh05CQ",
"avatar_url": "https://avatars.githubusercontent.com/u/102578441?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ZhangMaoTai",
"html_url": "https://github.com/ZhangMaoTai",
"followers_url": "https://api.github.com/users/ZhangMaoTai/followers",
"following_url": "https://api.github.com/users/ZhangMaoTai/following{/other_user}",
"gists_url": "https://api.github.com/users/ZhangMaoTai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ZhangMaoTai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ZhangMaoTai/subscriptions",
"organizations_url": "https://api.github.com/users/ZhangMaoTai/orgs",
"repos_url": "https://api.github.com/users/ZhangMaoTai/repos",
"events_url": "https://api.github.com/users/ZhangMaoTai/events{/privacy}",
"received_events_url": "https://api.github.com/users/ZhangMaoTai/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @ArthurZucker ",
"Hey!\r\nThis seems to be a bit similar to #23175. \r\nWhen the `generate` function is called, it should stop once the `eos_token` (which is `2`). \r\nIf the model does not predict it, then the generate function will not stop. This can come from the training, but is most probably not an issue with the `generate` function. \r\n\r\nYou can check the original behaviour here: https://github.com/facebookresearch/llama/blob/main/llama/generation.py you'll see that it does not stop on the `eos` token. ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"what was the soln?",
"Same issue",
"The model does not predict it with the generation parameters that are used. "
] | 1,683 | 1,698 | 1,688 |
NONE
| null |
### System Info
python 3.8.16
torch 1.13.1
transformers 4.28.0
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
from transformers import LlamaTokenizer, LlamaForCausalLM, GenerationConfig
# both model have same behavior
model_path = "luodian/llama-7b-hf"
# model_path = "huggyllama/llama-7b"
model = LlamaForCausalLM.from_pretrained(model_path)
tokenizer = LlamaTokenizer.from_pretrained(model_path)
tokenizer.pad_token = tokenizer.eos_token
text = ["Translate english to chinese: I love you.", "What is your name:"]
a = tokenizer(text, return_tensors="pt", padding="longest")
print(model.generate(**a, max_new_tokens=64))
```
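To make the problem easy to verify, here is a minimal check (reusing the variables above) that prints whether the EOS token id ever appears in the generated ids:
```python
output_ids = model.generate(**a, max_new_tokens=64)
# LLaMA uses token id 2 for EOS; this prints a boolean tensor indicating
# whether that id appears anywhere in the generated sequences.
print(tokenizer.eos_token_id)
print((output_ids == tokenizer.eos_token_id).any())
```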
### Expected behavior
The LLaMA model's `generate` method doesn't produce an EOS token under any circumstances.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23230/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23230/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23229
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23229/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23229/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23229/events
|
https://github.com/huggingface/transformers/issues/23229
| 1,701,805,591 |
I_kwDOCUB6oc5lb34X
| 23,229 |
OSError: sentence-transformers/all-distilroberta-v1 is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models'
|
{
"login": "alexcoca",
"id": 30216068,
"node_id": "MDQ6VXNlcjMwMjE2MDY4",
"avatar_url": "https://avatars.githubusercontent.com/u/30216068?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alexcoca",
"html_url": "https://github.com/alexcoca",
"followers_url": "https://api.github.com/users/alexcoca/followers",
"following_url": "https://api.github.com/users/alexcoca/following{/other_user}",
"gists_url": "https://api.github.com/users/alexcoca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alexcoca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alexcoca/subscriptions",
"organizations_url": "https://api.github.com/users/alexcoca/orgs",
"repos_url": "https://api.github.com/users/alexcoca/repos",
"events_url": "https://api.github.com/users/alexcoca/events{/privacy}",
"received_events_url": "https://api.github.com/users/alexcoca/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Experiencing the same issue with a different model. Looks like [the entire sentence-transformers page](https://huggingface.co/sentence-transformers) is down. Hopefully a fix is on the way.\r\n\r\nUPDATE: Indeed, it looks like HuggingFace is aware of the issues and working on it: https://twitter.com/huggingface/status/1655760648926642178",
"Experiencing the same issue with a model from `cross-encoder`... Looks like the entire [cross-encoder page](https://huggingface.co/cross-encoder) is down. ",
"Hi @alexcoca, thanks for raising this issue! \r\n\r\nWe're unfortunately experiencing a bug which means some popular organisations like sentence-transformers have had their model temporarily disappear from the Hub.\r\n\r\nThey will come back; we're working hard on getting this fixed ASAP! Apologies for the disruption.",
"@amyeroberts thanks for your hard work, everything is back to normal, I think. For future reference, users should know that `from_pretrained` methods have a `local_files_only` flag that can be passed to load a model that has been cached locally before. This can help in situations like this, thanks @Wauplin for pointing this out."
] | 1,683 | 1,683 | 1,683 |
NONE
| null |
### System Info
- `transformers` version: 4.28.1
- Platform: macOS-13.3-x86_64-i386-64bit
- Python version: 3.9.6
- Huggingface_hub version: 0.13.4
- Safetensors version: not installed
- PyTorch version (GPU?): 1.13.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Run the following code:
```python
from transformers import AutoModel
model = AutoModel.from_pretrained("sentence-transformers/all-distilroberta-v1")
```
This fails with the OSError in the issue title. I would normally raise this issue on the Hub, but to my amazement, `sentence-transformers/all-distilroberta-v1` no longer exists on the Hub. Our code worked reliably for months, so I presume this model, which is actually quite well known, was available at some point. I wonder why model loading no longer works. @younesbelkada, any idea?
### Expected behavior
Model is loaded.
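As a side note (and as pointed out in the discussion above), a checkpoint that was already downloaded can still be loaded from the local cache while the Hub is unavailable — a minimal sketch:
```python
from transformers import AutoModel

# Only works if the checkpoint has been downloaded (cached) before.
model = AutoModel.from_pretrained(
    "sentence-transformers/all-distilroberta-v1", local_files_only=True
)
```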
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23229/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23229/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23228
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23228/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23228/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23228/events
|
https://github.com/huggingface/transformers/issues/23228
| 1,701,747,527 |
I_kwDOCUB6oc5lbptH
| 23,228 |
504 Server Error: Gateway Time-out for BertTokenizer
|
{
"login": "ZhengMengbin",
"id": 26650043,
"node_id": "MDQ6VXNlcjI2NjUwMDQz",
"avatar_url": "https://avatars.githubusercontent.com/u/26650043?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ZhengMengbin",
"html_url": "https://github.com/ZhengMengbin",
"followers_url": "https://api.github.com/users/ZhengMengbin/followers",
"following_url": "https://api.github.com/users/ZhengMengbin/following{/other_user}",
"gists_url": "https://api.github.com/users/ZhengMengbin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ZhengMengbin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ZhengMengbin/subscriptions",
"organizations_url": "https://api.github.com/users/ZhengMengbin/orgs",
"repos_url": "https://api.github.com/users/ZhengMengbin/repos",
"events_url": "https://api.github.com/users/ZhengMengbin/events{/privacy}",
"received_events_url": "https://api.github.com/users/ZhengMengbin/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @ZhengMengbin, thanks for raising this issue. \r\n\r\nWe had a short outage of the hub website and API earlier today, which is likely the result of the 504 error. I'm able to load the `bert-base-uncased` checkpoint locally on the dev branch of transformers.\r\n\r\nIf the error persists on your end, could you reply with a reproducible code snippet and information about your running environment (run `transformers-cli env` in your terminal)?",
"> Hi @ZhengMengbin, thanks for raising this issue.\r\n> \r\n> We had a short outage of the hub website and API earlier today, which is likely the result of the 504 error. I'm able to load the `bert-base-uncased` checkpoint locally on the dev branch of transformers.\r\n> \r\n> If the error persists on your end, could you reply with a reproducible code snippet and information about your running environment (run `transformers-cli env` in your terminal)?\r\n\r\nI have a similar issue: huggingface_hub.utils._errors.HfHubHTTPError: 504 Server Error: Gateway Time-out for url: https://huggingface.co/api/models/gpt2.\r\nHere's what I got after run transformers-cli env:\r\n- `transformers` version: 4.10.3\r\n- Platform: Linux-5.15.0-71-generic-x86_64-with-glibc2.31\r\n- Python version: 3.9.15\r\n- PyTorch version (GPU?): 1.13.0+cu117 (True)\r\n- Tensorflow version (GPU?): not installed (NA)\r\n- Flax version (CPU?/GPU?/TPU?): not installed (NA)\r\n- Jax version: not installed\r\n- JaxLib version: not installed\r\n- Using GPU in script?: yes\r\n- Using distributed or parallel set-up in script?:no\r\n",
"Same issue here, requesting bert-base-multilingual-cased \r\n```\r\nhuggingface_hub.utils._errors.HfHubHTTPError: 504 Server Error: Gateway Time-out for url: https://huggingface.co/api/models/bert-base-multilingual-cased\r\n```\r\n\r\nAccessing this URL via browser grants a CloudFront error. \r\n\r\n\r\n",
"This seems to be working again since approx 10 Minutes. \r\nThe model can be loaded (although slow) via code and the api is answering when accessed via browser. \r\n",
"@mkuf @YiranHuangIrene Thanks for the additional information. \r\n\r\nUnfortunately we're still experiencing some issues with the hub which we're actively trying to resolve. Some of the features have come back online but we haven't returned to a full service yet. Apologies for the disruption. \r\n\r\nI'll reply here when I hear everything should be back to normal. Our [HF status twitter](https://twitter.com/hf_status) account is the best place to find the most up to date info on progress, and [status page](https://status.huggingface.co/) to see the current status.",
"Autotokenizer doesn't seem to work for any of the pretrained models: roberta, bert or distilled versions. Reason being Bad Gateway error:\r\n\r\n`OSError: There was a specific connection error when trying to load distilbert-base-uncased:\r\n504 Server Error: Gateway Time-out for url: [https://huggingface.co/distilbert-base-uncased/resolve/main/config.json`](https://huggingface.co/distilbert-base-uncased/resolve/main/config.json%60)\r\n\r\nI am using `requests==2.27.1` and no certificate validation `os.environ['CURL_CA_BUNDLE'] = ''`",
"Yes the website is currently experiencing some issues. Should come back in a few minutes, you can check the status [here](https://status.huggingface.co/)."
] | 1,683 | 1,684 | 1,683 |
NONE
| null |
```
File /usr/local/lib/python3.9/dist-packages/huggingface_hub/utils/_errors.py:259, in hf_raise_for_status(response, endpoint_name)
258 try:
--> 259 response.raise_for_status()
260 except HTTPError as e:
File /usr/local/lib/python3.9/dist-packages/requests/models.py:1021, in Response.raise_for_status(self)
1020 if http_error_msg:
-> 1021 raise HTTPError(http_error_msg, response=self)
HTTPError: 504 Server Error: Gateway Time-out for url: https://huggingface.co/api/models/bert-base-uncased/tree/main?recursive=True
The above exception was the direct cause of the following exception:
HfHubHTTPError Traceback (most recent call last)
Cell In[6], line 2
1 from transformers import BertTokenizer
----> 2 tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
File ~/.local/lib/python3.9/site-packages/transformers/tokenization_utils_base.py:1654, in PreTrainedTokenizerBase.from_pretrained(cls, pretrained_model_name_or_path, *init_inputs, **kwargs)
1651 vocab_files[file_id] = pretrained_model_name_or_path
1652 else:
1653 # At this point pretrained_model_name_or_path is either a directory or a model identifier name
-> 1654 fast_tokenizer_file = get_fast_tokenizer_file(
1655 pretrained_model_name_or_path,
1656 revision=revision,
1657 use_auth_token=use_auth_token,
1658 local_files_only=local_files_only,
1659 )
1660 additional_files_names = {
1661 "added_tokens_file": ADDED_TOKENS_FILE,
1662 "special_tokens_map_file": SPECIAL_TOKENS_MAP_FILE,
1663 "tokenizer_config_file": TOKENIZER_CONFIG_FILE,
1664 "tokenizer_file": fast_tokenizer_file,
1665 }
1666 # Look for the tokenizer files
File ~/.local/lib/python3.9/site-packages/transformers/tokenization_utils_base.py:3486, in get_fast_tokenizer_file(path_or_repo, revision, use_auth_token, local_files_only)
3466 """
3467 Get the tokenizer file to use for this version of transformers.
3468
(...)
3483 `str`: The tokenizer file to use.
3484 """
3485 # Inspect all files from the repo/folder.
-> 3486 all_files = get_list_of_files(
3487 path_or_repo, revision=revision, use_auth_token=use_auth_token, local_files_only=local_files_only
3488 )
3489 tokenizer_files_map = {}
3490 for file_name in all_files:
File ~/.local/lib/python3.9/site-packages/transformers/file_utils.py:2103, in get_list_of_files(path_or_repo, revision, use_auth_token, local_files_only)
2101 else:
2102 token = None
-> 2103 return list_repo_files(path_or_repo, revision=revision, token=token)
File /usr/local/lib/python3.9/dist-packages/huggingface_hub/utils/_deprecation.py:103, in _deprecate_arguments.<locals>._inner_deprecate_positional_args.<locals>.inner_f(*args, **kwargs)
101 message += "\n\n" + custom_message
102 warnings.warn(message, FutureWarning)
--> 103 return f(*args, **kwargs)
File /usr/local/lib/python3.9/dist-packages/huggingface_hub/utils/_validators.py:120, in validate_hf_hub_args.<locals>._inner_fn(*args, **kwargs)
117 if check_use_auth_token:
118 kwargs = smoothly_deprecate_use_auth_token(fn_name=fn.__name__, has_token=has_token, kwargs=kwargs)
--> 120 return fn(*args, **kwargs)
File /usr/local/lib/python3.9/dist-packages/huggingface_hub/hf_api.py:1966, in HfApi.list_repo_files(self, repo_id, revision, repo_type, timeout, token)
1936 @_deprecate_arguments(version="0.17", deprecated_args=["timeout"], custom_message="timeout is not used anymore.")
1937 @validate_hf_hub_args
1938 def list_repo_files(
(...)
1945 token: Optional[Union[bool, str]] = None,
1946 ) -> List[str]:
1947 """
1948 Get the list of files in a given repo.
1949
(...)
1964 `List[str]`: the list of files in a given repository.
1965 """
-> 1966 return [
1967 f.rfilename
1968 for f in self.list_files_info(
1969 repo_id=repo_id, paths=None, revision=revision, repo_type=repo_type, token=token
1970 )
1971 ]
File /usr/local/lib/python3.9/dist-packages/huggingface_hub/hf_api.py:1966, in <listcomp>(.0)
1936 @_deprecate_arguments(version="0.17", deprecated_args=["timeout"], custom_message="timeout is not used anymore.")
1937 @validate_hf_hub_args
1938 def list_repo_files(
(...)
1945 token: Optional[Union[bool, str]] = None,
1946 ) -> List[str]:
1947 """
1948 Get the list of files in a given repo.
1949
(...)
1964 `List[str]`: the list of files in a given repository.
1965 """
-> 1966 return [
1967 f.rfilename
1968 for f in self.list_files_info(
1969 repo_id=repo_id, paths=None, revision=revision, repo_type=repo_type, token=token
1970 )
1971 ]
File /usr/local/lib/python3.9/dist-packages/huggingface_hub/hf_api.py:1932, in HfApi.list_files_info(self, repo_id, paths, revision, repo_type, token)
1930 encoded_path = "/" + quote(path, safe="") if path else ""
1931 tree_url = f"{self.endpoint}/api/{repo_type}s/{repo_id}/tree/{revision}{encoded_path}"
-> 1932 for subpath_info in paginate(path=tree_url, headers=headers, params={"recursive": True}):
1933 if subpath_info["type"] == "file":
1934 yield _format_as_repo_file(subpath_info)
File /usr/local/lib/python3.9/dist-packages/huggingface_hub/utils/_pagination.py:36, in paginate(path, params, headers)
34 session = get_session()
35 r = session.get(path, params=params, headers=headers)
---> 36 hf_raise_for_status(r)
37 yield from r.json()
39 # Follow pages
40 # Next link already contains query params
File /usr/local/lib/python3.9/dist-packages/huggingface_hub/utils/_errors.py:301, in hf_raise_for_status(response, endpoint_name)
297 raise BadRequestError(message, response=response) from e
299 # Convert `HTTPError` into a `HfHubHTTPError` to display request information
300 # as well (request id and/or server error message)
--> 301 raise HfHubHTTPError(str(e), response=response) from e
HfHubHTTPError: 504 Server Error: Gateway Time-out for url: https://huggingface.co/api/models/bert-base-uncased/tree/main?recursive=True
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23228/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
}
|
https://api.github.com/repos/huggingface/transformers/issues/23228/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23227
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23227/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23227/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23227/events
|
https://github.com/huggingface/transformers/pull/23227
| 1,701,701,322 |
PR_kwDOCUB6oc5QE6QH
| 23,227 |
Fix typo ; Update output.mdx
|
{
"login": "furkanakkurt1335",
"id": 71407287,
"node_id": "MDQ6VXNlcjcxNDA3Mjg3",
"avatar_url": "https://avatars.githubusercontent.com/u/71407287?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/furkanakkurt1335",
"html_url": "https://github.com/furkanakkurt1335",
"followers_url": "https://api.github.com/users/furkanakkurt1335/followers",
"following_url": "https://api.github.com/users/furkanakkurt1335/following{/other_user}",
"gists_url": "https://api.github.com/users/furkanakkurt1335/gists{/gist_id}",
"starred_url": "https://api.github.com/users/furkanakkurt1335/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/furkanakkurt1335/subscriptions",
"organizations_url": "https://api.github.com/users/furkanakkurt1335/orgs",
"repos_url": "https://api.github.com/users/furkanakkurt1335/repos",
"events_url": "https://api.github.com/users/furkanakkurt1335/events{/privacy}",
"received_events_url": "https://api.github.com/users/furkanakkurt1335/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[] | 1,683 | 1,683 | 1,683 |
CONTRIBUTOR
| null |
# What does this PR do?
Fixes a typo.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23227/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23227/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23227",
"html_url": "https://github.com/huggingface/transformers/pull/23227",
"diff_url": "https://github.com/huggingface/transformers/pull/23227.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23227.patch",
"merged_at": 1683638378000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23226
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23226/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23226/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23226/events
|
https://github.com/huggingface/transformers/issues/23226
| 1,701,539,107 |
I_kwDOCUB6oc5la20j
| 23,226 |
NSP Support for Zero-shot Text Classification Pipeline
|
{
"login": "emrecncelik",
"id": 20845117,
"node_id": "MDQ6VXNlcjIwODQ1MTE3",
"avatar_url": "https://avatars.githubusercontent.com/u/20845117?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/emrecncelik",
"html_url": "https://github.com/emrecncelik",
"followers_url": "https://api.github.com/users/emrecncelik/followers",
"following_url": "https://api.github.com/users/emrecncelik/following{/other_user}",
"gists_url": "https://api.github.com/users/emrecncelik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/emrecncelik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/emrecncelik/subscriptions",
"organizations_url": "https://api.github.com/users/emrecncelik/orgs",
"repos_url": "https://api.github.com/users/emrecncelik/repos",
"events_url": "https://api.github.com/users/emrecncelik/events{/privacy}",
"received_events_url": "https://api.github.com/users/emrecncelik/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1771187924,
"node_id": "MDU6TGFiZWwxNzcxMTg3OTI0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Pipeline",
"name": "Core: Pipeline",
"color": "FF7066",
"default": false,
"description": "Internals of the library; Pipeline."
},
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] |
open
| false | null |
[] |
[
"cc @Narsil "
] | 1,683 | 1,683 | null |
NONE
| null |
### Feature request
Zero-shot classification can be solved with the NextSentencePrediction (NSP) task of BERT, and it has shown results competitive with NLI-based zero-shot classification in some cases. There could be a parameter for choosing the sub-method used by the pipeline, e.g. `pipeline(task="zero-shot-classification", type_="nsp")`, or we could simply add a task named "nsp-zeroshot-classification". This is also possible for MLM, which is a more widely used pretraining task across LMs.
### Motivation
As mentioned above, NSP has proven to be useful especially for languages that do not have access to an NLI dataset, since pre-training alone is enough. Although multilingual NLI models can also be used, they have proven to be worse than smaller monolingual models on this task, as one would expect. Even if this is a small detail that may not be worth adding to the codebase, I wanted to share this implementation so that anyone who is interested can take a look and try different methods.
Here are some references, one of which is my study, that use NSP for zero-shot classification.
Sun, Y., Zheng, Y., Hao, C., & Qiu, H. (2021). NSP-BERT: A Prompt-based Zero-Shot Learner Through an Original Pre-training Task--Next Sentence Prediction. arXiv preprint arXiv:2109.03564.
Çelik, E., & Dalyan, T. (2023). Unified benchmark for zero-shot Turkish text classification. Information Processing & Management, 60(3), 103298.
### Your contribution
I can open a PR; here's the implementation I did based on Sun et al. 2021. It is heavily based on the current NLI zero-shot pipeline class, but also adds a `reverse` argument which changes the order of the sentences for NSP.
```python
import numpy as np
from typing import List, Union
from transformers.utils import logging
from transformers.pipelines.base import ChunkPipeline, ArgumentHandler
from transformers.tokenization_utils import TruncationStrategy
from transformers.pipelines import ZeroShotClassificationArgumentHandler
logger = logging.get_logger(__name__)
class ZeroShotClassificationArgumentHandler(ArgumentHandler):
def _parse_labels(self, labels):
if isinstance(labels, str):
labels = [label.strip() for label in labels.split(",") if label.strip()]
return labels
def __call__(self, sequences, labels, hypothesis_template, reverse):
if len(labels) == 0 or len(sequences) == 0:
raise ValueError(
"You must include at least one label and at least one sequence."
)
if hypothesis_template.format(labels[0]) == hypothesis_template:
raise ValueError(
(
'The provided hypothesis_template "{}" was not able to be formatted with the target labels. '
"Make sure the passed template includes formatting syntax such as {{}} where the label should go."
).format(hypothesis_template)
)
if isinstance(sequences, str):
sequences = [sequences]
sequence_pairs = []
for sequence in sequences:
if reverse:
sequence_pairs.extend(
[[hypothesis_template.format(label), sequence] for label in labels]
)
else:
sequence_pairs.extend(
[[sequence, hypothesis_template.format(label)] for label in labels]
)
return sequence_pairs, sequences
class NSPZeroShotClassificationPipeline(ChunkPipeline):
def __init__(
self, args_parser=ZeroShotClassificationArgumentHandler(), *args, **kwargs
):
self._args_parser = args_parser
super().__init__(*args, **kwargs)
@property
def isNext_id(self):
return 0
def _parse_and_tokenize(
self,
sequence_pairs,
padding=True,
add_special_tokens=True,
truncation=TruncationStrategy.ONLY_FIRST,
**kwargs,
):
return_tensors = self.framework
if self.tokenizer.pad_token is None:
logger.error(
"Tokenizer was not supporting padding necessary for zero-shot, attempting to use "
" `pad_token=eos_token`"
)
self.tokenizer.pad_token = self.tokenizer.eos_token
try:
inputs = self.tokenizer(
sequence_pairs,
add_special_tokens=add_special_tokens,
return_tensors=return_tensors,
padding=padding,
truncation=truncation,
)
except Exception as e:
if "too short" in str(e):
inputs = self.tokenizer(
sequence_pairs,
add_special_tokens=add_special_tokens,
return_tensors=return_tensors,
padding=padding,
truncation=TruncationStrategy.DO_NOT_TRUNCATE,
)
else:
raise e
return inputs
def _sanitize_parameters(self, **kwargs):
if kwargs.get("multi_class", None) is not None:
kwargs["multi_label"] = kwargs["multi_class"]
logger.warning(
"The `multi_class` argument has been deprecated and renamed to `multi_label`. "
"`multi_class` will be removed in a future version of Transformers."
)
preprocess_params = {}
if "candidate_labels" in kwargs:
preprocess_params["candidate_labels"] = self._args_parser._parse_labels(
kwargs["candidate_labels"]
)
if "hypothesis_template" in kwargs:
preprocess_params["hypothesis_template"] = kwargs["hypothesis_template"]
if "reverse" in kwargs:
preprocess_params["reverse"] = kwargs["reverse"]
postprocess_params = {}
if "multi_label" in kwargs:
postprocess_params["multi_label"] = kwargs["multi_label"]
return preprocess_params, {}, postprocess_params
def __call__(
self,
sequences: Union[str, List[str]],
*args,
**kwargs,
):
if len(args) == 0:
pass
elif len(args) == 1 and "candidate_labels" not in kwargs:
kwargs["candidate_labels"] = args[0]
else:
raise ValueError(f"Unable to understand extra arguments {args}")
return super().__call__(sequences, **kwargs)
def preprocess(
self,
inputs,
candidate_labels=None,
hypothesis_template="This example is {}.",
reverse=False,
):
sequence_pairs, sequences = self._args_parser(
inputs, candidate_labels, hypothesis_template, reverse
)
for i, (candidate_label, sequence_pair) in enumerate(
zip(candidate_labels, sequence_pairs)
):
model_input = self._parse_and_tokenize([sequence_pair])
yield {
"candidate_label": candidate_label,
"sequence": sequences[0],
"is_last": i == len(candidate_labels) - 1,
**model_input,
}
def _forward(self, inputs):
candidate_label = inputs["candidate_label"]
sequence = inputs["sequence"]
model_inputs = {k: inputs[k] for k in self.tokenizer.model_input_names}
outputs = self.model(**model_inputs)
model_outputs = {
"candidate_label": candidate_label,
"sequence": sequence,
"is_last": inputs["is_last"],
**outputs,
}
return model_outputs
def postprocess(self, model_outputs, multi_label=False):
candidate_labels = [outputs["candidate_label"] for outputs in model_outputs]
sequences = [outputs["sequence"] for outputs in model_outputs]
logits = np.concatenate([output["logits"].numpy() for output in model_outputs])
N = logits.shape[0]
n = len(candidate_labels)
num_sequences = N // n
reshaped_outputs = logits.reshape((num_sequences, n, -1))
if multi_label or len(candidate_labels) == 1:
isNext_id = self.isNext_id
notNext_id = 1
isNext_contr_logits = reshaped_outputs[..., [notNext_id, isNext_id]]
scores = np.exp(isNext_contr_logits) / np.exp(isNext_contr_logits).sum(
-1, keepdims=True
)
scores = scores[..., 1]
else:
isNext_logits = reshaped_outputs[..., self.isNext_id]
scores = np.exp(isNext_logits) / np.exp(isNext_logits).sum(
-1, keepdims=True
)
top_inds = list(reversed(scores[0].argsort()))
return {
"sequence": sequences[0],
"labels": [candidate_labels[i] for i in top_inds],
"scores": scores[0, top_inds].tolist(),
}
```
This task can be used by registering it to the tasks, shown in example below:
```python
from nsp import NSPZeroShotClassificationPipeline
from transformers.pipelines import PIPELINE_REGISTRY
from transformers import BertForNextSentencePrediction, TFBertForNextSentencePrediction
PIPELINES = [
dict(
task="nsp-zeroshot-classification",
pipeline_class=NSPZeroShotClassificationPipeline,
pt_model=BertForNextSentencePrediction,
tf_model=TFBertForNextSentencePrediction,
default={"pt": ("bert-base-uncased")},
type="text",
)
]
for p in PIPELINES:
PIPELINE_REGISTRY.register_pipeline(**p)
```
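And a minimal usage sketch once the task has been registered (the model name, labels and template below are placeholders):
```python
from transformers import pipeline

classifier = pipeline("nsp-zeroshot-classification", model="bert-base-uncased")
result = classifier(
    "The new phone ships with a much better camera and battery life.",
    candidate_labels=["technology", "sports", "politics"],
    hypothesis_template="This text is about {}.",
)
print(result["labels"], result["scores"])
```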
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23226/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23226/timeline
| null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/23225
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23225/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23225/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23225/events
|
https://github.com/huggingface/transformers/pull/23225
| 1,701,501,848 |
PR_kwDOCUB6oc5QEP4E
| 23,225 |
fix: Update run_qa.py to work with deepset/germanquad
|
{
"login": "sjrl",
"id": 10526848,
"node_id": "MDQ6VXNlcjEwNTI2ODQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/10526848?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sjrl",
"html_url": "https://github.com/sjrl",
"followers_url": "https://api.github.com/users/sjrl/followers",
"following_url": "https://api.github.com/users/sjrl/following{/other_user}",
"gists_url": "https://api.github.com/users/sjrl/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sjrl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sjrl/subscriptions",
"organizations_url": "https://api.github.com/users/sjrl/orgs",
"repos_url": "https://api.github.com/users/sjrl/repos",
"events_url": "https://api.github.com/users/sjrl/events{/privacy}",
"received_events_url": "https://api.github.com/users/sjrl/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23225). All of your documentation changes will be reflected on that endpoint.",
"Could you explain what fails when leaving the code as is?",
"Sure! I get a `pyarrow` error that the format of `predictions` does not match the defined schema for `predictions` which expects the ID field to be of type string. I'm a bit busy at the moment, but later I can reproduce the error and copy paste the message here. ",
"That's is a bit weird as we only use pyarrow through `dataset` but this is after the dataset creation.",
"Is pyarrow also used in the Evaluation library for computing the squad_v2 metrics? It seemed the schema enforcement was for the predictions format because germanquad has its own schema in its dataset repo. ",
"Ah good catch! Yes I get the issue now."
] | 1,683 | 1,684 | 1,683 |
CONTRIBUTOR
| null |
# What does this PR do?
This updates the `run_qa.py` script in the `examples/pytorch/question-answering` folder to work with the `deepset/germanquad` dataset. The script expects the ID field of each example in the dataset to be a string, but that dataset stores the ID field as an int. So I added a `str` call on the ID field to make sure any integer IDs are converted into the string format that squad-style metrics expect when using the `evaluate` library. This is relevant for https://huggingface.co/datasets/deepset/germanquad, which stores each example's ID as an integer.
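For illustration, here is a minimal sketch of the kind of coercion involved (the variable names are placeholders, not the exact code in `run_qa.py`):
```python
# Hypothetical example records; in run_qa.py these come from the loaded dataset,
# where deepset/germanquad stores "id" as an integer.
examples = [{"id": 42, "answers": {"text": ["Berlin"], "answer_start": [0]}}]

# squad-style metrics in `evaluate` expect the "id" field to be a string,
# so integer IDs are converted with str() before building predictions/references.
references = [{"id": str(ex["id"]), "answers": ex["answers"]} for ex in examples]
predictions = [{"id": str(ex["id"]), "prediction_text": "Berlin"} for ex in examples]

print(type(references[0]["id"]))  # <class 'str'>
```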
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #N/A
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Hey @sgugger I would appreciate your review on this PR! I tagged you based on the recommendation of the PR template since you are listed as the maintainer of the pytorch examples.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23225/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23225/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23225",
"html_url": "https://github.com/huggingface/transformers/pull/23225",
"diff_url": "https://github.com/huggingface/transformers/pull/23225.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23225.patch",
"merged_at": 1683638411000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23224
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23224/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23224/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23224/events
|
https://github.com/huggingface/transformers/pull/23224
| 1,701,475,370 |
PR_kwDOCUB6oc5QEKLr
| 23,224 |
[SAM] Add resources
|
{
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23224). All of your documentation changes will be reflected on that endpoint."
] | 1,683 | 1,683 | 1,683 |
CONTRIBUTOR
| null |
# What does this PR do?
This PR adds links to 2 demo notebooks I made regarding SAM.
It also fixes a hyperlink which didn't render properly in the docs.
cc @younesbelkada
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23224/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23224/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23224",
"html_url": "https://github.com/huggingface/transformers/pull/23224",
"diff_url": "https://github.com/huggingface/transformers/pull/23224.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23224.patch",
"merged_at": 1683637101000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23223
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23223/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23223/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23223/events
|
https://github.com/huggingface/transformers/pull/23223
| 1,701,256,940 |
PR_kwDOCUB6oc5QDand
| 23,223 |
Fix wav2vec2 is_batched check to include 2-D numpy arrays
|
{
"login": "LWprogramming",
"id": 13173037,
"node_id": "MDQ6VXNlcjEzMTczMDM3",
"avatar_url": "https://avatars.githubusercontent.com/u/13173037?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LWprogramming",
"html_url": "https://github.com/LWprogramming",
"followers_url": "https://api.github.com/users/LWprogramming/followers",
"following_url": "https://api.github.com/users/LWprogramming/following{/other_user}",
"gists_url": "https://api.github.com/users/LWprogramming/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LWprogramming/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LWprogramming/subscriptions",
"organizations_url": "https://api.github.com/users/LWprogramming/orgs",
"repos_url": "https://api.github.com/users/LWprogramming/repos",
"events_url": "https://api.github.com/users/LWprogramming/events{/privacy}",
"received_events_url": "https://api.github.com/users/LWprogramming/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Note: I was having some trouble running the relevant test(s). For instance, `pytest tests/models/wav2vec2/test_feature_extraction_wav2vec2.py` fails with \r\n\r\n```\r\nFile ... path/to/file/transformers/src/transformers/training_args.py\", line 67, in <module>\r\n from accelerate import PartialState\r\nImportError: cannot import name 'PartialState' from 'accelerate' (/Users/leonwu/opt/anaconda3/lib/python3.9/site-packages/accelerate/__init__.py)\r\n```\r\n\r\nI suspect this might be related to #22816, but wasn't sure if I should downgrade `transformers` itself if I'm trying to make a PR.\r\n\r\n```\r\n$ transformers-cli env\r\n\r\n- `transformers` version: 4.29.0.dev0\r\n- Platform: macOS-13.2.1-arm64-arm-64bit\r\n- Python version: 3.10.10\r\n- Huggingface_hub version: 0.14.1\r\n- Safetensors version: not installed\r\n- PyTorch version (GPU?): 2.0.1 (False)\r\n- Tensorflow version (GPU?): not installed (NA)\r\n- Flax version (CPU?/GPU?/TPU?): not installed (NA)\r\n- Jax version: not installed\r\n- JaxLib version: not installed\r\n- Using GPU in script?: no\r\n- Using distributed or parallel set-up in script?: no\r\n```\r\n\r\nand my virtual environment's `accelerate` library is `0.19.0`.",
"_The documentation is not available anymore as the PR was closed or merged._",
"Regarding the venv issue you're facing, could you try to isolate it with:\r\n```python\r\nfrom transformers import training_args\r\n```\r\nI can't reproduce this error using `transformers` from main - could you try rebasing onto main to make sure it's not already been fixed?",
"> Regarding the venv issue you're facing, could you try to isolate it with:\r\n> \r\n> ```python\r\n> from transformers import training_args\r\n> ```\r\n> \r\n> I can't reproduce this error using `transformers` from main - could you try rebasing onto main to make sure it's not already been fixed?\r\n\r\nthat one's fixed, but I ran into a different issue which looks quite a bit like https://github.com/huggingface/transformers/issues/18355#issuecomment-1200940810. I'm going to try the instructions there-- installation probably going to take a while :))\r\n\r\nEDIT this kind of works potentially https://github.com/huggingface/transformers/issues/18355#issuecomment-1543277694",
"Noting here that I needed to separately `pip install parameterized` for some reason, but I've added the tests and confirmed they work now!",
"d'oh, I gotta be more careful with copilot generations! fixed",
"Cool! This looks ready to me @LWprogramming 👍 Would you mind just running the quality fix up:\r\n```\r\nmake style\r\n```\r\nAnd then pushing the change? This should fix the failing code quality test and re-trigger the CI",
"Is there a way to try running tests non-locally besides Circle CI? The `examples_torch` is failing on a wav2vec thing but I'm unsure if the bf16 unexpected result is a problem with my code, and when I run it locally with `pytest --make-reports=examples_torch ./examples/pytorch/ | tee tests_output.txt` it looks extremely slow.",
"Failing test looks unrelated!"
] | 1,683 | 1,684 | 1,684 |
CONTRIBUTOR
| null |
# What does this PR do?
Fixes #22175 so it treats 2-D numpy arrays as being batched.
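For context, a rough sketch of the intended behavior (an illustration of the kind of check used for other feature extractors, not necessarily the exact diff in this PR; variable names are mine):
```python
import numpy as np

raw_speech = np.random.randn(2, 16000)  # a 2-D array: two mono clips of 16k samples each

# Treat multi-dimensional numpy arrays as already batched, in addition to
# lists/tuples of per-example arrays.
is_batched_numpy = isinstance(raw_speech, np.ndarray) and len(raw_speech.shape) > 1
is_batched = is_batched_numpy or (
    isinstance(raw_speech, (list, tuple))
    and isinstance(raw_speech[0], (np.ndarray, tuple, list))
)
print(is_batched)  # True
```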
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sanchit-gandhi
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23223/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23223/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23223",
"html_url": "https://github.com/huggingface/transformers/pull/23223",
"diff_url": "https://github.com/huggingface/transformers/pull/23223.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23223.patch",
"merged_at": 1684774665000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23222
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23222/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23222/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23222/events
|
https://github.com/huggingface/transformers/issues/23222
| 1,701,087,107 |
I_kwDOCUB6oc5lZIeD
| 23,222 |
ASR example doesn't save tokenizer settings
|
{
"login": "RobertBaruch",
"id": 1783950,
"node_id": "MDQ6VXNlcjE3ODM5NTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/1783950?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RobertBaruch",
"html_url": "https://github.com/RobertBaruch",
"followers_url": "https://api.github.com/users/RobertBaruch/followers",
"following_url": "https://api.github.com/users/RobertBaruch/following{/other_user}",
"gists_url": "https://api.github.com/users/RobertBaruch/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RobertBaruch/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RobertBaruch/subscriptions",
"organizations_url": "https://api.github.com/users/RobertBaruch/orgs",
"repos_url": "https://api.github.com/users/RobertBaruch/repos",
"events_url": "https://api.github.com/users/RobertBaruch/events{/privacy}",
"received_events_url": "https://api.github.com/users/RobertBaruch/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The comment on `Trainer.push_to_hub` does say `Upload *self.model* and *self.tokenizer* to the 🤗 model hub`. And in fact, it does call the trainer's `tokenizer.save_pretrained` function. However, in `run_speech_recognition_ctc.py`, `tokenizer` is set to `feature_extractor` in the initialization, and `Wav2Vec2FeatureExtractor.save_pretrained` does not save tokenizer settings.",
"When I replace these lines at the end of `run_speech_recognition_ctc` from this:\r\n```py\r\n if training_args.push_to_hub:\r\n trainer.push_to_hub(**kwargs)\r\n else:\r\n trainer.create_model_card(**kwargs)\r\n```\r\nto this:\r\n```py\r\n tokenizer.save_pretrained(training_args.output_dir)\r\n trainer.create_model_card(**kwargs)\r\n if training_args.push_to_hub:\r\n trainer.push_to_hub(**kwargs)\r\n```\r\nwe do get tokenizer files. Also, may as well write the model card in any case.",
"cc @sanchit-gandhi ",
"The code in the `run_speech_recognition_ctc.py` script as well as the instructions from the [ASR guide](https://huggingface.co/docs/transformers/tasks/asr) that you used in issue https://github.com/huggingface/transformers/issues/23188 do the following:\r\n\r\n```python\r\ntrainer = Trainer(\r\n ...\r\n tokenizer=processor.feature_extractor,\r\n ...\r\n)\r\n```\r\n\r\nThe \"processor\" combines the feature extractor and tokenizer into a single class, but because we only pass the feature extractor to the Trainer, the tokenizer doesn't get saved. So that's clearly a mistake on our end.\r\n\r\nThe following fix should work:\r\n\r\n```python\r\ntrainer = Trainer(\r\n ...\r\n tokenizer=processor,\r\n ...\r\n)\r\n```\r\n\r\nWe're updating the docs to fix this. (It's a bit confusing that this argument from Trainer is called `tokenizer` but that's what's responsible for saving the non-model stuff.)\r\n",
"Probably we can directly add a new argument to the `Trainer` for the processor @hollance? This would stop all confusion IMO:\r\n```python\r\ntrainer = Trainer(\r\n ...\r\n processor=processor,\r\n ...\r\n)\r\n```\r\nHere we could expect the user to pass either one of `tokenizer` or `processor` to the `Trainer`. Within the `Trainer` we only use the `tokenizer` to get the model input name, which after #20117 we can now get directly from the `processor`.",
"Can confirm, setting `tokenizer=processor` in `run_speech_recognition_ctc.py` works. Agree that `tokenizer` is a bit of a misleading keyword then.",
"Keeping this open since we really should update the Trainer to take `processor` as an argument over `tokenizer=processor`",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,683 | 1,691 | 1,691 |
CONTRIBUTOR
| null |
### System Info
- `transformers` version: 4.28.1
- Platform: Windows-10-10.0.22621-SP0
- Python version: 3.11.2
- Huggingface_hub version: 0.14.1
- Safetensors version: not installed
- PyTorch version (GPU?): 2.0.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: NO
- Using distributed or parallel set-up in script?: NO
### Who can help?
@sgugger
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Run training using [run_speech_recognition_ctc.py](https://github.com/huggingface/transformers/blob/main/examples/pytorch/speech-recognition/run_speech_recognition_ctc.py) and the included json file.
[train.json.zip](https://github.com/huggingface/transformers/files/11425889/train.json.zip)
Next, attempt to infer using the trained model:
```py
import os.path
from datasets import load_dataset
from datasets import Audio
from transformers import pipeline, AutomaticSpeechRecognitionPipeline
cv13 = load_dataset(
    "mozilla-foundation/common_voice_13_0",
    "eo",
    split="train[:10]",
)
print(cv13[0])
cv13 = cv13.cast_column("audio", Audio(sampling_rate=16000))
sampling_rate = cv13.features["audio"].sampling_rate
audio_file = cv13[0]["audio"]["path"]
d, n = os.path.split(audio_file)
audio_file = os.path.join(d, "eo_train_0", n)
print(audio_file)
transcriber: AutomaticSpeechRecognitionPipeline = pipeline(
    "automatic-speech-recognition",
    model="xekri/wav2vec2-common_voice_13_0-eo-demo2",
)
print(transcriber(audio_file))
```
Output:
```
Found cached dataset common_voice_13_0 (C:/Users/rober/.cache/huggingface/datasets/mozilla-foundation___common_voice_13_0/eo/13.0.0/22809012aac1fc9803eaffc44122e4149043748e93933935d5ea19898587e4d7)
{'client_id': 'b8c51543fe043c8f27d0de0428e060e309d9d824ac9ad33e40aba7062dafd99e2e87bbedc671007e31973afb599b1c290dbd922637b79132727b5f37bc1ee88e', 'path': 'C:\\Users\\rober\\.cache\\huggingface\\datasets\\downloads\\extracted\\1dea8f044902d398c6cb09bfb5629dc2fbd80a6309ddd435c4554fa38f730472\\common_voice_eo_20453647.mp3', 'audio': {'path': 'C:\\Users\\rober\\.cache\\huggingface\\datasets\\downloads\\extracted\\1dea8f044902d398c6cb09bfb5629dc2fbd80a6309ddd435c4554fa38f730472\\common_voice_eo_20453647.mp3', 'array': array([ 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, ...,
-1.16407300e-11, 1.07661449e-12, -1.71219774e-11]), 'sampling_rate': 48000}, 'sentence': 'Ĉu ili tiel plaĉas al vi?', 'up_votes': 2, 'down_votes': 0, 'age': 'twenties', 'gender': 'male', 'accent': 'Internacia', 'locale': 'eo', 'segment': '', 'variant': ''}
C:\Users\rober\.cache\huggingface\datasets\downloads\extracted\1dea8f044902d398c6cb09bfb5629dc2fbd80a6309ddd435c4554fa38f730472\eo_train_0\common_voice_eo_20453647.mp3
Downloading (…)lve/main/config.json: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2.27k/2.27k [00:00<?, ?B/s]
F:\eo-reco\.env\Lib\site-packages\huggingface_hub\file_download.py:133: UserWarning: `huggingface_hub` cache-system uses symlinks by default to efficiently store duplicated files but your machine does not support them in C:\Users\rober\.cache\huggingface\hub. Caching files will still work but in a degraded version that might require more space on your disk. This warning can be disabled by setting the `HF_HUB_DISABLE_SYMLINKS_WARNING` environment variable. For more details, see https://huggingface.co/docs/huggingface_hub/how-to-cache#limitations.
To support symlinks on Windows, you either need to activate Developer Mode or to run Python as an administrator. In order to see activate developer mode, see this article: https://docs.microsoft.com/en-us/windows/apps/get-started/enable-your-device-for-development
warnings.warn(message)
Downloading pytorch_model.bin: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1.26G/1.26G [01:56<00:00, 10.8MB/s]
Traceback (most recent call last):
File "F:\eo-reco\infer.py", line 20, in <module>
transcriber: AutomaticSpeechRecognitionPipeline = pipeline(
^^^^^^^^^
File "F:\eo-reco\.env\Lib\site-packages\transformers\pipelines\__init__.py", line 876, in pipeline
tokenizer = AutoTokenizer.from_pretrained(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\eo-reco\.env\Lib\site-packages\transformers\models\auto\tokenization_auto.py", line 723, in from_pretrained
return tokenizer_class_py.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\eo-reco\.env\Lib\site-packages\transformers\tokenization_utils_base.py", line 1795, in from_pretrained
raise EnvironmentError(
OSError: Can't load tokenizer for 'xekri/wav2vec2-common_voice_13_0-eo-demo2'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'xekri/wav2vec2-common_voice_13_0-eo-demo2' is the correct path to a directory containing all relevant files for a Wav2Vec2CTCTokenizer tokenizer.
```
Checking the uploaded repo, it seems that no tokenizer-related files (e.g. `vocab.json`, `tokenizer_config.json`, etc) were pushed.
I added some debug to `run_speech_recognition_ctc.py` and found that these files were generated locally, but got deleted locally during step 7 when `Trainer` was initialized (line 701).
The output from `run_speech_recognition_ctc.py` at that point was:
```
loading file vocab.json
loading file tokenizer_config.json
loading file added_tokens.json
loading file special_tokens_map.json
Adding <s> to the vocabulary
Adding </s> to the vocabulary
Cloning https://huggingface.co/xekri/wav2vec2-common_voice_13_0-eo-demo into local empty directory.
05/08/2023 15:06:23 - WARNING - huggingface_hub.repository - Cloning https://huggingface.co/xekri/wav2vec2-common_voice_13_0-eo-demo into local empty directory.
max_steps is given, it will override any value given in num_train_epochs
```
It seems that instantiating `Trainer` with `push_to_hub=True` creates a new repo and then empties the local output directory so that it can clone the repo into it. This deletes any files already written to that directory, including the tokenizer configs.
### Expected behavior
No error.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23222/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23222/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23221
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23221/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23221/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23221/events
|
https://github.com/huggingface/transformers/issues/23221
| 1,701,046,675 |
I_kwDOCUB6oc5lY-mT
| 23,221 |
T5 working on cpu but not gpu
|
{
"login": "mystsec",
"id": 122738547,
"node_id": "U_kgDOB1DXcw",
"avatar_url": "https://avatars.githubusercontent.com/u/122738547?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mystsec",
"html_url": "https://github.com/mystsec",
"followers_url": "https://api.github.com/users/mystsec/followers",
"following_url": "https://api.github.com/users/mystsec/following{/other_user}",
"gists_url": "https://api.github.com/users/mystsec/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mystsec/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mystsec/subscriptions",
"organizations_url": "https://api.github.com/users/mystsec/orgs",
"repos_url": "https://api.github.com/users/mystsec/repos",
"events_url": "https://api.github.com/users/mystsec/events{/privacy}",
"received_events_url": "https://api.github.com/users/mystsec/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @mystsec, thanks for raising this issue! \r\n\r\nVersion 4.16.2 is over a year old and since then there have been a lot of updates to our generation code. I'm able to run the example provide on the most recent release of transformers - v4.28.1\r\n\r\n",
"@amyeroberts I updated transformers to 4.28.1, and now I get the following warning + similar error to earlier:\r\n\r\n```\r\n/home/user/.local/lib/python3.10/site-packages/transformers/models/t5/tokenization_t5.py:163: FutureWarning: This tokenizer was incorrectly instantiated with a model max length of 512 which will be corrected in Transformers v5.\r\nFor now, this behavior is kept to avoid breaking backwards compatibility when padding/encoding with `truncation is True`.\r\n- Be aware that you SHOULD NOT rely on t5-large automatically truncating your input to 512 when padding/encoding.\r\n- If you want to encode/pad to sequences longer than 512 you can either instantiate this tokenizer with `model_max_length` or pass `max_length` when encoding/padding.\r\n- To avoid this warning, please instantiate this tokenizer with `model_max_length` set to your preferred value.\r\n warnings.warn(\r\nTraceback (most recent call last):\r\n File \"/home/user/testing/summary.py\", line 81, in <module>\r\n outputs = model.generate(\r\n File \"/home/user/.local/lib/python3.10/site-packages/torch/utils/_contextlib.py\", line 115, in decorate_context\r\n return func(*args, **kwargs)\r\n File \"/home/user/.local/lib/python3.10/site-packages/transformers/generation/utils.py\", line 1524, in generate\r\n return self.beam_search(\r\n File \"/home/user/.local/lib/python3.10/site-packages/transformers/generation/utils.py\", line 2897, in beam_search\r\n sequence_outputs = beam_scorer.finalize(\r\n File \"/home/user/.local/lib/python3.10/site-packages/transformers/generation/beam_search.py\", line 360, in finalize\r\n decoded: torch.LongTensor = input_ids.new(batch_size * self.num_beam_hyps_to_keep, sent_max_len)\r\nRuntimeError: Trying to create tensor with negative dimension -36028792732385279: [1, -36028792732385279]\r\n```",
"@mystsec Could you try doing a fresh install of transformers in your environment? I'm unable to replicate the error with transformers 4.28.1 on both cpu and gpu.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,683 | 1,686 | 1,686 |
NONE
| null |
### System Info
transformers 4.16.2
ubuntu 22.04
python 3.10.6
gpu = amd radeon vii
torch 2.0.1 + rocm 5.4.2
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I am playing around with summarization, and the following code works fine when `device = torch.device("cpu")`, but when I try on cuda I get the error below.
```
model = T5ForConditionalGeneration.from_pretrained("t5-large")
device = torch.device("cuda")
model = model.to(device)
tokenizer = T5Tokenizer.from_pretrained("t5-large")
inputs = tokenizer.encode("summarize: " + text, return_tensors="pt", max_length=512, truncation=True).to(device)
outputs = model.generate(
    inputs,
    max_length=150,
    min_length=40,
    length_penalty=2.0,
    num_beams=4,
    early_stopping=True)
print(tokenizer.decode(outputs[0]))
```
```
Traceback (most recent call last):
File "/home/user/testing/summary.py", line 81, in <module>
outputs = model.generate(
File "/home/user/.local/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/home/user/.local/lib/python3.10/site-packages/transformers/generation_utils.py", line 1234, in generate
return self.beam_search(
File "/home/user/.local/lib/python3.10/site-packages/transformers/generation_utils.py", line 2026, in beam_search
beam_outputs = beam_scorer.process(
File "/home/user/.local/lib/python3.10/site-packages/transformers/generation_beam_search.py", line 257, in process
input_ids[batch_beam_idx].clone(),
IndexError: index -18014394218708992 is out of bounds for dimension 0 with size 4
```
### Expected behavior
When running on cpu, the code runs without errors and prints the output. I am trying to get the same results with gpu.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23221/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23221/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23220
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23220/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23220/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23220/events
|
https://github.com/huggingface/transformers/pull/23220
| 1,700,972,961 |
PR_kwDOCUB6oc5QCeWW
| 23,220 |
Pin tensorflow-probability
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,683 | 1,683 | 1,683 |
COLLABORATOR
| null |
# What does this PR do?
All is said in the title. Latest release requires TensorFlow>=2.12 which we don't support (not sure why, it's been a month and a half).
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23220/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23220/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23220",
"html_url": "https://github.com/huggingface/transformers/pull/23220",
"diff_url": "https://github.com/huggingface/transformers/pull/23220.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23220.patch",
"merged_at": 1683585383000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23219
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23219/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23219/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23219/events
|
https://github.com/huggingface/transformers/issues/23219
| 1,700,886,843 |
I_kwDOCUB6oc5lYXk7
| 23,219 |
ValueError: DistilBertModel does not support gradient checkpointing.
|
{
"login": "sachinya00",
"id": 45940252,
"node_id": "MDQ6VXNlcjQ1OTQwMjUy",
"avatar_url": "https://avatars.githubusercontent.com/u/45940252?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sachinya00",
"html_url": "https://github.com/sachinya00",
"followers_url": "https://api.github.com/users/sachinya00/followers",
"following_url": "https://api.github.com/users/sachinya00/following{/other_user}",
"gists_url": "https://api.github.com/users/sachinya00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sachinya00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sachinya00/subscriptions",
"organizations_url": "https://api.github.com/users/sachinya00/orgs",
"repos_url": "https://api.github.com/users/sachinya00/repos",
"events_url": "https://api.github.com/users/sachinya00/events{/privacy}",
"received_events_url": "https://api.github.com/users/sachinya00/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Yes DistilBert does not support gradient checkpointing. DistilBERT is a small model, so that feature is not needed for it.",
"I want to run this model across large batch sizes to see how much I can benefit from this. Is there any way I can enable for this model as well using torch.utils.checkpoint.checkpoint, but not sure where to apply checkpointing for this. ",
"I want to try out DeepSpeed’s activation checkpointing but can't use this on above model as it requires to enable the \"gradient_checkpointing\" flag in the HF trainer.\r\nI was going through the DeepSpeed details on the below page and it mentions that we've to enable the \"gradient_checkpointing\" flag in HF trainer to use this \r\n\r\n\"**HF Transformers models don’t know anything about DeepSpeed’s activation checkpointing,** so if you try to enable that feature in the DeepSpeed config file, nothing will happen.\" \r\n(https://huggingface.co/docs/transformers/main_classes/deepspeed)\r\n\r\nWhat code changes required I need to do to replace with the Deepspeed API or enable the model.gradient_checkpointing_enable() for distilbert\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"> Yes DistilBert does not support gradient checkpointing. DistilBERT is a small model, so that feature is not needed for it.\r\n\r\nI don't think this feature is redundant if we want to train it with extremely large batch size. To my knowledge, minilm, which has fewer parameters than distilbert, supports gradient checkpointing.",
"Yeah, I agree jordan. When I tried to compare some transformer models, I could not train DistilBert because of the large batch size while i could train bert/roberta."
] | 1,683 | 1,688 | 1,686 |
NONE
| null |
How do I enable the "gradient_checkpointing" flag for the DistilBert model? It works fine for the Bert model; I've followed the steps given on this page to enable it:
https://huggingface.co/docs/transformers/v4.18.0/en/performance
I've gone through the Hugging Face code of the respective classes and found that the feature is present only for the Bert model and not for DistilBert.
https://github.com/huggingface/transformers/blob/188a8bfcccc6b862fe7ccc2859d977c01dd98136/src/transformers/models/bert/modeling_bert.py#L593
https://github.com/huggingface/transformers/blob/188a8bfcccc6b862fe7ccc2859d977c01dd98136/src/transformers/models/distilbert/modeling_distilbert.py#L470
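As a stopgap (not something the library provides), one could wrap each DistilBERT transformer block with `torch.utils.checkpoint.checkpoint` manually. The sketch below assumes the module layout of the current `modeling_distilbert.py` (blocks under `model.transformer.layer`, called with `x`, `attn_mask`, `head_mask`, `output_attentions`); these names should be verified against your installed version:
```python
import torch
from torch.utils.checkpoint import checkpoint
from transformers import DistilBertModel


class CheckpointedBlock(torch.nn.Module):
    """Re-runs a DistilBERT TransformerBlock under activation checkpointing."""

    def __init__(self, block):
        super().__init__()
        self.block = block

    def forward(self, x, attn_mask=None, head_mask=None, output_attentions=False):
        # checkpoint() only forwards positional arguments, so pass them explicitly
        # in the same order as TransformerBlock.forward expects them.
        return checkpoint(self.block, x, attn_mask, head_mask, output_attentions)


model = DistilBertModel.from_pretrained("distilbert-base-uncased")
model.transformer.layer = torch.nn.ModuleList(
    CheckpointedBlock(block) for block in model.transformer.layer
)
model.train()  # checkpointing only saves memory (and recomputes) during training
```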
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23219/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23219/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23218
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23218/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23218/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23218/events
|
https://github.com/huggingface/transformers/issues/23218
| 1,700,832,543 |
I_kwDOCUB6oc5lYKUf
| 23,218 |
Model outputs are impacted by the aspect ratios of other images in a batch
|
{
"login": "rstebbing",
"id": 1795726,
"node_id": "MDQ6VXNlcjE3OTU3MjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/1795726?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rstebbing",
"html_url": "https://github.com/rstebbing",
"followers_url": "https://api.github.com/users/rstebbing/followers",
"following_url": "https://api.github.com/users/rstebbing/following{/other_user}",
"gists_url": "https://api.github.com/users/rstebbing/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rstebbing/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rstebbing/subscriptions",
"organizations_url": "https://api.github.com/users/rstebbing/orgs",
"repos_url": "https://api.github.com/users/rstebbing/repos",
"events_url": "https://api.github.com/users/rstebbing/events{/privacy}",
"received_events_url": "https://api.github.com/users/rstebbing/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false |
{
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
}
] |
[
"Hi @rstebbing, \r\n\r\nIndeed, this is a pretty tricky issue. You're understanding of the image processor and model matches mine :) \r\n\r\nIt seems that the effect of batch size is something the authors were aware of: https://github.com/facebookresearch/detr#evaluation, although they don't specify why e.g. the influence of layer norm. \r\n\r\ncc @rafaelpadilla Who has also been investing some of the influences of batch size on object detection metrics and came across the same issue. ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"I'm surprised to see this closed, but also appreciate the resolution isn't super straightforward."
] | 1,683 | 1,692 | 1,692 |
NONE
| null |
### System Info
- `transformers` version: 4.27.4
- Platform: macOS-13.3.1-arm64-arm-64bit
- Python version: 3.10.11
- Huggingface_hub version: 0.13.3
- PyTorch version (GPU?): 1.13.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
@amyeroberts @NielsRogge
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I have been experimenting with `DetrForObjectDetection` and discovered an issue where the model output for a given image depends on the aspect ratio of the other images in the batch.
A reproducible example is given below:
``` python
import io
import requests
import torch
from PIL import Image
from transformers import DetrForObjectDetection, DetrImageProcessor
def main():
    url = "http://images.cocodataset.org/val2017/000000039769.jpg"
    print(f"{url = }")
    with requests.Session() as session:
        image_bytes = session.get(url).content
    image = Image.open(io.BytesIO(image_bytes))
    print(f"{image.size = }")

    pretrained_model_name = "facebook/detr-resnet-50"
    print(f"{pretrained_model_name = }")
    image_processor = DetrImageProcessor.from_pretrained(pretrained_model_name)
    assert isinstance(image_processor, DetrImageProcessor)
    model = DetrForObjectDetection.from_pretrained(pretrained_model_name)
    assert isinstance(model, DetrForObjectDetection)

    for images_expr, images in [
        (
            "[image]",
            [image],
        ),
        (
            "[image, image]",
            [image, image],
        ),
        (
            "[image, image.resize((image.width, image.height * 2))]",
            [image, image.resize((image.width, image.height * 2))],
        ),
    ]:
        print(f"images = {images_expr}")
        inputs = image_processor(images=images, return_tensors="pt")
        assert sorted(inputs) == ["pixel_mask", "pixel_values"]
        pixel_mask, pixel_values = inputs["pixel_mask"], inputs["pixel_values"]
        print(f"  {pixel_mask.shape = }, {pixel_values.shape = }")
        with torch.no_grad():
            outputs = model(
                pixel_mask=pixel_mask,
                pixel_values=pixel_values,
            )
        print(f"  {outputs.encoder_last_hidden_state.shape = }")
        print(f"  {outputs.encoder_last_hidden_state[0, 0, :8] = }")


if __name__ == "__main__":
    main()
```
``` text
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image.size = (640, 480)
pretrained_model_name = 'facebook/detr-resnet-50'
images = [image]
pixel_mask.shape = torch.Size([1, 800, 1066]), pixel_values.shape = torch.Size([1, 3, 800, 1066])
outputs.encoder_last_hidden_state.shape = torch.Size([1, 850, 256])
outputs.encoder_last_hidden_state[0, 0, :8] = tensor([-0.0544, -0.0425, -0.0307, -0.0107, 0.0201, -0.1194, 0.0373, 0.0250])
images = [image, image]
pixel_mask.shape = torch.Size([2, 800, 1066]), pixel_values.shape = torch.Size([2, 3, 800, 1066])
outputs.encoder_last_hidden_state.shape = torch.Size([2, 850, 256])
outputs.encoder_last_hidden_state[0, 0, :8] = tensor([-0.0544, -0.0425, -0.0307, -0.0107, 0.0201, -0.1194, 0.0373, 0.0250])
images = [image, image.resize((image.width, image.height * 2))]
pixel_mask.shape = torch.Size([2, 1200, 1066]), pixel_values.shape = torch.Size([2, 3, 1200, 1066])
outputs.encoder_last_hidden_state.shape = torch.Size([2, 1292, 256])
outputs.encoder_last_hidden_state[0, 0, :8] = tensor([-0.0399, -0.0472, -0.0268, -0.0136, 0.0196, -0.1215, 0.0678, 0.0230])
```
The issue is the last line: the output of the last layer of the encoder is different for the first image in the batch.
Here is my understanding so far of how the issue arises:
- The `image_processor` resizes all images to be as large as possible, subject to the shortest edge being less than or equal to `800` and the longest edge being less than or equal to `1333`.
- To combine images of different aspect ratios in the same batch, images are padded with zeros at the bottom and right.
- The pixel values and pixel mask are forwarded through `DetrForObjectDetection` and all the way to the `DetrEncoder`, which then forwards _only_ the pixel values to the backbone (see [here](https://github.com/huggingface/transformers/blob/94056b57beb4499f4f74d5d88a41e8266cc01778/src/transformers/models/detr/modeling_detr.py#L372)).
- If an image is padded with zeros then it is OK to omit the pixel mask if zeros are preserved by the layers (e.g. a `Conv2D` layer). However, in this case, the backbone has batch normalization layers that add values too. The result of this is that the padding pixels get non-zero values which then influence downstream convolutions.
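To make the last point concrete, here is a small, self-contained illustration (this is not the model code; the batch-norm statistics are filled in arbitrarily to stand in for learned values such as those in DETR's frozen backbone):
```python
import torch

x = torch.zeros(1, 3, 8, 8)
x[:, :, :4, :] = torch.randn(1, 3, 4, 8)  # top half "image", bottom half zero padding

conv = torch.nn.Conv2d(3, 3, kernel_size=1, bias=False)
bn = torch.nn.BatchNorm2d(3).eval()  # frozen running stats, like a frozen backbone BN

with torch.no_grad():
    bn.running_mean.fill_(0.5)  # stand-ins for non-trivial learned statistics
    bn.bias.fill_(0.1)

    y = conv(x)
    print(y[:, :, 4:, :].abs().max())      # exactly 0: zero padding survives the conv
    print(bn(y)[:, :, 4:, :].abs().max())  # non-zero: BN shifts the padded region
```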
### Expected behavior
If two images are included in a single batch, the model output should be identical to as if the two images were evaluated in separate batches of size one.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23218/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23218/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23217
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23217/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23217/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23217/events
|
https://github.com/huggingface/transformers/pull/23217
| 1,700,799,307 |
PR_kwDOCUB6oc5QB3yn
| 23,217 |
Paged Optimizer + Lion Optimizer for Trainer
|
{
"login": "TimDettmers",
"id": 5260050,
"node_id": "MDQ6VXNlcjUyNjAwNTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/5260050?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TimDettmers",
"html_url": "https://github.com/TimDettmers",
"followers_url": "https://api.github.com/users/TimDettmers/followers",
"following_url": "https://api.github.com/users/TimDettmers/following{/other_user}",
"gists_url": "https://api.github.com/users/TimDettmers/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TimDettmers/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TimDettmers/subscriptions",
"organizations_url": "https://api.github.com/users/TimDettmers/orgs",
"repos_url": "https://api.github.com/users/TimDettmers/repos",
"events_url": "https://api.github.com/users/TimDettmers/events{/privacy}",
"received_events_url": "https://api.github.com/users/TimDettmers/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"This feature truely needed, does there any timeline on when will new release of bitesandbytes?"
] | 1,683 | 1,684 | 1,684 |
CONTRIBUTOR
| null |
# What does this PR do?
This PR introduces one new optimizer (Lion) and one new feature from bitsandbytes (paged optimizers) for the trainer `--optim` variable.
Paged optimizers are an idea that will be published in an upcoming paper where we fine-tune 65B models on a single GPU. Paged optimizers use as much GPU memory as is available, but if less GPU memory is available they automatically switch to a page-by-page transfer mode between the CPU and GPU and transfer just the optimizer states that are needed right now to perform the parameter updates. As such, they are similar to offloaded optimizers, but paged optimizers work well with as little as 2 MB of GPU memory and require no user interaction, no extra code, and are failsafe (you cannot do the allocation wrong). If more memory is available, they behave just like any other optimizer -- there is no difference in behavior or performance.
Paged optimizers are particularly useful for training with variable length mini-batches/sequences: if the model fits in the GPU RAM for most mini-batches and hits a mini-batch with very large context/sequence size, then the optimizer will be evicted to the CPU temporarily. Normal optimization resumes after the large mini-batch.
Since these transfers happen page-by-page and the entire system is automatic, the user does not need to do anything for memory benefits and performance considerations.
The only thing that is necessary to use paged optimizers is to pass the corresponding argument to the trainer: `--optim paged_adamw_32bit` and `--optim paged_lion_32bit` select the standard 32-bit AdamW and Lion optimizers in their paged variants.
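For illustration, a minimal usage sketch (this assumes the PR is merged and a bitsandbytes build with paged optimizers is installed; everything around the `TrainingArguments` is omitted):
```python
from transformers import TrainingArguments

# Select the paged optimizer through the usual `optim` training argument.
args = TrainingArguments(
    output_dir="out",
    per_device_train_batch_size=8,
    optim="paged_adamw_32bit",  # or "paged_lion_32bit"
)
print(args.optim)
# `args` is then passed to Trainer(model=..., args=args, train_dataset=...) as usual.
```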
More details on the algorithm. Paged optimizers work like this:
1. Optimizer states are allocated on the CPU and mapped to a certain GPU device.
2. When an `optimizer.step()` is performed bitsandbytes prefetches the GPU memory page-by-page from the CPU buffer, thus only needing 2MB of GPU memory to perform the optimizer update. If more memory is available, then the swapped-in pages will stay in memory until ...
3. If a new allocation would exceed the total GPU RAM capacity (for example, your GPU has 11 GB of RAM, 10.5 GB are already in use, and PyTorch allocates another 2 GB of tensors), then the GPU pages holding the optimizer states are evicted to the CPU. This happens automatically without user interaction, so an out-of-memory event is prevented.
## Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## About the implementation / Discussion
I added tests similar to those from 8-bit Adam. I refactored all bnb optimizers into one section of the trainer to reduce bloat.
Reviewers: @sgugger @younesbelkada
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23217/reactions",
"total_count": 9,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 9,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23217/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23217",
"html_url": "https://github.com/huggingface/transformers/pull/23217",
"diff_url": "https://github.com/huggingface/transformers/pull/23217.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23217.patch",
"merged_at": 1684925608000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23216
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23216/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23216/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23216/events
|
https://github.com/huggingface/transformers/pull/23216
| 1,700,723,079 |
PR_kwDOCUB6oc5QBnIl
| 23,216 |
docs: Fix broken link in 'How to add a model...'
|
{
"login": "connor-henderson",
"id": 78612354,
"node_id": "MDQ6VXNlcjc4NjEyMzU0",
"avatar_url": "https://avatars.githubusercontent.com/u/78612354?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/connor-henderson",
"html_url": "https://github.com/connor-henderson",
"followers_url": "https://api.github.com/users/connor-henderson/followers",
"following_url": "https://api.github.com/users/connor-henderson/following{/other_user}",
"gists_url": "https://api.github.com/users/connor-henderson/gists{/gist_id}",
"starred_url": "https://api.github.com/users/connor-henderson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/connor-henderson/subscriptions",
"organizations_url": "https://api.github.com/users/connor-henderson/orgs",
"repos_url": "https://api.github.com/users/connor-henderson/repos",
"events_url": "https://api.github.com/users/connor-henderson/events{/privacy}",
"received_events_url": "https://api.github.com/users/connor-henderson/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,683 | 1,683 | 1,683 |
CONTRIBUTOR
| null |
# What does this PR do?
See https://huggingface.co/docs/transformers/add_new_model#run-a-pretrained-checkpoint-using-the-original-repository:~:text=Get%20familiar%20with%20the%20original%20repository
No issue filed
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
@sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23216/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23216/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23216",
"html_url": "https://github.com/huggingface/transformers/pull/23216",
"diff_url": "https://github.com/huggingface/transformers/pull/23216.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23216.patch",
"merged_at": 1683572202000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23215
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23215/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23215/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23215/events
|
https://github.com/huggingface/transformers/issues/23215
| 1,700,720,280 |
I_kwDOCUB6oc5lXu6Y
| 23,215 |
transformers.set_seed seems to do nothing
|
{
"login": "mojejmenojehonza",
"id": 127604251,
"node_id": "U_kgDOB5sWGw",
"avatar_url": "https://avatars.githubusercontent.com/u/127604251?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mojejmenojehonza",
"html_url": "https://github.com/mojejmenojehonza",
"followers_url": "https://api.github.com/users/mojejmenojehonza/followers",
"following_url": "https://api.github.com/users/mojejmenojehonza/following{/other_user}",
"gists_url": "https://api.github.com/users/mojejmenojehonza/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mojejmenojehonza/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mojejmenojehonza/subscriptions",
"organizations_url": "https://api.github.com/users/mojejmenojehonza/orgs",
"repos_url": "https://api.github.com/users/mojejmenojehonza/repos",
"events_url": "https://api.github.com/users/mojejmenojehonza/events{/privacy}",
"received_events_url": "https://api.github.com/users/mojejmenojehonza/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"@mojejmenojehonza 👋\r\n\r\nTwo notes:\r\n1. You should pass `do_sample=True` in your generation config or in your `.generate()` call. Most models have it off by default, causing the generation to be deterministic (and ignoring parameters like `temperature`, `top_k`, etc).\r\n2. With `temperature=0.2`, the relative weight of the most likely logits is massively increased, making generation almost deterministic. Even if there are no bugs in your script, it's far from guaranteed that two different seeds produce different outputs with such low temperature :)",
"@gante\r\nThanks worked like a charm :)",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,683 | 1,686 | 1,686 |
NONE
| null |
### System Info
- `transformers` version: 4.28.0.dev0
- Platform: Linux-5.19.0-41-generic-x86_64-with-glibc2.35
- Python version: 3.10.10
- Huggingface_hub version: 0.13.3
- Safetensors version: not installed
- PyTorch version (GPU?): 2.0.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
@gante, @younesbelkada
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
1. install transformers with support for the Alpaca model
2. run this code with one seed
3. run this code with any other seed
4. see that the results are the same
```
from transformers import GenerationConfig, LlamaTokenizer, LlamaForCausalLM, set_seed
from torch import float16, compile, no_grad
set_seed(621)
# Enhances prompt
def enhance_prompt(prompt, input=None):
if input:
return f"""Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Input:
{input}
### Response:"""
else:
return f"""Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:"""
# Gets response from Alpaca
def get_response(prompt):
with no_grad():
outputs = alpaca.generate(input_ids=tokenizer(prompt, return_tensors="pt").input_ids.to("cuda"), generation_config=generation_config, return_dict_in_generate=True, output_scores=True)
outputs = tokenizer.decode(outputs.sequences[0], skip_special_tokens=True)
return outputs.split("### Response:")[1]
# Sets up Alpaca
tokenizer = LlamaTokenizer.from_pretrained("chainyo/alpaca-lora-7b")
alpaca = LlamaForCausalLM.from_pretrained("chainyo/alpaca-lora-7b", load_in_8bit=True, torch_dtype=float16, device_map="auto")
generation_config = GenerationConfig(temperature=0.2, top_p=0.75, top_k=40, num_beams=4, max_new_tokens=64)
alpaca.eval()
compile(alpaca)
# Gets output from Alpaca
prompt = enhance_prompt("Write a simple poem about flowers.")
out = get_response(prompt)
# Prints Alpaca's output
print(out)
```
### Expected behavior
The model should output two different answers, but currently it gives the same answer for every seed I try.
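For reference, as pointed out in the discussion, sampling has to be explicitly enabled for the seed to matter; a minimal adjustment to the reproduction script would be something like:
```
# do_sample=True is required so that temperature / top_k / top_p (and therefore the seed) take effect;
# with sampling disabled, generation is (near-)deterministic regardless of the seed.
generation_config = GenerationConfig(
    do_sample=True, temperature=0.8, top_p=0.95, top_k=50, num_beams=1, max_new_tokens=64
)
```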
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23215/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23215/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23214
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23214/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23214/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23214/events
|
https://github.com/huggingface/transformers/pull/23214
| 1,700,667,001 |
PR_kwDOCUB6oc5QBa59
| 23,214 |
Transformers Agents
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"picture = agent.run(\"Draw me a picture of rivers and lakes\")\r\n==Explanation from the agent==\r\nI will use the following tool: `image_segmenter` to generate a segmentation mask for the image.\r\n\r\n\r\n==Code generated by the agent==\r\nprompt = \"rivers and lakes\"\r\nmask = image_segmenter(image, prompt)\r\n\r\n\r\n==Result==\r\nEvaluation of the code stopped at line 1 before the end because of the following error:\r\nThe variable `image` is not defined.",
"Ah yes we did that example with openAI. Will fine-tune the prompt so that example works before the release, thanks for the pointer!"
] | 1,683 | 1,683 | 1,683 |
COLLABORATOR
| null |
# Introducing Transformers Agents
This PR adds a new API called Transformers Agents. Agents allow you to use Transformers with zero code experience, directly talking to Transformers or Diffusers via natural language. It is based on `Agent`s and `Tool`s. The agent is an LLM prompted to generate code using the tools, which are simple functions performing a single task.
Tools can live in Transformers or on the Hub; this PR introduces both. You can read more about this in the added documentation, but here is an example:
Define an agent using the starcoder model:
```py
from transformers import HfAgent
agent = HfAgent("https://api-inference.huggingface.co/models/bigcode/starcoder")
```
Use the command `run` to execute a given problem:
```py
agent.run("Draw me a picture of rivers and lakes")
```

Use the command `chat` to chat with the agent and execute instructions one after the other:
```py
agent.chat("Draw me a picture of rivers and lakes")
```

```py
agent.chat("Transform the picture so that there is a rock in there")
```

|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23214/reactions",
"total_count": 3,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 3,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23214/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23214",
"html_url": "https://github.com/huggingface/transformers/pull/23214",
"diff_url": "https://github.com/huggingface/transformers/pull/23214.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23214.patch",
"merged_at": 1683679077000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23213
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23213/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23213/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23213/events
|
https://github.com/huggingface/transformers/issues/23213
| 1,700,601,162 |
I_kwDOCUB6oc5lXR1K
| 23,213 |
Question about resume_from_checkpoint in run_translation_no_trainer.py
|
{
"login": "danielDigitalArt",
"id": 25100967,
"node_id": "MDQ6VXNlcjI1MTAwOTY3",
"avatar_url": "https://avatars.githubusercontent.com/u/25100967?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/danielDigitalArt",
"html_url": "https://github.com/danielDigitalArt",
"followers_url": "https://api.github.com/users/danielDigitalArt/followers",
"following_url": "https://api.github.com/users/danielDigitalArt/following{/other_user}",
"gists_url": "https://api.github.com/users/danielDigitalArt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/danielDigitalArt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/danielDigitalArt/subscriptions",
"organizations_url": "https://api.github.com/users/danielDigitalArt/orgs",
"repos_url": "https://api.github.com/users/danielDigitalArt/repos",
"events_url": "https://api.github.com/users/danielDigitalArt/events{/privacy}",
"received_events_url": "https://api.github.com/users/danielDigitalArt/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false |
{
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
}
] |
[
"cc @muellerzr ",
"@danielDigitalArt it's not needed unless we resume from a checkpoint, yes. If you would like to open a PR with your suggestions, that would be welcome as they do make sense to me as an inclusion. Very keen observation",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,683 | 1,691 | 1,691 |
NONE
| null |
### System Info
- `transformers` version: 4.29.0.dev0
- Platform: Windows-10-10.0.19045-SP0
- Python version: 3.10.6
- Huggingface_hub version: 0.14.1
- Safetensors version: not installed
- PyTorch version (GPU?): 1.12.1+cpu (False)
- Tensorflow version (GPU?): 2.10.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help?
@sgugger
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
-
### Expected behavior
Hi, I stumbled across this function while I was debugging my own code, which is a modified version of the no_trainer script.
While my code crashes with a CUDA error, I found this part interesting, as it would let me get to the evaluation process (where my error occurs) faster.
However, it seemed a bit odd as I was reading this function:
[https://github.com/huggingface/transformers/blob/main/examples/pytorch/translation/run_translation_no_trainer.py#LL594C4-L614C66](https://github.com/huggingface/transformers/blob/main/examples/pytorch/translation/run_translation_no_trainer.py#LL594C4-L614C66)
What seems a bit odd to me is that `resume_from_checkpoint` can be either None (the default) or a checkpoint name like "epoch_5" or "step_1000". The parameter definition says it should be of type string or default to None: [https://github.com/huggingface/transformers/blob/main/examples/pytorch/translation/run_translation_no_trainer.py#L266-L271](https://github.com/huggingface/transformers/blob/main/examples/pytorch/translation/run_translation_no_trainer.py#L266-L271)
The first check is whether the argument was set at all, in order to enter the resume block.
```
if args.resume_from_checkpoint:
```
From my understanding, the following values would work:
- "step_1234" or any other string
- "" (an empty string)
- True
Now the next check looks at whether it is not None or not empty:
```
if args.resume_from_checkpoint is not None or args.resume_from_checkpoint != "":
```
So everything in the if block will be executed in these instances:
- True / False
- "step_1000" or any other non-empty string
But if it is True, the later part cannot work, since it searches for either "epoch_*" or "step_*"; one of my test runs showed that when using True it searches for a folder named "True", which then fails. While it might be perfectly fine to set it to "", I also found some examples using the normal Trainer where resume was simply set to True, like so:
```trainer.train(resume_from_checkpoint = True)```
My suggestion here would be to first check if it is not None, then if it is not True:
```
if args.resume_from_checkpoint:
    if args.resume_from_checkpoint != "" and args.resume_from_checkpoint is not True:
        # only in this case is the value an explicit checkpoint folder
        checkpoint_path = args.resume_from_checkpoint
```
So if it is set to True or an empty string, it would instead search for a folder whose name starts with either "step_" or "epoch_" and use that folder.
Another difficulty I ran into in my project was that I wanted to save the checkpoints into a subfolder. Saving was not the problem, but loading here was, because the script only strips "epoch_" or "step_" to fetch the epoch or step number. In order to find the folders in the subfolder, I changed the check like so:
```
dirs = [os.path.join(args.checkpoint_dir, f.name) for f in os.scandir(f"{args.checkpoint_dir}") if f.is_dir()]
...
import re
# First the correct epoch is detected here, later the process will skip training until reaching the correct step.
if "epoch" in training_difference:
repl = re.search(r'epoch_(\d+)', training_difference).group()
starting_epoch = int(repl.replace("epoch_", "")) + 1
```
The same check would have to be done in the "step_" part.
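For illustration, that parallel change might look roughly like this (a sketch following the same pattern as above, reusing the variable names from the example script; not tested):
```
# mirror of the epoch_ handling above, using the same regex to cope with subfolder paths
elif "step" in training_difference:
    repl = re.search(r'step_(\d+)', training_difference).group()
    resume_step = int(repl.replace("step_", ""))
```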
Maybe `args.checkpoint_dir` could be omitted, so the user can specify the full path when setting the argument (True or "" would then no longer work).
Lastly, I have a question about this part:
```
accelerator.load_state(checkpoint)
```
This is only called in the block where `args.resume_from_checkpoint` is not empty. Is it not needed so that the accelerator always loads the appropriate checkpoint? In the example, `accelerator.load_state` would not be used when loading from the latest checkpoint that the script discovered by itself.
Edit:
fixed file links
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23213/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23213/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23212
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23212/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23212/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23212/events
|
https://github.com/huggingface/transformers/pull/23212
| 1,700,543,414 |
PR_kwDOCUB6oc5QBAej
| 23,212 |
Unit tests for hf_argparser
|
{
"login": "RobertBaruch",
"id": 1783950,
"node_id": "MDQ6VXNlcjE3ODM5NTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/1783950?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RobertBaruch",
"html_url": "https://github.com/RobertBaruch",
"followers_url": "https://api.github.com/users/RobertBaruch/followers",
"following_url": "https://api.github.com/users/RobertBaruch/following{/other_user}",
"gists_url": "https://api.github.com/users/RobertBaruch/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RobertBaruch/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RobertBaruch/subscriptions",
"organizations_url": "https://api.github.com/users/RobertBaruch/orgs",
"repos_url": "https://api.github.com/users/RobertBaruch/repos",
"events_url": "https://api.github.com/users/RobertBaruch/events{/privacy}",
"received_events_url": "https://api.github.com/users/RobertBaruch/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Oh, I didn't see that file. Will retract request.",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23212). All of your documentation changes will be reflected on that endpoint."
] | 1,683 | 1,683 | 1,683 |
CONTRIBUTOR
| null |
# What does this PR do?
Adds unit tests for hf_argparser.py
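To give a flavour of the behaviour such tests typically cover (a sketch, not taken from this PR's diff):
```python
from dataclasses import dataclass, field

from transformers import HfArgumentParser

@dataclass
class DummyArgs:
    learning_rate: float = field(default=1e-4)

parser = HfArgumentParser(DummyArgs)
(parsed,) = parser.parse_args_into_dataclasses(["--learning_rate", "3e-5"])
assert parsed.learning_rate == 3e-5  # CLI strings are coerced to the annotated types
```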
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23212/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23212/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23212",
"html_url": "https://github.com/huggingface/transformers/pull/23212",
"diff_url": "https://github.com/huggingface/transformers/pull/23212.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23212.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/23211
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23211/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23211/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23211/events
|
https://github.com/huggingface/transformers/pull/23211
| 1,700,512,478 |
PR_kwDOCUB6oc5QA5tI
| 23,211 |
Fix remote tool
|
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[] | 1,683 | 1,683 | 1,683 |
MEMBER
| null |
# What does this PR do?
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23211/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23211/timeline
| null | true |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23211",
"html_url": "https://github.com/huggingface/transformers/pull/23211",
"diff_url": "https://github.com/huggingface/transformers/pull/23211.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23211.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/23210
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23210/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23210/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23210/events
|
https://github.com/huggingface/transformers/issues/23210
| 1,700,503,464 |
I_kwDOCUB6oc5lW5-o
| 23,210 |
Help with using gpt-neo models correctly
|
{
"login": "buttercutter",
"id": 3324659,
"node_id": "MDQ6VXNlcjMzMjQ2NTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3324659?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/buttercutter",
"html_url": "https://github.com/buttercutter",
"followers_url": "https://api.github.com/users/buttercutter/followers",
"following_url": "https://api.github.com/users/buttercutter/following{/other_user}",
"gists_url": "https://api.github.com/users/buttercutter/gists{/gist_id}",
"starred_url": "https://api.github.com/users/buttercutter/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/buttercutter/subscriptions",
"organizations_url": "https://api.github.com/users/buttercutter/orgs",
"repos_url": "https://api.github.com/users/buttercutter/repos",
"events_url": "https://api.github.com/users/buttercutter/events{/privacy}",
"received_events_url": "https://api.github.com/users/buttercutter/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Why `remove_columns=column_names` which will end up feeding nothing to the [nlp model training process](https://github.com/huggingface/transformers/blob/3335724376319a0c453049d0cd883504f530ff52/examples/research_projects/jax-projects/model_parallel/run_clm_mp.py#L356) ?\r\n\r\nFeel free to correct me if I miss anything or wrong.\r\n\r\n\r\n\r\nEdit: It seems that `remove_columns` is to remove tokenizer inputs away from the tokenized outputs.",
"```python\r\ninput_ids[0] = [ 82 6442 25 ... 50256 50256 50256]\r\n 0%| | 0[/104](https://vscode-remote+ssh-002dremote-002b35-002e238-002e156-002e174.vscode-resource.vscode-cdn.net/104) [00:01<?, ?it[/s](https://vscode-remote+ssh-002dremote-002b35-002e238-002e156-002e174.vscode-resource.vscode-cdn.net/s)]\r\n╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮\r\n│ in <module>:94 │\r\n│ │\r\n│ 91 │ │\r\n│ 92 │ # Generate the answer │\r\n│ 93 │ #Changing temperature, top_k and top_p does not seem to change the outcome │\r\n│ ❱ 94 │ outputs = model.generate( │\r\n│ 95 │ │ input_ids = eval_tokenized_dataset[\"input_ids\"][index][None, :], │\r\n│ 96 │ │ max_new_tokens=generated_max_length, │\r\n│ 97 │ │ pad_token_id = model.config.eos_token_id, │\r\n│ │\r\n│ /home/moe/.local/lib/python3.11/site-packages/transformers/generation/flax_utils.py:429 in │\r\n│ generate │\r\n│ │\r\n│ 426 │ │ │ ) │\r\n│ 427 │ │ elif generation_config.do_sample and generation_config.num_beams == 1: │\r\n│ 428 │ │ │ logits_warper = self._get_logits_warper(generation_config=generation_config) │\r\n│ ❱ 429 │ │ │ return self._sample( │\r\n│ 430 │ │ │ │ input_ids, │\r\n│ 431 │ │ │ │ generation_config.max_length, │\r\n│ 432 │ │ │ │ generation_config.pad_token_id, │\r\n│ │\r\n│ /home/moe/.local/lib/python3.11/site-packages/transformers/generation/flax_utils.py:682 in │\r\n│ _sample │\r\n│ │\r\n│ 679 │ │ model = self.decode if self.config.is_encoder_decoder else self │\r\n│ 680 │ │ │\r\n│ 681 │ │ # initialize model specific kwargs │\r\n│ ❱ 682 │ │ model_kwargs = self.prepare_inputs_for_generation(input_ids, max_length, **model │\r\n│ 683 │ │ │\r\n│ 684 │ │ # initialize state │\r\n│ 685 │ │ state = SampleState( │\r\n│ │\r\n│ /home/moe/.local/lib/python3.11/site-packages/transformers/models/gpt_neo/modeling_flax_gpt_neo. │\r\n│ py:661 in prepare_inputs_for_generation │\r\n│ │\r\n│ 658 │ │ # initializing the cache │\r\n│ 659 │ │ batch_size, seq_length = input_ids.shape │\r\n│ 660 │ │ │\r\n│ ❱ 661 │ │ past_key_values = self.init_cache(batch_size, max_length) │\r\n│ 662 │ │ # Note that usually one would have to put 0's in the attention_mask for x > inpu │\r\n│ 663 │ │ # But since GPTNeo uses a causal mask, those positions are masked anyways. │\r\n│ 664 │ │ # Thus we can create a single static attention_mask here, which is more efficien │\r\n│ │\r\n│ /home/moe/.local/lib/python3.11/site-packages/transformers/models/gpt_neo/modeling_flax_gpt_neo. 
│\r\n│ py:396 in init_cache │\r\n│ │\r\n│ 393 │ │ attention_mask = jnp.ones_like(input_ids) │\r\n│ 394 │ │ position_ids = jnp.broadcast_to(jnp.arange(jnp.atleast_2d(input_ids).shape[-1]), │\r\n│ 395 │ │ │\r\n│ ❱ 396 │ │ init_variables = self.module.init( │\r\n│ 397 │ │ │ jax.random.PRNGKey(0), input_ids, attention_mask, position_ids, return_dict= │\r\n│ 398 │ │ ) │\r\n│ 399 │ │ return unfreeze(init_variables[\"cache\"]) │\r\n│ │\r\n│ /home/moe/.local/lib/python3.11/site-packages/jax/_src/traceback_util.py:166 in │\r\n│ reraise_with_filtered_traceback │\r\n│ │\r\n│ 163 def reraise_with_filtered_traceback(*args, **kwargs): │\r\n│ 164 │ __tracebackhide__ = True │\r\n│ 165 │ try: │\r\n│ ❱ 166 │ return fun(*args, **kwargs) │\r\n│ 167 │ except Exception as e: │\r\n│ 168 │ mode = _filtering_mode() │\r\n│ 169 │ if _is_under_reraiser(e) or mode == \"off\": │\r\n│ │\r\n│ /home/moe/.local/lib/python3.11/site-packages/flax/linen/module.py:1640 in init │\r\n│ │\r\n│ 1637 │ \"\"\" │\r\n│ 1638 │ Module._module_checks(self) │\r\n│ 1639 │ │\r\n│ ❱ 1640 │ _, v_out = self.init_with_output( │\r\n│ 1641 │ │ rngs, │\r\n│ 1642 │ │ *args, │\r\n│ 1643 │ │ method=method, │\r\n│ │\r\n│ /home/moe/.local/lib/python3.11/site-packages/jax/_src/traceback_util.py:166 in │\r\n│ reraise_with_filtered_traceback │\r\n│ │\r\n│ 163 def reraise_with_filtered_traceback(*args, **kwargs): │\r\n│ 164 │ __tracebackhide__ = True │\r\n│ 165 │ try: │\r\n│ ❱ 166 │ return fun(*args, **kwargs) │\r\n│ 167 │ except Exception as e: │\r\n│ 168 │ mode = _filtering_mode() │\r\n│ 169 │ if _is_under_reraiser(e) or mode == \"off\": │\r\n│ │\r\n│ /home/moe/.local/lib/python3.11/site-packages/flax/linen/module.py:1545 in init_with_output │\r\n│ │\r\n│ 1542 │ elif method is None: │\r\n│ 1543 │ method = self.__call__ │\r\n│ 1544 │ method = _get_unbound_fn(method) │\r\n│ ❱ 1545 │ return init_with_output( │\r\n│ 1546 │ │ method, │\r\n│ 1547 │ │ self, │\r\n│ 1548 │ │ mutable=mutable, │\r\n│ │\r\n│ /home/moe/.local/lib/python3.11/site-packages/flax/core/scope.py:965 in wrapper │\r\n│ │\r\n│ 962 │ if not isinstance(rngs, dict): │\r\n│ 963 │ rngs = {'params': rngs} │\r\n│ 964 │ init_flags = {**(flags if flags is not None else {}), 'initializing': True} │\r\n│ ❱ 965 │ return apply(fn, mutable=mutable, flags=init_flags)({}, *args, rngs=rngs, │\r\n│ 966 │ │ │ │ │ │ │ │ │ │ │ │ │ │ **kwargs) │\r\n│ 967 │\r\n│ 968 return wrapper │\r\n│ │\r\n│ /home/moe/.local/lib/python3.11/site-packages/flax/core/scope.py:933 in wrapper │\r\n│ │\r\n│ 930 │ │\r\n│ 931 │ with bind(variables, rngs=rngs, mutable=mutable, │\r\n│ 932 │ │ │ flags=flags).temporary() as root: │\r\n│ ❱ 933 │ y = fn(root, *args, **kwargs) │\r\n│ 934 │ if mutable is not False: │\r\n│ 935 │ return y, root.mutable_variables() │\r\n│ 936 │ else: │\r\n│ │\r\n│ /home/moe/.local/lib/python3.11/site-packages/flax/linen/module.py:2121 in scope_fn │\r\n│ │\r\n│ 2118 def scope_fn(scope, *args, **kwargs): │\r\n│ 2119 │ _context.capture_stack.append(capture_intermediates) │\r\n│ 2120 │ try: │\r\n│ ❱ 2121 │ return fn(module.clone(parent=scope), *args, **kwargs) │\r\n│ 2122 │ finally: │\r\n│ 2123 │ _context.capture_stack.pop() │\r\n│ 2124 │\r\n│ │\r\n│ /home/moe/.local/lib/python3.11/site-packages/flax/linen/module.py:432 in wrapped_module_method │\r\n│ │\r\n│ 429 │ # otherwise call the wrapped function as is. 
│\r\n│ 430 │ if args and isinstance(args[0], Module): │\r\n│ 431 │ self, args = args[0], args[1:] │\r\n│ ❱ 432 │ return self._call_wrapped_method(fun, args, kwargs) │\r\n│ 433 │ else: │\r\n│ 434 │ return fun(*args, **kwargs) │\r\n│ 435 wrapped_module_method.method_handler_wrapped = True # type: ignore[attr-defined] │\r\n│ │\r\n│ /home/moe/.local/lib/python3.11/site-packages/flax/linen/module.py:864 in _call_wrapped_method │\r\n│ │\r\n│ 861 │ # call method │\r\n│ 862 │ if _use_named_call: │\r\n│ 863 │ │ with jax.named_scope(_derive_profiling_name(self, fun)): │\r\n│ ❱ 864 │ │ y = fun(self, *args, **kwargs) │\r\n│ 865 │ else: │\r\n│ 866 │ │ y = fun(self, *args, **kwargs) │\r\n│ 867 │\r\n│ │\r\n│ /home/moe/.local/lib/python3.11/site-packages/transformers/models/gpt_neo/modeling_flax_gpt_neo. │\r\n│ py:622 in __call__ │\r\n│ │\r\n│ 619 │ │ output_hidden_states: bool = False, │\r\n│ 620 │ │ return_dict: bool = True, │\r\n│ 621 │ ): │\r\n│ ❱ 622 │ │ outputs = self.transformer( │\r\n│ 623 │ │ │ input_ids, │\r\n│ 624 │ │ │ attention_mask, │\r\n│ 625 │ │ │ position_ids, │\r\n│ │\r\n│ /home/moe/.local/lib/python3.11/site-packages/flax/linen/module.py:432 in wrapped_module_method │\r\n│ │\r\n│ 429 │ # otherwise call the wrapped function as is. │\r\n│ 430 │ if args and isinstance(args[0], Module): │\r\n│ 431 │ self, args = args[0], args[1:] │\r\n│ ❱ 432 │ return self._call_wrapped_method(fun, args, kwargs) │\r\n│ 433 │ else: │\r\n│ 434 │ return fun(*args, **kwargs) │\r\n│ 435 wrapped_module_method.method_handler_wrapped = True # type: ignore[attr-defined] │\r\n│ │\r\n│ /home/moe/.local/lib/python3.11/site-packages/flax/linen/module.py:864 in _call_wrapped_method │\r\n│ │\r\n│ 861 │ # call method │\r\n│ 862 │ if _use_named_call: │\r\n│ 863 │ │ with jax.named_scope(_derive_profiling_name(self, fun)): │\r\n│ ❱ 864 │ │ y = fun(self, *args, **kwargs) │\r\n│ 865 │ else: │\r\n│ 866 │ │ y = fun(self, *args, **kwargs) │\r\n│ 867 │\r\n│ │\r\n│ /home/moe/.local/lib/python3.11/site-packages/transformers/models/gpt_neo/modeling_flax_gpt_neo. │\r\n│ py:555 in __call__ │\r\n│ │\r\n│ 552 │ │ hidden_states = input_embeds + position_embeds │\r\n│ 553 │ │ hidden_states = self.dropout(hidden_states, deterministic=deterministic) │\r\n│ 554 │ │ │\r\n│ ❱ 555 │ │ outputs = self.h( │\r\n│ 556 │ │ │ hidden_states, │\r\n│ 557 │ │ │ attention_mask, │\r\n│ 558 │ │ │ deterministic=deterministic, │\r\n│ │\r\n│ /home/moe/.local/lib/python3.11/site-packages/flax/linen/module.py:432 in wrapped_module_method │\r\n│ │\r\n│ 429 │ # otherwise call the wrapped function as is. │\r\n│ 430 │ if args and isinstance(args[0], Module): │\r\n│ 431 │ self, args = args[0], args[1:] │\r\n│ ❱ 432 │ return self._call_wrapped_method(fun, args, kwargs) │\r\n│ 433 │ else: │\r\n│ 434 │ return fun(*args, **kwargs) │\r\n│ 435 wrapped_module_method.method_handler_wrapped = True # type: ignore[attr-defined] │\r\n│ │\r\n│ /home/moe/.local/lib/python3.11/site-packages/flax/linen/module.py:864 in _call_wrapped_method │\r\n│ │\r\n│ 861 │ # call method │\r\n│ 862 │ if _use_named_call: │\r\n│ 863 │ │ with jax.named_scope(_derive_profiling_name(self, fun)): │\r\n│ ❱ 864 │ │ y = fun(self, *args, **kwargs) │\r\n│ 865 │ else: │\r\n│ 866 │ │ y = fun(self, *args, **kwargs) │\r\n│ 867 │\r\n│ │\r\n│ /home/moe/.local/lib/python3.11/site-packages/transformers/models/gpt_neo/modeling_flax_gpt_neo. 
│\r\n│ py:499 in __call__ │\r\n│ │\r\n│ 496 │ │ │ if output_hidden_states: │\r\n│ 497 │ │ │ │ all_hidden_states += (hidden_states,) │\r\n│ 498 │ │ │ │\r\n│ ❱ 499 │ │ │ layer_outputs = block( │\r\n│ 500 │ │ │ │ hidden_states, │\r\n│ 501 │ │ │ │ attention_mask, │\r\n│ 502 │ │ │ │ deterministic=deterministic, │\r\n│ │\r\n│ /home/moe/.local/lib/python3.11/site-packages/flax/linen/module.py:432 in wrapped_module_method │\r\n│ │\r\n│ 429 │ # otherwise call the wrapped function as is. │\r\n│ 430 │ if args and isinstance(args[0], Module): │\r\n│ 431 │ self, args = args[0], args[1:] │\r\n│ ❱ 432 │ return self._call_wrapped_method(fun, args, kwargs) │\r\n│ 433 │ else: │\r\n│ 434 │ return fun(*args, **kwargs) │\r\n│ 435 wrapped_module_method.method_handler_wrapped = True # type: ignore[attr-defined] │\r\n│ │\r\n│ /home/moe/.local/lib/python3.11/site-packages/flax/linen/module.py:864 in _call_wrapped_method │\r\n│ │\r\n│ 861 │ # call method │\r\n│ 862 │ if _use_named_call: │\r\n│ 863 │ │ with jax.named_scope(_derive_profiling_name(self, fun)): │\r\n│ ❱ 864 │ │ y = fun(self, *args, **kwargs) │\r\n│ 865 │ else: │\r\n│ 866 │ │ y = fun(self, *args, **kwargs) │\r\n│ 867 │\r\n│ │\r\n│ /home/moe/.local/lib/python3.11/site-packages/transformers/models/gpt_neo/modeling_flax_gpt_neo. │\r\n│ py:320 in __call__ │\r\n│ │\r\n│ 317 │ ): │\r\n│ 318 │ │ residual = hidden_states │\r\n│ 319 │ │ hidden_states = self.ln_1(hidden_states) │\r\n│ ❱ 320 │ │ outputs = self.attn( │\r\n│ 321 │ │ │ hidden_states, │\r\n│ 322 │ │ │ attention_mask=attention_mask, │\r\n│ 323 │ │ │ deterministic=deterministic, │\r\n│ │\r\n│ /home/moe/.local/lib/python3.11/site-packages/flax/linen/module.py:432 in wrapped_module_method │\r\n│ │\r\n│ 429 │ # otherwise call the wrapped function as is. │\r\n│ 430 │ if args and isinstance(args[0], Module): │\r\n│ 431 │ self, args = args[0], args[1:] │\r\n│ ❱ 432 │ return self._call_wrapped_method(fun, args, kwargs) │\r\n│ 433 │ else: │\r\n│ 434 │ return fun(*args, **kwargs) │\r\n│ 435 wrapped_module_method.method_handler_wrapped = True # type: ignore[attr-defined] │\r\n│ │\r\n│ /home/moe/.local/lib/python3.11/site-packages/flax/linen/module.py:864 in _call_wrapped_method │\r\n│ │\r\n│ 861 │ # call method │\r\n│ 862 │ if _use_named_call: │\r\n│ 863 │ │ with jax.named_scope(_derive_profiling_name(self, fun)): │\r\n│ ❱ 864 │ │ y = fun(self, *args, **kwargs) │\r\n│ 865 │ else: │\r\n│ 866 │ │ y = fun(self, *args, **kwargs) │\r\n│ 867 │\r\n│ │\r\n│ /home/moe/.local/lib/python3.11/site-packages/transformers/models/gpt_neo/modeling_flax_gpt_neo. │\r\n│ py:266 in __call__ │\r\n│ │\r\n│ 263 │ │ init_cache: bool = False, │\r\n│ 264 │ │ output_attentions: bool = False, │\r\n│ 265 │ ): │\r\n│ ❱ 266 │ │ return self.attention( │\r\n│ 267 │ │ │ hidden_states, │\r\n│ 268 │ │ │ attention_mask=attention_mask, │\r\n│ 269 │ │ │ deterministic=deterministic, │\r\n│ │\r\n│ /home/moe/.local/lib/python3.11/site-packages/flax/linen/module.py:432 in wrapped_module_method │\r\n│ │\r\n│ 429 │ # otherwise call the wrapped function as is. 
│\r\n│ 430 │ if args and isinstance(args[0], Module): │\r\n│ 431 │ self, args = args[0], args[1:] │\r\n│ ❱ 432 │ return self._call_wrapped_method(fun, args, kwargs) │\r\n│ 433 │ else: │\r\n│ 434 │ return fun(*args, **kwargs) │\r\n│ 435 wrapped_module_method.method_handler_wrapped = True # type: ignore[attr-defined] │\r\n│ │\r\n│ /home/moe/.local/lib/python3.11/site-packages/flax/linen/module.py:864 in _call_wrapped_method │\r\n│ │\r\n│ 861 │ # call method │\r\n│ 862 │ if _use_named_call: │\r\n│ 863 │ │ with jax.named_scope(_derive_profiling_name(self, fun)): │\r\n│ ❱ 864 │ │ y = fun(self, *args, **kwargs) │\r\n│ 865 │ else: │\r\n│ 866 │ │ y = fun(self, *args, **kwargs) │\r\n│ 867 │\r\n│ │\r\n│ /home/moe/.local/lib/python3.11/site-packages/transformers/models/gpt_neo/modeling_flax_gpt_neo. │\r\n│ py:209 in __call__ │\r\n│ │\r\n│ 206 │ │ batch_size = hidden_states.shape[0] │\r\n│ 207 │ │ causal_mask = jnp.broadcast_to(causal_mask, (batch_size,) + causal_mask.shape[1: │\r\n│ 208 │ │ │\r\n│ ❱ 209 │ │ attention_mask = jnp.broadcast_to(jnp.expand_dims(attention_mask, axis=(-3, -2)) │\r\n│ 210 │ │ attention_mask = combine_masks(attention_mask, causal_mask) │\r\n│ 211 │ │ │\r\n│ 212 │ │ dropout_rng = None │\r\n│ │\r\n│ /home/moe/.local/lib/python3.11/site-packages/jax/_src/numpy/lax_numpy.py:1117 in broadcast_to │\r\n│ │\r\n│ 1114 The JAX version does not necessarily return a view of the input. │\r\n│ 1115 \"\"\") │\r\n│ 1116 def broadcast_to(array: ArrayLike, shape: Shape) -> Array: │\r\n│ ❱ 1117 return util._broadcast_to(array, shape) │\r\n│ 1118 │\r\n│ 1119 │\r\n│ 1120 def _split(op: str, ary: ArrayLike, indices_or_sections: Union[int, ArrayLike], │\r\n│ │\r\n│ /home/moe/.local/lib/python3.11/site-packages/jax/_src/numpy/util.py:418 in _broadcast_to │\r\n│ │\r\n│ 415 │ │ │ │ │ for arr_d, shape_d in safe_zip(arr_shape, shape_tail)) │\r\n│ 416 │ if nlead < 0 or not compatible: │\r\n│ 417 │ msg = \"Incompatible shapes for broadcasting: {} and requested shape {}\" │\r\n│ ❱ 418 │ raise ValueError(msg.format(arr_shape, shape)) │\r\n│ 419 │ diff, = np.where(tuple(not core.symbolic_equal_dim(arr_d, shape_d) │\r\n│ 420 │ │ │ │ │ │ for arr_d, shape_d in safe_zip(arr_shape, shape_tail))) │\r\n│ 421 │ new_dims = tuple(range(nlead)) + tuple(nlead + diff) │\r\n╰──────────────────────────────────────────────────────────────────────────────────────────────────╯\r\nValueError: Incompatible shapes for broadcasting: (1, 1, 1, 2296) and requested shape (1, 1, 2048, 2048)\r\n```\r\n\r\nIf I use the following code with \r\n[eval_arc_test_dataset_solve_prefix.csv](https://github.com/huggingface/transformers/files/11471190/eval_arc_test_dataset_solve_prefix.csv) and `expected_length = config.max_position_embeddings` and `generated_max_length = len(prompt) + len(correct_answer)` after the [embedding resizing/sharding code](https://github.com/huggingface/transformers/tree/3335724376319a0c453049d0cd883504f530ff52/examples/research_projects/jax-projects/model_parallel#model-parallel-language-model-training-example) , I have the above error.\r\n\r\n```python\r\nimport pandas as pd\r\nfrom tqdm import tqdm\r\nimport jax\r\nimport numpy as np\r\nfrom transformers import FlaxGPTNeoForCausalLM, AutoTokenizer, AutoConfig\r\n\r\nmodel_name = './gpt-neo-125M'\r\nmodel = FlaxGPTNeoForCausalLM.from_pretrained(model_name)\r\nconfig = AutoConfig.from_pretrained(model_name)\r\ntokenizer = AutoTokenizer.from_pretrained(\"gpt2\", use_fast=True)\r\ntokenizer.pad_token = tokenizer.eos_token\r\n\r\n# Function to calculate character 
accuracy\r\ndef character_accuracy(predicted, correct):\r\n matching_chars = sum(c1 == c2 for c1, c2 in zip(predicted, correct))\r\n return matching_chars / max(len(predicted), len(correct))\r\n\r\n# Initialize counters and lists for results\r\ntotal_correct = 0\r\ntotal_rows = len(df)\r\ncorrect = []\r\nchar_accuracy = []\r\npredictions = []\r\n\r\n# Read the CSV file\r\ndf = pd.read_csv(\"./eval_arc_test_dataset_solve_prefix.csv\")\r\n\r\ndef tokenize_function(examples):\r\n # strip leading and trailing spaces from examples[\"correct_answer\"]\r\n #print(\"examples[\\\"correct_answer\\\"] = \", examples[\"correct_answer\"])\r\n examples[\"correct_answer\"] = [x.strip() for x in examples[\"correct_answer\"]]\r\n \r\n #empty resultant list\r\n prompt_correct_answer = [] \r\n \r\n #choose the smaller list to iterate\r\n small_list = len(examples[\"prompt\"]) < len(examples[\"correct_answer\"]) and examples[\"prompt\"] or examples[\"correct_answer\"]\r\n prompt_correct_answer = [examples[\"prompt\"][i]+examples[\"correct_answer\"][i] for i in range(len(small_list))] \r\n\r\n expected_length = config.max_position_embeddings #data_args.block_size\r\n\r\n #tokenized_prompt = tokenizer(prompt_correct_answer, padding=\"longest\", truncation=True, max_length=None)\r\n tokenized_prompt = tokenizer(prompt_correct_answer, padding=\"max_length\", truncation=True, max_length=expected_length)\r\n\r\n # Force the length of the input_ids and attention_mask to match the expected length\r\n tokenized_prompt[\"input_ids\"] = [seq[:expected_length] + [tokenizer.pad_token_id] * (expected_length - len(seq)) for seq in tokenized_prompt[\"input_ids\"]]\r\n tokenized_prompt[\"attention_mask\"] = [mask[:expected_length] + [0] * (expected_length - len(mask)) for mask in tokenized_prompt[\"attention_mask\"]]\r\n\r\n # Convert tokenized sequences to arrays of integers\r\n input_ids = np.array(tokenized_prompt[\"input_ids\"], dtype=np.int32)\r\n attention_mask = np.array(tokenized_prompt[\"attention_mask\"], dtype=np.int32)\r\n\r\n print(\"input_ids[0] = \", input_ids[0])\r\n\r\n return {\"input_ids\": input_ids, \"attention_mask\": attention_mask, \"labels\": input_ids}\r\n\r\n'''\r\neval_tokenized_dataset = df.map(\r\n tokenize_function,\r\n batched=True,\r\n num_proc=2,\r\n remove_columns=['prompt', 'correct_answer'],\r\n load_from_cache_file=False,\r\n)\r\n'''\r\n\r\ndef process_dataframe(df):\r\n # Convert the DataFrame to a dictionary\r\n examples = df.to_dict(orient='list')\r\n\r\n # Call the tokenize_function with the examples dictionary\r\n return tokenize_function(examples)\r\n\r\n# Process the DataFrame using the process_dataframe function\r\neval_tokenized_dataset = process_dataframe(df)\r\n\r\n# Iterate over rows in the DataFrame\r\nfor index, row in tqdm(df.iterrows(), total=total_rows):\r\n prompt = row['prompt']\r\n correct_answer = row['correct_answer']\r\n\r\n generated_max_length = len(prompt) + len(correct_answer)\r\n #print(\"generated_max_length = \", generated_max_length)\r\n\r\n #Changing the seed and thus the prng_key value below, does seem to change the outcome.\r\n seed = 1000\r\n model.seed = seed\r\n\r\n #inputs = tokenizer(prompt, return_tensors=\"np\")\r\n\r\n # Generate the answer\r\n #Changing temperature, top_k and top_p does not seem to change the outcome\r\n outputs = model.generate(\r\n input_ids = eval_tokenized_dataset[\"input_ids\"][index][None, :], \r\n max_new_tokens=generated_max_length, \r\n pad_token_id = model.config.eos_token_id, \r\n prng_key=jax.random.PRNGKey(seed),\r\n 
temperature=0.8,\r\n early_stopping=True,\r\n top_k=50,\r\n top_p=0.95,\r\n do_sample=True,\r\n no_repeat_ngram_size=2)\r\n\r\n output_sequence = outputs['sequences'].squeeze(0)\r\n generated_answer = tokenizer.decode(output_sequence, clean_up_tokenization_spaces=True)\r\n\r\n predictions.append(generated_answer)\r\n\r\n # Calculate correctness and character accuracy\r\n is_correct = int(generated_answer == correct_answer)\r\n char_acc = character_accuracy(generated_answer, correct_answer)\r\n\r\n # Update counters and lists\r\n total_correct += is_correct\r\n correct.append(is_correct)\r\n char_accuracy.append(char_acc)\r\n\r\n# Add the new columns to the DataFrame\r\ndf['predictions'] = predictions\r\ndf['correct'] = correct\r\ndf['character_accuracy'] = char_accuracy\r\n\r\n# Calculate and print the statistics\r\npercentage_correct = total_correct / total_rows * 100\r\navg_char_accuracy = sum(char_accuracy) / total_rows * 100\r\n\r\nprint(f\"Total correct answers: {total_correct}\")\r\nprint(f\"Percentage correct: {percentage_correct:.2f}%\")\r\nprint(f\"avg_char_accuracy: {avg_char_accuracy:.2f}%\")\r\n\r\n# Save the updated DataFrame to a new CSV file\r\ndf.to_csv(\"eval_arc_test_dataset_solve_prefix.csv\", index=False)\r\n```",
"Hey! Thanks for opening an issue. \r\nFor more help on how to use the model, I would recommend you to ask and check out [forum](https://discuss.huggingface.co/). \r\nHowever, if you want help for this particular problem, we would need a *minimal* reproducing script where you remove everything that is not necessary to reproduce the particular bug! That would help us a lot",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,683 | 1,687 | 1,687 |
NONE
| null |
### System Info
TPU v2 (I am not using the `run_clm_mp.py`, so I do not really need TPU v3-8)
- `transformers` version: 4.29.0.dev0
- Platform: Linux-5.13.0-1027-gcp-x86_64-with-glibc2.31
- Python version: 3.11.3
- Huggingface_hub version: 0.14.1
- Safetensors version: not installed
- PyTorch version (GPU?): not installed (NA)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): 0.6.9 (tpu)
- Jax version: 0.4.8
- JaxLib version: 0.4.7
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: NA
### Who can help?
@ArthurZucker and @younesbelkada
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
Please just use https://gist.github.com/buttercutter/df275bc0f26180cb0f77479482855b83/27ded5e8d0b416664bc2f396886510b857050679
### Expected behavior
There should not be any illegal characters like `)]` in the model output
Please advise on how the numerical values for `emb.at[:50257, :]` and `vocab_size=50264` are being derived.
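A plausible derivation (an assumption, not something stated in the gist): 50257 is the GPT-2 / GPT-Neo tokenizer vocabulary size, and 50264 is simply that value rounded up to the next multiple of 8 so the resized embedding matrix shards evenly; `emb.at[:50257, :]` then writes the original rows into the first 50257 slots of the padded matrix.
```python
import math

vocab_size = 50257                      # GPT-2 / GPT-Neo tokenizer vocabulary size
padded = math.ceil(vocab_size / 8) * 8  # round up to a multiple of 8 for even sharding
print(padded)                           # 50264
```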
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23210/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23210/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23209
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23209/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23209/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23209/events
|
https://github.com/huggingface/transformers/pull/23209
| 1,700,499,938 |
PR_kwDOCUB6oc5QA29w
| 23,209 |
Test composition remote tool
|
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[] | 1,683 | 1,683 | 1,683 |
MEMBER
| null |
# What does this PR do?
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23209/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23209/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23209",
"html_url": "https://github.com/huggingface/transformers/pull/23209",
"diff_url": "https://github.com/huggingface/transformers/pull/23209.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23209.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/23208
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23208/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23208/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23208/events
|
https://github.com/huggingface/transformers/pull/23208
| 1,700,410,433 |
PR_kwDOCUB6oc5QAjXj
| 23,208 |
Proposed fix for TF example now running on safetensors.
|
{
"login": "Narsil",
"id": 204321,
"node_id": "MDQ6VXNlcjIwNDMyMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Narsil",
"html_url": "https://github.com/Narsil",
"followers_url": "https://api.github.com/users/Narsil/followers",
"following_url": "https://api.github.com/users/Narsil/following{/other_user}",
"gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Narsil/subscriptions",
"organizations_url": "https://api.github.com/users/Narsil/orgs",
"repos_url": "https://api.github.com/users/Narsil/repos",
"events_url": "https://api.github.com/users/Narsil/events{/privacy}",
"received_events_url": "https://api.github.com/users/Narsil/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"CI seems back to normal, can you just rebase on main to get the pin on the `tensofrlow_probability`?"
] | 1,683 | 1,683 | 1,683 |
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23208/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23208/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23208",
"html_url": "https://github.com/huggingface/transformers/pull/23208",
"diff_url": "https://github.com/huggingface/transformers/pull/23208.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23208.patch",
"merged_at": 1683651868000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23206
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23206/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23206/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23206/events
|
https://github.com/huggingface/transformers/issues/23206
| 1,700,229,914 |
I_kwDOCUB6oc5lV3Ma
| 23,206 |
NER Pipeline: Entities group with multiple hyphens
|
{
"login": "phungthomas",
"id": 13605853,
"node_id": "MDQ6VXNlcjEzNjA1ODUz",
"avatar_url": "https://avatars.githubusercontent.com/u/13605853?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/phungthomas",
"html_url": "https://github.com/phungthomas",
"followers_url": "https://api.github.com/users/phungthomas/followers",
"following_url": "https://api.github.com/users/phungthomas/following{/other_user}",
"gists_url": "https://api.github.com/users/phungthomas/gists{/gist_id}",
"starred_url": "https://api.github.com/users/phungthomas/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/phungthomas/subscriptions",
"organizations_url": "https://api.github.com/users/phungthomas/orgs",
"repos_url": "https://api.github.com/users/phungthomas/repos",
"events_url": "https://api.github.com/users/phungthomas/events{/privacy}",
"received_events_url": "https://api.github.com/users/phungthomas/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Can you use `B-tax_percent` instead ? It allows preserving the logic and should work out of the box, no ?",
"Thank you for your quick response. Yes I could rename them, and at the end that is what I did, but isn't it still a bug? I did not expect the names of my entities to be truncated.\r\n\r\nIn my case it would not quite work out of the box because I would need to pre-process the label names from my dataset to change the hyphens to underscores for training, and during inference post-process them to put the hyphens again, but of course it is not a big deal and that would be mostly a one-time thing.",
"> Yes I could rename them, and at the end that is what I did, but isn't it still a bug? I did not expect the names of my entities to be truncated.\r\n\r\nSort of, `B-` and `I-` are respected conventions and we could definitely split only once to preseve the other, but there's always going to be some specification there (like why `B` and `I`).\r\n\r\nPRs are welcome if you want to fix ! ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,683 | 1,686 | 1,686 |
NONE
| null |
### System Info
- `transformers` version: 4.27.4
- Platform: Linux-5.15.0-67-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.13.3
- PyTorch version (GPU?): 2.0.0+cu118 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help?
@Narsil
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I have a BERT model that has labels such as `B-tax-percent` or `I-tax-amount`.
When I run inference on my model and the entities are grouped, I only get the last part of my entity name, for example `percent` or `amount` instead of `tax-percent` or `tax-amount`.
Here is an example:
**config.json**:
```json
{
"_name_or_path": "Geotrend/distilbert-base-en-fr-cased",
"activation": "gelu",
"architectures": [
"DistilBertForTokenClassification"
],
"attention_dropout": 0.1,
"dim": 768,
"dropout": 0.1,
"hidden_dim": 3072,
"id2label": {
"0": "O",
"1": "B-curr",
"10": "B-date",
"11": "B-payment-date",
"12": "B-tax_^-percent",
// [...]
```
**inference.py:**
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline
tokenizer = AutoTokenizer.from_pretrained("Geotrend/distilbert-base-en-fr-cased")
model = AutoModelForTokenClassification.from_pretrained("./data/models")
nerpipeline = pipeline('ner', model=model, tokenizer=tokenizer, device=0, aggregation_strategy="average")
print(nerpipeline("""37.29%"""))
```
**Output**
```python
[{'entity_group': 'percent', 'score': 0.4425447, 'word': '37', 'start': 0, 'end': 2}, {'entity_group': 'percent', 'score': 0.462865, 'word': '.', 'start': 2, 'end': 3}, {'entity_group': 'percent', 'score': 0.5241904, 'word': '29', 'start': 3, 'end': 5}, {'entity_group': 'percent', 'score': 0.3016571, 'word': '%', 'start': 5, 'end': 6}]
```
### Expected behavior
I would expect the entity group to be named with the full label name, minus the "B" or "I" prefix.
```python
[{'entity_group': 'tax-percent', 'score': 0.4425447, 'word': '37', 'start': 0, 'end': 2}, {'entity_group': 'tax-percent', 'score': 0.462865, 'word': '.', 'start': 2, 'end': 3}, {'entity_group': 'tax-percent', 'score': 0.5241904, 'word': '29', 'start': 3, 'end': 5}, {'entity_group': 'tax-percent', 'score': 0.3016571, 'word': '%', 'start': 5, 'end': 6}]
```
It seems that the issue comes from this line, but maybe there is a reason I am not aware of, or maybe multiple hyphens in entity group names are considered bad practice?
https://github.com/huggingface/transformers/blob/04ab5605fbb4ef207b10bf2772d88c53fc242e83/src/transformers/pipelines/token_classification.py#L500
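A minimal sketch of the one-line fix discussed here, assuming the linked line strips the prefix with `split("-")[-1]`: splitting at most once from the left keeps everything after the `B-`/`I-` prefix intact.
```python
# assumed current behaviour: only the part after the *last* hyphen survives
print("B-tax-percent".split("-")[-1])     # -> "percent"

# splitting only once preserves the rest of the label
print("B-tax-percent".split("-", 1)[-1])  # -> "tax-percent"
```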
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23206/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23206/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23205
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23205/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23205/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23205/events
|
https://github.com/huggingface/transformers/pull/23205
| 1,700,221,902 |
PR_kwDOCUB6oc5P_6X_
| 23,205 |
add word-level timestamps to Whisper
|
{
"login": "hollance",
"id": 346853,
"node_id": "MDQ6VXNlcjM0Njg1Mw==",
"avatar_url": "https://avatars.githubusercontent.com/u/346853?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hollance",
"html_url": "https://github.com/hollance",
"followers_url": "https://api.github.com/users/hollance/followers",
"following_url": "https://api.github.com/users/hollance/following{/other_user}",
"gists_url": "https://api.github.com/users/hollance/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hollance/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hollance/subscriptions",
"organizations_url": "https://api.github.com/users/hollance/orgs",
"repos_url": "https://api.github.com/users/hollance/repos",
"events_url": "https://api.github.com/users/hollance/events{/privacy}",
"received_events_url": "https://api.github.com/users/hollance/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Design question:\r\n\r\nI added a new `return_word_timestamps` argument to `model.generate()`. This allows us to combine `return_timestamps` with `return_word_timestamps`. Note that `return_timestamps` is not required for `return_word_timestamps` to work; they use different methods of computing the timestamps.\r\n\r\nWhen `return_timestamps=True` and `return_word_timestamps=True`, the words and associated timestamps are added to the segment they belong to (this is what OpenAI does), like so:\r\n\r\n```python\r\n[\r\n {\r\n 'text': \"<|startoftranscript|><|en|><|transcribe|><|0.00|> Henry 5, Act 4, Scene 3.<|8.64|> ...\",\r\n 'offsets': [\r\n {\r\n 'text': ' Henry 5, Act 4, Scene 3.', \r\n 'timestamp': (0.0, 8.64), \r\n 'words': [ # this is the new bit\r\n (' Henry', (0.0, 1.2), 0.98),\r\n (' 5,', (1.3, 1.5), 0.95),\r\n (' Act', (2.2, 2.9), 0.57),\r\n ...\r\n ]\r\n },\r\n ...\r\n```\r\n\r\nWhen `return_timestamps=False` and `return_word_timestamps=True`, there are no segments and the word timestamps would look something like this: \r\n\r\n```python\r\n[\r\n {\r\n 'text': \"<|startoftranscript|><|en|><|transcribe|><|0.00|> Henry 5, Act 4, Scene 3.<|8.64|> ...\",\r\n 'words': [\r\n (' Henry', (0.0, 1.2), 0.98),\r\n (' 5,', (1.3, 1.5), 0.95),\r\n (' Act', (2.2, 2.9), 0.57),\r\n ...\r\n ]\r\n },\r\n ...\r\n```\r\n\r\nFor CTC models, the ASR pipeline lets you do `return_timestamps=\"words\"`. So instead of having a separate argument, it might be better to overload `return_timestamps` for this in Whisper as well. Then the question is: if using `\"words\"` do we also do the regular timestamp prediction or not?\r\n\r\nAlso interesting: The [whisper-timestamped](https://github.com/linto-ai/whisper-timestamped) repo uses the word-level timestamps to generate more accurate segment timestamps.\r\n\r\nSo the options are:\r\n\r\n1. have separate `return_timestamps` and `return_word_timestamps` arguments\r\n2. allow `return_timestamps=\"words\"`, which implies we also do the regular timestamps\r\n3. allow `return_timestamps=\"words\"`, but don't do regular timestamps\r\n4. allow `return_timestamps=\"words\"` and also use this to improve the regular timestamps as in the whisper-timestamped repo\r\n\r\nI'm learning towards option 4 but curious to hear other opinions.\r\n",
"Thanks for the explanation! Sounds good to me adding the new argument for word-level timestamps in `model.generate`. I would in favour of computing the segment-level timestamps when using pipeline with `return_timestamps=\"word\"` (option 2), since computing segment-level timestamps has been shown to greatly reduce Whisper's propensity to hallucinate with our chunking+batching algorithm. It adds extra decoding steps (since we generate more tokens overall), but in general users seem happy with this if it means the transcriptions are returned with greater accuracy.\r\n\r\nI'm a bit tentative about option 4 since it's a bit of an unofficial implementation that doesn't really quantify the performance gain compared to OpenAI's baseline method. Most users using transformers with word-level timestamps will expect vanilla DTW that matches the official implementation, but if there's a way we can guarantee a performance gain with little added complexity / overhead then it could make sense to add this. Do you have an example that demonstrates the gain we get from using `whisper-timestamped` and a good feel for how it would boost performance?",
"> Do you have an example that demonstrates the gain we get from using `whisper-timestamped` and a good feel for how it would boost performance?\r\n\r\nVarious papers, such as the WhisperX one, claim that Whisper's timestamps are often inaccurate. The word-level timestamps are more accurate because they map the words directly to a position in the input audio. However, I don't have any actual data on this. I'm also fine with leaving the original Whisper timestamps alone. ;-)",
"> Sounds good to me adding the new argument for word-level timestamps in `model.generate`. I would in favour of computing the segment-level timestamps when using pipeline with `return_timestamps=\"word\"`\r\n\r\nWhat do you think of also using `return_timestamps=\"word\"` in `model.generate()` instead of `return_word_timestamps=True`? This would then do both regular and word-level timestamps.\r\n",
"While doing the prep work for this new feature, I found that `tokenizer.decode(..., output_offsets=True)` doesn't include the last segment if no final timestamp is predicted. I made a separate issue for discussing that: https://github.com/huggingface/transformers/issues/23231\r\n\r\nWith that in mind, maybe we should not report the word-level timestamps per segment but like this: \r\n\r\n```python\r\n[\r\n {\r\n 'text': \"<|startoftranscript|><|en|><|transcribe|><|0.00|> Henry 5, Act 4, Scene 3.<|8.64|> ...\",\r\n 'offsets': [\r\n {\r\n 'text': ' Henry 5, Act 4, Scene 3.', \r\n 'timestamp': (0.0, 8.64), \r\n }, ... \r\n ]\r\n 'words': [ # this is the new bit\r\n { 'text': ' Henry', 'timestamp': (0.0, 1.2), 'probability': 0.98 },\r\n { 'text': ' 5,', 'timestamp': (1.3, 1.5), 'probability': 0.95 },\r\n { 'text': ' Act', 'timestamp': (2.2, 2.9), 'probability': 0.57 },\r\n ...\r\n ]\r\n },\r\n```\r\n\r\nThis way we can keep it independent from the segment decoding and you just get one big list of word timestamps for the entire input file (also when using the pipeline). This is actually somewhat simpler to implement. \r\n\r\nThe question is: would users prefer it this way or per segment? (They can always write code to look at the timestamps for the segments to figure out which segment a particular word belongs to.)\r\n\r\n(OpenAI puts the word-level timestamps inside the segments.)\r\n",
"Did an initial implementation in `model.generate()`. The argument is `return_token_timestamps` instead of `return_word_timestamps` because `generate()` doesn't know what words are, only tokens. \r\n\r\nBesides a tensor of predicted `token_ids`, this now also returns a tensor with the probability for each token, and a list with `(start time, end time)` tuples. Although since we're working with just tokens, I could change this to just the starting time (since the end time is always the starting time of the next token).\r\n\r\nThe code uses a simplified version of the OpenAI implementation. In particular, it doesn't filter out the special tokens such as `<|startoftranscription|>`. I was curious if that would matter — it seems that it actually does give somewhat worse results than OpenAI, so I'll have to change this to filter out the special tokens after all. 😅 \r\n",
"> What do you think of also using return_timestamps=\"word\" in model.generate() instead of return_word_timestamps=True?\r\n\r\nI would say \"yes\" if that makes the integration with `pipeline` easier (which I think it should if we already expect the argument `return_timestamps=\"word\"` in `pipeline`)",
"IMO returning the individual words and their timestamps **separately** to the segments is fine - I can't think of an obvious use case where you'd want segment-level timestamps and then refine to word-level. You'd probably always go straight for word-level if you needed them. Also cc @Narsil here - some interesting discussions regarding how we can fit word-level timestamps into the Whisper modelling code + ASR pipeline",
"> Although since we're working with just tokens, I could change this to just the starting time (since the end time is always the starting time of the next token).\r\n\r\nThis is because the \"space\" token accounts for the time between word `i` and word `(i-1)`, which we don't return a timestamp for when we decode tokens to words?",
"> This is because the \"space\" token accounts for the time between word `i` and word `(i-1)`, which we don't return a timestamp for when we decode tokens to words?\r\n\r\nNo, it's because the alignment that is derived from the cross-attentions simply assigns a timestamp to each token. If we know which tokens should be grouped together to form a word, then the timestamp for the first token in the word is the start time of the word and the timestamp for the last token of the word is its end time. \r\n\r\nBut here we're only working with tokens, not words. Of course there are many tokens that do correspond to a whole word, but there are also words that are comprised of multiple tokes. There is no start time or end time for a given token, only \"this token happens at around this time\". So there is always a certain amount of imprecision, which is worse for longer tokens.",
"> some interesting discussions regarding how we can fit word-level timestamps into the Whisper modelling code + ASR pipeline\r\n\r\n`model.generate()` will return a list of timestamps, one for every predicted token. This should be straightforward to integrate into the the pipeline, since in `_decode_asr` it will already filter the overlapping parts of the 30-second audio chunks at the token level. Since we know which tokens will be kept / dropped, we can simply keep / drop the corresponding timestamps.\r\n\r\nOnce we we have timestamps at the token level, we can use some basic rules (copied from OpenAI) to fuse these tokens and their timestamps into actual words, including punctuation.\r\n\r\nSo the integration with the pipeline should be less tricky than I initially thought. 😄 ",
"Another design question:\r\n\r\nOpenAI's implementation returns the probability of each word. This is the mean over the probabilities of the tokens making up the word. Seems useful enough to include this.\r\n\r\nI changed `model.generate()` to return the probability of each token along with the token timestamps. However, these probabilities aren't actually used to derive the timestamps.\r\n\r\nSo maybe we don't need to include code for this at all. Right now, if the user wants to get probabilities, they can simply pass in `return_scores=True` and that gives them the logits. Then they can apply softmax etc themselves to get the probabilities. The pipeline would also use `return_scores` to get those probabilities.\r\n\r\nI'm thinking that this is the cleanest solution. Having `model.generate()` return the token probabilities is like giving it the same functionality twice, since `return_scores=True` also returns this information already (just not as probabilities but as logits).",
"Here is a notebook that shows how to use the new functionality: https://colab.research.google.com/drive/10QS37Z3-5HNuiEubpb59n5GbC8-m5dQS?usp=sharing\r\n",
"The current implementation of `model.generate()` works and gives results similar to OpenAI (although not exactly the same, as we grab the cross-attentions while generating, whereas they run the model again on the generated sequence).\r\n\r\nSome more food for thought:\r\n\r\n* Larger Whisper variants give better results. However, each model needs its own `config.attention_heads`. So we'll need to update the `config.json` files on the Hub (and users who fine-tuned their models need to patch their config.json files if they want to use token-level timestamps).\r\n\r\n* I have experimented with two methods: 1) keep the special tokens when doing DTW on the cross-attention weights, 2) ignore the special tokens. OpenAI does the latter. I'm not sure which results I like better, the timing is always a bit off either way.\r\n\r\nPros of ignoring the special tokens:\r\n\r\n- this is essentially what OpenAI does, so we get similar results (but again, not exactly the same)\r\n\r\nCons of ignoring the special tokens:\r\n\r\n- not as batch-friendly (might not matter since the DTW implementation doesn't work on batches anyway)\r\n- we need a placeholder timestamp (currently using `-1.0`) for these special tokens in the output tensor, which may make the results more awkward to parse for the user\r\n\r\nEDIT: I did some more tests and with / without special tokens predicts pretty much the same timestamps, with only small differences. Keeping the special tokens in there seems to give slightly better results overall, so I'm going to revert to that.\r\n\r\nThe downside is that for padding tokens (`<|endoftext|>` at the end of sequence) it may predict nonsense timestamps, but the user will likely filter out these tokens afterwards anyway.",
"Hi @sanchit-gandhi and @Narsil, I'd like your feedback on the following:\r\n\r\nI've added `return_timestamps=\"word\"` to the ASR pipeline for Whisper. This calls `model.generate(..., return_token_timestamps=True)` to grab the token-level timestamps (see above for how that works) and then `_decode_asr()` turns this into word-level timestamps.\r\n\r\nWe now get this kind of output from the ASR pipeline:\r\n\r\n```python\r\n\"chunks\": [\r\n {\r\n \"text\": \"hi there\", \r\n \"timestamp\": (0.5, 1.6), \r\n \"words\": [\r\n {\"text\": \"hi\", \"timestamp\": (0.5, 0.9)}, \r\n {\"text\": \"there\", \"timestamp\": (1.0, 1.6)}]\r\n ]\r\n },\r\n ... next chunks ...\r\n]\r\n```\r\n\r\nIn other words, the word-level timestamps are organized per chunk. This is also how OpenAI does it. Doing it this way is a natural fit for the logic in `_decode_asr()` and `_find_longest_common_sequence()`, as the `token_timestamps` get split up exactly the same way as the regular `token_ids`.\r\n\r\nHowever, it's not what the ASR pipeline docs promise. When return_timestamps=\"word\", the expected output is:\r\n\r\n```python\r\n\"chunks\": [\r\n {\"text\": \"hi\", \"timestamp\": (0.5, 0.9)}, \r\n {\"text\": \"there\", \"timestamp\": (1.0, 1.6)}\r\n]\r\n```\r\n\r\nWe no longer have sentence fragments (such as `\"hi there\"`) and their timestamps, but only individual words. I could do it like this with a bit of a hack, but note that the word-level timestamps are not always reliable (sometimes the cross-attention weights get confused near the end of the audio segment) and so it might be useful to keep the other timestamps as well.\r\n\r\nQuestion 1: Which of the above outputs should we use?\r\n\r\nQuestion 2: Should this output also include word probabilities? The OpenAI version does this. We could do it too, but it's not going to make the code any prettier.\r\n\r\nQuestion 3: What do you think of my modifications to `_decode_asr()` and the tokenizer, is this the right way to go?\r\n\r\n(Note: There are a few more details for me to implement, so the code isn't 100% ready yet, but most of the logic is there.)",
"Made some modifications. The output now includes a list of tokens for every word:\r\n\r\n```python\r\n\"chunks\": [\r\n {\r\n \"text\": \"hi there\", \r\n \"timestamp\": (0.5, 1.6), \r\n \"words\": [\r\n {\"text\": \"hi\", \"timestamp\": (0.5, 0.9), \"tokens\": [123]}, \r\n {\"text\": \"there\", \"timestamp\": (1.0, 1.6), \"tokens: [456, 789]},\r\n ]\r\n },\r\n ... next chunks ...\r\n]\r\n```\r\n\r\nIf we also decide to add the probabilities (see above), then the output would look like this:\r\n\r\n```python\r\n\"chunks\": [\r\n {\r\n \"text\": \"hi there\", \r\n \"timestamp\": (0.5, 1.6), \r\n \"words\": [\r\n {\"text\": \"hi\", \"timestamp\": (0.5, 0.9), \"tokens\": [123], \"probability\": 0.98}, \r\n {\"text\": \"there\", \"timestamp\": (1.0, 1.6), \"tokens: [456, 789], \"probability\": 0.87},\r\n ]\r\n },\r\n ... next chunks ...\r\n]\r\n```\r\n",
"Link to a new Colab notebook demonstrating the pipeline with word-level timestamps: https://colab.research.google.com/drive/1hwTlVlkATbyXZCZ0XY5aSP7qZcBfHC-W?usp=sharing",
"With regards to your question about the output format for pipeline - the one that you've settled on sounds sensible to me given that the timestamp prediction is inherently different for words vs segments.\r\n\r\nI wonder if having the probabilities would be more useful than the tokens though? I can see use cases where you default back to the segment-level timestamps if the word-level ones are low confidence. Not sure when the tokens would necessarily be useful?",
"> I wonder if having the probabilities would be more useful than the tokens though? I can see use cases where you default back to the segment-level timestamps if the word-level ones are low confidence. Not sure when the tokens would necessarily be useful?\r\n\r\nJust to clarify: the probabilities are for the tokens / words, not the timestamps. \r\n\r\nI also can't think off the top of my head what you'd want to have the tokens for. ;-)\r\n",
"Kindly requesting a second review from @ArthurZucker in @Narsil's absence 🤗",
"Hi @amyeroberts, I think this PR is ready for a final core maintainer review. Thanks!",
"Made the requested changes, so this should be ready to go.",
"OK, I think that should be everything then. Feel free to merge!"
] | 1,683 | 1,687 | 1,687 |
CONTRIBUTOR
| null |
# What does this PR do?
Our implementation of Whisper currently can return timestamps but these cover long-ish segments of text and are not necessarily very accurate. This PR adds a method of predicting timestamps at the word (or even token) level, by analyzing the cross-attentions and applying dynamic time warping. This is also the method that OpenAI uses for their `word_timestamps` option, and the implementation in this PR is heavily based on their code.
For a preliminary exploration of how to do this with HF Transformers, [see this Colab notebook](https://colab.research.google.com/drive/1VWbAgzKWQsStdAA1hcumBU2uyFQX7zAB?usp=sharing).
Fixes https://github.com/huggingface/transformers/issues/21412 and https://github.com/huggingface/transformers/issues/22590
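A rough usage sketch of the feature (the argument name and output format follow the discussion in the comments and may differ from the final merged API):
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="openai/whisper-tiny.en",
)
# return_timestamps="word" asks for word-level timestamps derived from the
# cross-attentions + dynamic time warping described above
result = asr("sample.flac", return_timestamps="word")
for chunk in result["chunks"]:
    print(chunk["text"], chunk["timestamp"])
```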
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23205/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23205/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23205",
"html_url": "https://github.com/huggingface/transformers/pull/23205",
"diff_url": "https://github.com/huggingface/transformers/pull/23205.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23205.patch",
"merged_at": 1687362502000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23204
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23204/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23204/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23204/events
|
https://github.com/huggingface/transformers/pull/23204
| 1,700,189,395 |
PR_kwDOCUB6oc5P_zRG
| 23,204 |
New version of Accelerate for the Trainer
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,683 | 1,683 | 1,683 |
COLLABORATOR
| null |
# What does this PR do?
All is said in the title, the ongoing efforts to migrate the Trainer to Accelerate require the new version of Accelerate.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23204/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23204/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23204",
"html_url": "https://github.com/huggingface/transformers/pull/23204",
"diff_url": "https://github.com/huggingface/transformers/pull/23204.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23204.patch",
"merged_at": 1683553628000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23203
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23203/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23203/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23203/events
|
https://github.com/huggingface/transformers/issues/23203
| 1,699,969,359 |
I_kwDOCUB6oc5lU3lP
| 23,203 |
Whisper feature extraction: tiny condition check error
|
{
"login": "ozancaglayan",
"id": 330946,
"node_id": "MDQ6VXNlcjMzMDk0Ng==",
"avatar_url": "https://avatars.githubusercontent.com/u/330946?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ozancaglayan",
"html_url": "https://github.com/ozancaglayan",
"followers_url": "https://api.github.com/users/ozancaglayan/followers",
"following_url": "https://api.github.com/users/ozancaglayan/following{/other_user}",
"gists_url": "https://api.github.com/users/ozancaglayan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ozancaglayan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ozancaglayan/subscriptions",
"organizations_url": "https://api.github.com/users/ozancaglayan/orgs",
"repos_url": "https://api.github.com/users/ozancaglayan/repos",
"events_url": "https://api.github.com/users/ozancaglayan/events{/privacy}",
"received_events_url": "https://api.github.com/users/ozancaglayan/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @sanchit-gandhi ",
"Should be fixed by https://github.com/huggingface/transformers/pull/21998",
"Closed via #21998"
] | 1,683 | 1,686 | 1,686 |
NONE
| null |
Hi
The `frame_width` below should probably be compared against `n_fft`; otherwise this `if` is effectively always true and runs even when the frame width is already fine and no padding is needed.
https://github.com/huggingface/transformers/blob/04ab5605fbb4ef207b10bf2772d88c53fc242e83/src/transformers/models/whisper/feature_extraction_whisper.py#L163
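A minimal sketch of what the suggested comparison could look like (variable names are assumed from the surrounding framing logic, not copied from the file):
```python
import numpy as np

def frame_with_padding(waveform: np.ndarray, start: int, n_fft: int) -> np.ndarray:
    """Hypothetical helper: pad a frame only when it is shorter than the FFT window."""
    frame = waveform[start : start + n_fft]
    frame_width = frame.shape[0]
    if frame_width < n_fft:  # compare against n_fft, not a condition that is always true
        frame = np.pad(frame, (0, n_fft - frame_width), mode="constant")
    return frame
```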
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23203/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23203/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23202
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23202/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23202/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23202/events
|
https://github.com/huggingface/transformers/issues/23202
| 1,699,964,539 |
I_kwDOCUB6oc5lU2Z7
| 23,202 |
ImportError: cannot import name 'OpenLlamaForCausalLM' from 'transformers'
|
{
"login": "MoaazZaki",
"id": 44510702,
"node_id": "MDQ6VXNlcjQ0NTEwNzAy",
"avatar_url": "https://avatars.githubusercontent.com/u/44510702?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MoaazZaki",
"html_url": "https://github.com/MoaazZaki",
"followers_url": "https://api.github.com/users/MoaazZaki/followers",
"following_url": "https://api.github.com/users/MoaazZaki/following{/other_user}",
"gists_url": "https://api.github.com/users/MoaazZaki/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MoaazZaki/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MoaazZaki/subscriptions",
"organizations_url": "https://api.github.com/users/MoaazZaki/orgs",
"repos_url": "https://api.github.com/users/MoaazZaki/repos",
"events_url": "https://api.github.com/users/MoaazZaki/events{/privacy}",
"received_events_url": "https://api.github.com/users/MoaazZaki/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"This works for me on main. Are you sure you have pulled the latest changes if installing from the repo, or that you execute the code in the same environment you installed Transformers from source in?",
"It seems like it was an issue with pip failed to replace the current installed version with the main. Doing the following solve everthing:\r\n\r\n1- `pip uninstall transformers`\r\n2- Cloning the repo & `pip install -e .`\r\n\r\nThanks for the help 🙌"
] | 1,683 | 1,683 | 1,683 |
NONE
| null |
### System Info
- `transformers` version: 4.29.0.dev0
- Platform: Linux-5.15.0-1035-aws-x86_64-with-glibc2.17
- Python version: 3.8.16
- Huggingface_hub version: 0.14.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.0+cu117 (False)
- Tensorflow version (GPU?): 2.11.1 (False)
- Flax version (CPU?/GPU?/TPU?): 0.5.3 (cpu)
- Jax version: 0.3.6
- JaxLib version: 0.3.5
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
1. pip install git+https://github.com/huggingface/transformers.git#egg=transformers or clone & pip install -e .
2. `from transformers import OpenLlamaForCausalLM`
### Expected behavior
The module `OpenLlamaForCausalLM` should import successfully, since I tried with the latest version.
Am I doing something wrong here?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23202/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23202/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23201
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23201/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23201/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23201/events
|
https://github.com/huggingface/transformers/issues/23201
| 1,699,610,100 |
I_kwDOCUB6oc5lTf30
| 23,201 |
torch.jit support
|
{
"login": "chuckhope",
"id": 27415219,
"node_id": "MDQ6VXNlcjI3NDE1MjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/27415219?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chuckhope",
"html_url": "https://github.com/chuckhope",
"followers_url": "https://api.github.com/users/chuckhope/followers",
"following_url": "https://api.github.com/users/chuckhope/following{/other_user}",
"gists_url": "https://api.github.com/users/chuckhope/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chuckhope/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chuckhope/subscriptions",
"organizations_url": "https://api.github.com/users/chuckhope/orgs",
"repos_url": "https://api.github.com/users/chuckhope/repos",
"events_url": "https://api.github.com/users/chuckhope/events{/privacy}",
"received_events_url": "https://api.github.com/users/chuckhope/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"No it's not used by default since the Transformers library can't know whether you are going to use your model for training or inference. It also comes with constraints (like duplicating shared weights) so it's up to the user to activate it if the situation suits their needs. You can pass `torchscript=True` when loading your model to have the jit-compilation done for you.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,683 | 1,686 | 1,686 |
NONE
| null |
### Feature request
Hi there, is torch.jit used by default for model inference in the transformers library, for example, in the Auto series APIs? If not, why isn't it used by default? Thank you.
### Motivation
None
### Your contribution
None
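For context, a minimal sketch of opting into TorchScript explicitly rather than relying on any default (the model name is only an example):
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
# torchscript=True prepares the model for tracing (e.g. it unties shared weights)
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", torchscript=True)
model.eval()

inputs = tokenizer("Hello, world!", return_tensors="pt")
traced = torch.jit.trace(model, (inputs["input_ids"], inputs["attention_mask"]))
torch.jit.save(traced, "traced_bert.pt")
```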
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23201/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23201/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23200
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23200/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23200/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23200/events
|
https://github.com/huggingface/transformers/pull/23200
| 1,699,604,378 |
PR_kwDOCUB6oc5P9zjx
| 23,200 |
Update language_modeling.py
|
{
"login": "rishabhstha",
"id": 46356382,
"node_id": "MDQ6VXNlcjQ2MzU2Mzgy",
"avatar_url": "https://avatars.githubusercontent.com/u/46356382?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rishabhstha",
"html_url": "https://github.com/rishabhstha",
"followers_url": "https://api.github.com/users/rishabhstha/followers",
"following_url": "https://api.github.com/users/rishabhstha/following{/other_user}",
"gists_url": "https://api.github.com/users/rishabhstha/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rishabhstha/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rishabhstha/subscriptions",
"organizations_url": "https://api.github.com/users/rishabhstha/orgs",
"repos_url": "https://api.github.com/users/rishabhstha/repos",
"events_url": "https://api.github.com/users/rishabhstha/events{/privacy}",
"received_events_url": "https://api.github.com/users/rishabhstha/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23200). All of your documentation changes will be reflected on that endpoint.",
"This code is deprecated and not maintained anymore. To preprocess your data, we recommend you use the `datasets` library.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,683 | 1,686 | 1,686 |
NONE
| null |
Title: Add truncate_seq_pair function to TextDatasetForNextSentencePrediction
Description:
This PR adds a `truncate_seq_pair()` function to the `TextDatasetForNextSentencePrediction` class, inside the `create_examples_from_document()` function. This function truncates the sequences if they exceed the maximum number of tokens, which is useful when dealing with very long input texts. By including this function, the generated examples will adhere to the maximum sequence length constraint, which is important for the proper functioning of the model during pre-training.
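For reference, a sketch of what such a helper typically looks like (modeled on the original BERT pre-training code; the exact version added by this PR may differ):
```python
import random

def truncate_seq_pair(tokens_a, tokens_b, max_num_tokens):
    """Truncate two token lists in place until their combined length fits max_num_tokens."""
    while len(tokens_a) + len(tokens_b) > max_num_tokens:
        trunc_tokens = tokens_a if len(tokens_a) > len(tokens_b) else tokens_b
        # Remove from the front or the back at random to avoid a positional bias.
        if random.random() < 0.5:
            del trunc_tokens[0]
        else:
            trunc_tokens.pop()
```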
Fixes # (issue)
## Before submitting
- [x] I read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section.
- [x] I wrote any new necessary tests.
## Who can review?
- text models: @ArthurZucker and @younesbelkada
- tokenizers: @ArthurZucker
- trainer: @sgugger
Please let me know if there are any changes or improvements needed.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23200/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23200/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23200",
"html_url": "https://github.com/huggingface/transformers/pull/23200",
"diff_url": "https://github.com/huggingface/transformers/pull/23200.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23200.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/23199
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23199/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23199/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23199/events
|
https://github.com/huggingface/transformers/issues/23199
| 1,699,564,636 |
I_kwDOCUB6oc5lTUxc
| 23,199 |
Mismatch between config.vocab_size and len(tokenizer) in Flan-T5
|
{
"login": "Hannibal046",
"id": 38466901,
"node_id": "MDQ6VXNlcjM4NDY2OTAx",
"avatar_url": "https://avatars.githubusercontent.com/u/38466901?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Hannibal046",
"html_url": "https://github.com/Hannibal046",
"followers_url": "https://api.github.com/users/Hannibal046/followers",
"following_url": "https://api.github.com/users/Hannibal046/following{/other_user}",
"gists_url": "https://api.github.com/users/Hannibal046/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Hannibal046/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Hannibal046/subscriptions",
"organizations_url": "https://api.github.com/users/Hannibal046/orgs",
"repos_url": "https://api.github.com/users/Hannibal046/repos",
"events_url": "https://api.github.com/users/Hannibal046/events{/privacy}",
"received_events_url": "https://api.github.com/users/Hannibal046/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"See #4875.",
"Got it. Thanks! Voting for clarification.",
"I gather from the thread that it shouldn't be a problem. The size was increased for the ease of GPU usage. Is this creating any issues in inferencing or training the model? Just want to know for better understanding!",
"Ok, here is my usecase.\r\nI usually calculate loss myself rather than pass `labels` into a model for automatic loss calculation.\r\nFor example, when calculating commonly used **Cross Entropy Loss**, i would have to figure out vocab size myself.\r\n```python\r\nloss = F.cross_entropy(\r\n outputs.logits.view(-1,VOCAB_SIZE),\r\n labels.view(-1),\r\n label_smoothing=cfg.trainer.label_smoothing_factor\r\n )\r\n```\r\nSo i think in cases where `vocab_size` matters, the value from from `config.vocab_size` and `len(tokenzier)` should be consistent.",
"The pre-trained model which is provided by google had its vocab_size manually set to 32128 by them. Here's what I found in their github:\r\n<img width=\"809\" alt=\"Screenshot 2023-05-20 at 9 54 23 AM\" src=\"https://github.com/huggingface/transformers/assets/118152679/86b3bbf1-8d72-4b9f-807b-3ab6dfb3a2ef\">\r\nDid you try setting VOCAB_SIZE manually?",
"Hi, thanks for the response! I just mean when i need to know the value of `vocab_size`, imo it should be consistent with `len(tokenzier)` and `config.vocab_size` in huggingface."
] | 1,683 | 1,684 | 1,684 |
NONE
| null |
### System Info
- `transformers` version: 4.28.1
- Platform: Linux-5.15.0-1023-azure-x86_64-with-glibc2.17
- Python version: 3.8.16
- Huggingface_hub version: 0.14.1
- Safetensors version: not installed
- PyTorch version (GPU?): 1.13.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@ArthurZucker and @younesbelkada
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```python
from transformers import AutoTokenizer,AutoConfig
models = [
"google/flan-t5-small",
"google/flan-t5-base",
"google/flan-t5-large",
"google/flan-t5-xl",
"google/flan-t5-xxl",
]
for model in models:
config = AutoConfig.from_pretrained(model)
tokenizer = AutoTokenizer.from_pretrained(model)
print(f"{model}\n\tlen(tokenizer)={len(tokenizer)},tokenizer.vocab_size={tokenizer.vocab_size},config.vocab_size={config.vocab_size}")
```

### Expected behavior
The two are matched.
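Until they are, a small defensive sketch that sidesteps the mismatch when computing a loss by hand: read the vocabulary dimension from the logits instead of from the tokenizer.
```python
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-small")

inputs = tokenizer("translate English to German: hello", return_tensors="pt")
labels = tokenizer("hallo", return_tensors="pt").input_ids
outputs = model(**inputs, labels=labels)

vocab_size = outputs.logits.size(-1)  # 32128 = config.vocab_size, not len(tokenizer)
loss = F.cross_entropy(outputs.logits.view(-1, vocab_size), labels.view(-1))
```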
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23199/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23199/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23198
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23198/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23198/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23198/events
|
https://github.com/huggingface/transformers/issues/23198
| 1,699,414,158 |
I_kwDOCUB6oc5lSwCO
| 23,198 |
Should group_text in run_clm.py separate documents with special tokens?
|
{
"login": "verdimrc",
"id": 2340781,
"node_id": "MDQ6VXNlcjIzNDA3ODE=",
"avatar_url": "https://avatars.githubusercontent.com/u/2340781?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/verdimrc",
"html_url": "https://github.com/verdimrc",
"followers_url": "https://api.github.com/users/verdimrc/followers",
"following_url": "https://api.github.com/users/verdimrc/following{/other_user}",
"gists_url": "https://api.github.com/users/verdimrc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/verdimrc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/verdimrc/subscriptions",
"organizations_url": "https://api.github.com/users/verdimrc/orgs",
"repos_url": "https://api.github.com/users/verdimrc/repos",
"events_url": "https://api.github.com/users/verdimrc/events{/privacy}",
"received_events_url": "https://api.github.com/users/verdimrc/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"It shows one basic data preprocessing. It's up to you to customize it to your dataset and your needs :-)",
"Got it. Thank you @sgugger for the explanation.",
"I had similar confusion till I found this post. \r\n\r\nThis is how I address the issue\r\n```\r\n def tokenize_function(examples):\r\n assert tokenizer.pad_token is not None\r\n\r\n with CaptureLogger(tok_logger) as cl:\r\n output = tokenizer(\r\n examples[text_column_name],\r\n truncation=True, \r\n max_length=block_size,\r\n padding=\"max_length\",\r\n )\r\n # clm input could be much much longer than block_size\r\n if \"Token indices sequence length is longer than the\" in cl.out:\r\n tok_logger.warning(\r\n \"^^^^^^^^^^^^^^^^ Please ignore the warning above - this long input will be chunked into smaller bits\"\r\n \" before being passed to the model.\"\r\n )\r\n return output\r\n```\r\n"
] | 1,683 | 1,686 | 1,683 |
NONE
| null |
### System Info
- transformers version: 4.28.1
- platform: OSX Ventura 13.3.1 (M1)
- python version: 3.11.3
### Who can help?
@sgugger
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I observe that when running `run_clm.py` with gptj tokenizer, the `group_texts()` doesn't separate different "document" with a special token (for gptj tokenizer, eos = bos = padding). Is this something I need to handle myself?
Snippet from `run_clm.py`:
```python
from datasets import load_dataset
def tokenize_function(examples, text_column_name="text"):
...
# Main data processing function that will concatenate all texts from our dataset and generate chunks of block_size.
def group_texts(examples):
...
raw_datasets = load_dataset("wikitext", "wikitext-2-raw-v1", split="train[:5]")
tokenized_datasets = raw_datasets.map(
tokenize_function,
batched=True,
remove_columns=list(raw_datasets.features),
)
block_size = 8
lm_datasets = tokenized_datasets.map(group_texts, batched=True)
```
Inspecting `lm_datasets` shows the follows:
```python
>>> print(raw_datasets['text'])
['', ' = Valkyria Chronicles III = \n', '', ' Senjō no Valkyria ...', ...]
>>> print(tokenized_datasets['input_ids'])
[[], [796, 569, 18354, 7496, 17740, 6711, 796, 220, 198], [], [2311, 73, 13090, 645, 569, 18354, 7496, ...], ...]
>>> print(lm_datasets['input_ids'])
[[796, 569, 18354, 7496, 17740, 6711, 796, 220], [198, 2311, 73, 13090, 645, 569, 18354, 7496], ...]
```
As shown above, there's no eos or sep token (gptj tokenizer uses`<|endoftext|>` aka 50256 for both) in the `lm_datasets`
### Expected behavior
My understanding from the official tutorial ([link](https://www.youtube.com/watch?v=8PmhEIXhBvI&t=103s)), is to separate different documents with a special tokens.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23198/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23198/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23197
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23197/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23197/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23197/events
|
https://github.com/huggingface/transformers/issues/23197
| 1,699,260,830 |
I_kwDOCUB6oc5lSKme
| 23,197 |
BioGPT causal language model with unexpected error
|
{
"login": "junoriosity",
"id": 5286536,
"node_id": "MDQ6VXNlcjUyODY1MzY=",
"avatar_url": "https://avatars.githubusercontent.com/u/5286536?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/junoriosity",
"html_url": "https://github.com/junoriosity",
"followers_url": "https://api.github.com/users/junoriosity/followers",
"following_url": "https://api.github.com/users/junoriosity/following{/other_user}",
"gists_url": "https://api.github.com/users/junoriosity/gists{/gist_id}",
"starred_url": "https://api.github.com/users/junoriosity/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/junoriosity/subscriptions",
"organizations_url": "https://api.github.com/users/junoriosity/orgs",
"repos_url": "https://api.github.com/users/junoriosity/repos",
"events_url": "https://api.github.com/users/junoriosity/events{/privacy}",
"received_events_url": "https://api.github.com/users/junoriosity/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hey @junoriosity 👋 \r\n\r\nTwo notes:\r\n1. It seems like you are trying to generate text using BioGPT. Have you seen our `.generate()` function? ([guide](https://huggingface.co/docs/transformers/generation_strategies), [blog post](https://huggingface.co/blog/how-to-generate)); If you still want to do it manually, you need to configure the attention mask, which is why you see the exception. The attention mask is expected to have the shape `[batch_size, seq_len]`, where `seq_len` is the number of all input tokens so far (including the ones in `past_key_values`). \r\n2. When you share a script in an open-source repository for the contributors to help and/or debug, ensure it is self-contained (including imports and, if needed, all the data). We have many requests for help, which we can only attend at a decent pace if you help us too 🤗 See the script below for an example of a complete stand-alone reproducer:\r\n\r\n```python\r\nimport torch\r\nimport torch.nn.functional as F\r\nfrom transformers import AutoTokenizer, AutoModelForCausalLM\r\n\r\ndevice = \"cuda\"\r\ntokenizer = AutoTokenizer.from_pretrained(\"microsoft/biogpt\")\r\nmodel = AutoModelForCausalLM.from_pretrained(\"microsoft/biogpt\").to(device)\r\n\r\ninput_sequence = \"Hello, I'm a language model,\"\r\n\r\ninputs = torch.as_tensor(tokenizer.encode(input_sequence)).unsqueeze(0).to(device)\r\npast_key_values = None\r\n\r\ncount = 0\r\ncomplete_token = []\r\nwith torch.no_grad():\r\n while count < 10:\r\n count += 1\r\n print(\"Iteration no.: \" + str(count))\r\n if count > 1:\r\n inputs = input_token\r\n\r\n model_out = model(input_ids=inputs.to(device), past_key_values=past_key_values)\r\n logits = model_out.logits[:, -1, :]\r\n past_key_values = model_out.past_key_values\r\n\r\n topk_values, topk_indices = torch.topk(logits, 5)\r\n\r\n log_probs = F.softmax(topk_values, dim=-1)\r\n inputs_in_topk = torch.multinomial(log_probs, num_samples=1, replacement=True)\r\n input_token = torch.gather(topk_indices, 1, inputs_in_topk)\r\n complete_token.append(input_token)\r\n```",
"@gante Sorry, I have used a Jupyter notebook and so the initial loading of libraries etc. is something I usually do in the first paragraphs - and I have overlooked it. My bad ... sorry. 🙂\r\n\r\nRegarding the attention mask, this is very new to me. For instance, when I make use of the OPT models, I just have to enter the the `past_key_values` like before (essentially `model_out.past_key_values`) and everything is fine. \r\n\r\nI have no clue right now, where I can get the `attention_mask` from resp. how I can create such an object from scratch.\r\nIf you could help me here, that would be awesome. 🙂",
"@junoriosity The attention mask is simply an integer tensor with the same shape as the inputs, with `1` on real tokens and `0` on padding. In particular for the attention mask update at generation time, see [this line](https://github.com/huggingface/transformers/blob/006da469dd5a465f4551f4245f780e3b1e92b76c/src/transformers/generation/utils.py#L766). However, it assumes that a starting attention mask exists, which you can obtain from `tokenizer(input_sequence).attention_mask`. \r\n\r\nBTW, I would highly recommend using `.generate()` unless you are experimenting with new decoding strategies. There are many corner cases handled in there :)",
"@gante Many thanks for your suggestion. 🙂 Here is what I did now :\r\n\r\n```\r\nimport torch\r\nimport torch.nn.functional as F\r\nfrom transformers import AutoTokenizer, AutoModelForCausalLM\r\n\r\ndevice = \"cuda\"\r\ntokenizer = AutoTokenizer.from_pretrained(\"microsoft/biogpt\")\r\nmodel = AutoModelForCausalLM.from_pretrained(\"microsoft/biogpt\").to(device)\r\n\r\ninput_sequence = \"Hello, I'm a language model,\"\r\n\r\ninputs = torch.as_tensor(tokenizer.encode(input_sequence)).unsqueeze(0).to(device)\r\nattention_mask = torch.as_tensor(tokenizer(input_sequence).attention_mask).unsqueeze(0).to(device)\r\npast_key_values = None\r\n\r\ncount = 0\r\ncomplete_token = []\r\nwith torch.no_grad():\r\n while count < 10:\r\n count += 1\r\n print(\"Iteration no.: \" + str(count))\r\n if count > 1:\r\n inputs = input_token\r\n \r\n print(inputs.to(device))\r\n print(attention_mask)\r\n \r\n model_out = model(input_ids=inputs.to(device), attention_mask=attention_mask, past_key_values=past_key_values)\r\n logits = model_out.logits[:, -1, :]\r\n past_key_values = model_out.past_key_values\r\n\r\n topk_values, topk_indices = torch.topk(logits, 5)\r\n\r\n log_probs = F.softmax(topk_values, dim=-1)\r\n inputs_in_topk = torch.multinomial(log_probs, num_samples=1, replacement=True)\r\n input_token = torch.gather(topk_indices, 1, inputs_in_topk)\r\n attention_mask = torch.as_tensor([1]).unsqueeze(0).to(device)\r\n complete_token.append(input_token)\r\n```\r\n\r\nand the output is \r\n\r\n```\r\nIteration no.: 1\r\ntensor([[ 2, 313, 3666, 399, 7, 174, 4617, 659, 14, 2545, 144, 7]],\r\n device='cuda:0')\r\ntensor([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]], device='cuda:0')\r\nIteration no.: 2\r\ntensor([[8]], device='cuda:0')\r\ntensor([[1]], device='cuda:0')\r\n---------------------------------------------------------------------------\r\nValueError Traceback (most recent call last)\r\n/tmp/ipykernel_11426/272587798.py in <cell line: 3>()\r\n 11 print(attention_mask)\r\n 12 \r\n---> 13 model_out = model(input_ids=inputs.to(device), attention_mask=attention_mask, past_key_values=past_key_values)\r\n 14 logits = model_out.logits[:, -1, :]\r\n 15 past_key_values = model_out.past_key_values\r\n\r\n~/anaconda3/envs/pytorch_p39/lib/python3.9/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)\r\n 1192 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks\r\n 1193 or _global_forward_hooks or _global_forward_pre_hooks):\r\n-> 1194 return forward_call(*input, **kwargs)\r\n 1195 # Do not call functions when jit is used\r\n 1196 full_backward_hooks, non_full_backward_hooks = [], []\r\n\r\n~/anaconda3/envs/pytorch_p39/lib/python3.9/site-packages/transformers/models/biogpt/modeling_biogpt.py in forward(self, input_ids, attention_mask, head_mask, inputs_embeds, past_key_values, labels, use_cache, output_attentions, output_hidden_states, return_dict)\r\n 677 return_dict = return_dict if return_dict is not None else self.config.use_return_dict\r\n 678 \r\n--> 679 outputs = self.biogpt(\r\n 680 input_ids,\r\n 681 attention_mask=attention_mask,\r\n\r\n~/anaconda3/envs/pytorch_p39/lib/python3.9/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)\r\n 1192 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks\r\n 1193 or _global_forward_hooks or _global_forward_pre_hooks):\r\n-> 1194 return forward_call(*input, **kwargs)\r\n 1195 # Do not call functions when jit is used\r\n 
1196 full_backward_hooks, non_full_backward_hooks = [], []\r\n\r\n~/anaconda3/envs/pytorch_p39/lib/python3.9/site-packages/transformers/models/biogpt/modeling_biogpt.py in forward(self, input_ids, attention_mask, head_mask, inputs_embeds, past_key_values, use_cache, output_attentions, output_hidden_states, return_dict)\r\n 589 )\r\n 590 else:\r\n--> 591 layer_outputs = decoder_layer(\r\n 592 hidden_states,\r\n 593 attention_mask=attention_mask,\r\n\r\n~/anaconda3/envs/pytorch_p39/lib/python3.9/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)\r\n 1192 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks\r\n 1193 or _global_forward_hooks or _global_forward_pre_hooks):\r\n-> 1194 return forward_call(*input, **kwargs)\r\n 1195 # Do not call functions when jit is used\r\n 1196 full_backward_hooks, non_full_backward_hooks = [], []\r\n\r\n~/anaconda3/envs/pytorch_p39/lib/python3.9/site-packages/transformers/models/biogpt/modeling_biogpt.py in forward(self, hidden_states, attention_mask, layer_head_mask, past_key_value, output_attentions, use_cache)\r\n 313 self_attn_past_key_value = past_key_value[:2] if past_key_value is not None else None\r\n 314 # add present self-attn cache to positions 1,2 of present_key_value tuple\r\n--> 315 hidden_states, self_attn_weights, present_key_value = self.self_attn(\r\n 316 hidden_states=hidden_states,\r\n 317 past_key_value=self_attn_past_key_value,\r\n\r\n~/anaconda3/envs/pytorch_p39/lib/python3.9/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)\r\n 1192 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks\r\n 1193 or _global_forward_hooks or _global_forward_pre_hooks):\r\n-> 1194 return forward_call(*input, **kwargs)\r\n 1195 # Do not call functions when jit is used\r\n 1196 full_backward_hooks, non_full_backward_hooks = [], []\r\n\r\n~/anaconda3/envs/pytorch_p39/lib/python3.9/site-packages/transformers/models/biogpt/modeling_biogpt.py in forward(self, hidden_states, key_value_states, past_key_value, attention_mask, layer_head_mask, output_attentions)\r\n 211 if attention_mask is not None:\r\n 212 if attention_mask.size() != (bsz, 1, tgt_len, src_len):\r\n--> 213 raise ValueError(\r\n 214 f\"Attention mask should be of size {(bsz, 1, tgt_len, src_len)}, but is {attention_mask.size()}\"\r\n 215 )\r\n\r\nValueError: Attention mask should be of size (1, 1, 0, 12), but is torch.Size([1, 1, 1, 1])\r\n```\r\n\r\nDo you know, how would I have to set the `attention_mask` in the iteration step?\r\n",
"@junoriosity \r\n\r\nWe reserve these issues for bugs, as we don't have the capacity to provide hands-on support. I'm afraid you'll have to dig deeper in the code I shared, the answer is there :)",
"@gante I understand you well. However, from my understanding the `attention_mask` in the second run should not reflect anything related to the 12 tokens and instead just have length 1, if I just enter one token.",
"That is not correct -- you are passing 1 new token, and ~11~ N past tokens (cached in `past_key_values`) :)",
"@gante But the initial input is already 12 tokens\r\n<img width=\"1103\" alt=\"Bildschirmfoto 2023-05-10 um 16 20 06\" src=\"https://github.com/huggingface/transformers/assets/5286536/42e9537b-dc28-47e3-9cee-551f23dbdbe6\">\r\nHence, we would be at 13 tokens then ... or am I confusing something?",
"Updated the message above. You are right, not 11 tokens, but the exact number of tokens is not the important part here.\r\n\r\nOne issue remains, though: the exception is not correct in the presence of `past_key_values` and is very misleading 👀 The attention mask must be of shape `[batch_size, new+cached tokens]`, so the answer consists of concatenating `[[1]]` at the end of each iteration. I'll open a PR to fix the exception message.\r\n\r\nIn the end, there was indeed a bug, so here's a working solution. And pardon me for my pushback -- it's the only way we can keep replying to blocking issues at a good pace :)\r\n\r\n```py\r\nimport torch\r\nimport torch.nn.functional as F\r\nfrom transformers import AutoTokenizer, AutoModelForCausalLM\r\n\r\ndevice = \"cuda\"\r\ntokenizer = AutoTokenizer.from_pretrained(\"microsoft/biogpt\")\r\nmodel = AutoModelForCausalLM.from_pretrained(\"microsoft/biogpt\").to(device)\r\n\r\ninput_sequence = \"Hello, I'm a language model,\"\r\n\r\ninputs = torch.as_tensor(tokenizer.encode(input_sequence)).unsqueeze(0).to(device)\r\nattention_mask = torch.as_tensor(tokenizer(input_sequence).attention_mask).unsqueeze(0).to(device)\r\npast_key_values = None\r\n\r\ncount = 0\r\ncomplete_token = []\r\nwith torch.no_grad():\r\n while count < 10:\r\n count += 1\r\n print(\"Iteration no.: \" + str(count))\r\n if count > 1:\r\n inputs = input_token\r\n\r\n print(inputs.to(device))\r\n print(attention_mask)\r\n print(past_key_values[0][0].shape if past_key_values else None)\r\n\r\n model_out = model(input_ids=inputs.to(device), attention_mask=attention_mask, past_key_values=past_key_values)\r\n logits = model_out.logits[:, -1, :]\r\n past_key_values = model_out.past_key_values\r\n\r\n topk_values, topk_indices = torch.topk(logits, 5)\r\n\r\n log_probs = F.softmax(topk_values, dim=-1)\r\n inputs_in_topk = torch.multinomial(log_probs, num_samples=1, replacement=True)\r\n input_token = torch.gather(topk_indices, 1, inputs_in_topk)\r\n attention_mask = torch.concat((attention_mask, torch.tensor([[1]]).to(attention_mask.device)), dim=1)\r\n complete_token.append(input_token)\r\n```",
"@gante Many thanks for all your effort! 🤗\r\n\r\nWhat is quite interesting, is that my initial approach works with \r\n\r\n```\r\nfrom transformers import BloomTokenizerFast, BloomForCausalLM\r\nfrom transformers.models.opt import OPTForCausalLM\r\nfrom transformers import AutoTokenizer\r\n```\r\n\r\ni.e., if I use\r\n\r\n```\r\ntokenizer = AutoTokenizer.from_pretrained('facebook/opt-13b')\r\nmodel = OPTForCausalLM.from_pretrained(\"facebook/opt-13b\").to(device)\r\n```\r\n\r\nresp. \r\n\r\n```\r\ntokenizer = BloomTokenizerFast.from_pretrained(\"bigscience/bloom-560m\")\r\nmodel = BloomForCausalLM.from_pretrained(\"bigscience/bloom-560m\").to(device)\r\n```\r\n\r\nPerhaps it might be sensible to standardize it - but, of course, there might be some reasons that are a hindrance, that I am not aware of. 🙂",
"Some models (like OPT) have \"better\" default behavior in the absence of attention masks. We will probably move in that direction in the future.\r\n\r\nHowever, the easier defaults come at a price -- at best, they require creating it from scratch at every forward pass (and at worst, you may get incorrect results). I'd recommend creating and manually manipulating the attention mask whenever possible, to avoid nasty surprises :) ",
"@gante In any case, many thanks for all your kind support. 🤗",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,683 | 1,686 | 1,686 |
NONE
| null |
### System Info
transformers==4.28.0
### Who can help?
@ArthurZucker @younesbelkada @gante
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
input_sequence = "Hello, I'm a language model,"
inputs = torch.as_tensor(tokenizer.encode(input_sequence)).unsqueeze(0).to(device)
past_key_values = None
count = 0
complete_token = []
with torch.no_grad():
while count<10:
count += 1
print("Iteration no.: " + str(count))
if count > 1:
inputs = input_token
model_out = model(input_ids=inputs.to(device), past_key_values=past_key_values)
logits = model_out.logits[:, -1, :]
past_key_values = model_out.past_key_values
topk_values, topk_indices = torch.topk(logits, 5)
log_probs = F.softmax(topk_values, dim=-1)
inputs_in_topk = torch.multinomial(log_probs, num_samples=1, replacement=True)
input_token = torch.gather(topk_indices, 1, inputs_in_topk)
complete_token.append(input_token)
```
### Expected behavior
I am trying to use a Causal Language Model from BioGPT. However, I got a strange error.
Here are my steps:
First, I installed `transformers` and `sacremoses`:
```
!pip install transformers sacremoses -q
```
Then I executed the code from above.
And here is the error I got:
```
Iteration no.: 1
Iteration no.: 2
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
/tmp/ipykernel_18990/2689790310.py in <cell line: 8>()
13 inputs = input_token
14
---> 15 model_out = model(input_ids=inputs.to(device), past_key_values=past_key_values)
16 logits = model_out.logits[:, -1, :]
17 past_key_values = model_out.past_key_values
~/anaconda3/envs/pytorch_p39/lib/python3.9/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
1192 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1193 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1194 return forward_call(*input, **kwargs)
1195 # Do not call functions when jit is used
1196 full_backward_hooks, non_full_backward_hooks = [], []
~/anaconda3/envs/pytorch_p39/lib/python3.9/site-packages/transformers/models/biogpt/modeling_biogpt.py in forward(self, input_ids, attention_mask, head_mask, inputs_embeds, past_key_values, labels, use_cache, output_attentions, output_hidden_states, return_dict)
677 return_dict = return_dict if return_dict is not None else self.config.use_return_dict
678
--> 679 outputs = self.biogpt(
680 input_ids,
681 attention_mask=attention_mask,
~/anaconda3/envs/pytorch_p39/lib/python3.9/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
1192 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1193 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1194 return forward_call(*input, **kwargs)
1195 # Do not call functions when jit is used
1196 full_backward_hooks, non_full_backward_hooks = [], []
~/anaconda3/envs/pytorch_p39/lib/python3.9/site-packages/transformers/models/biogpt/modeling_biogpt.py in forward(self, input_ids, attention_mask, head_mask, inputs_embeds, past_key_values, use_cache, output_attentions, output_hidden_states, return_dict)
589 )
590 else:
--> 591 layer_outputs = decoder_layer(
592 hidden_states,
593 attention_mask=attention_mask,
~/anaconda3/envs/pytorch_p39/lib/python3.9/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
1192 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1193 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1194 return forward_call(*input, **kwargs)
1195 # Do not call functions when jit is used
1196 full_backward_hooks, non_full_backward_hooks = [], []
~/anaconda3/envs/pytorch_p39/lib/python3.9/site-packages/transformers/models/biogpt/modeling_biogpt.py in forward(self, hidden_states, attention_mask, layer_head_mask, past_key_value, output_attentions, use_cache)
313 self_attn_past_key_value = past_key_value[:2] if past_key_value is not None else None
314 # add present self-attn cache to positions 1,2 of present_key_value tuple
--> 315 hidden_states, self_attn_weights, present_key_value = self.self_attn(
316 hidden_states=hidden_states,
317 past_key_value=self_attn_past_key_value,
~/anaconda3/envs/pytorch_p39/lib/python3.9/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
1192 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1193 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1194 return forward_call(*input, **kwargs)
1195 # Do not call functions when jit is used
1196 full_backward_hooks, non_full_backward_hooks = [], []
~/anaconda3/envs/pytorch_p39/lib/python3.9/site-packages/transformers/models/biogpt/modeling_biogpt.py in forward(self, hidden_states, key_value_states, past_key_value, attention_mask, layer_head_mask, output_attentions)
211 if attention_mask is not None:
212 if attention_mask.size() != (bsz, 1, tgt_len, src_len):
--> 213 raise ValueError(
214 f"Attention mask should be of size {(bsz, 1, tgt_len, src_len)}, but is {attention_mask.size()}"
215 )
ValueError: Attention mask should be of size (1, 1, 0, 12), but is torch.Size([1, 1, 1, 1])
```
So apparently, everything went fine in the first execution, but the in the second model call this error came up.
Do you know how to fix this? 🙂
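For reference, here is a sketch of the same top-5 sampling done with the built-in generation loop, which manages `past_key_values` and the attention mask internally. It assumes the `tokenizer`, `model`, and `device` objects created above; the sampling settings are illustrative.

```python
inputs = tokenizer("Hello, I'm a language model,", return_tensors="pt").to(device)

# generate() extends the attention mask and the cache for every new token.
generated = model.generate(
    **inputs,
    do_sample=True,
    top_k=5,            # mirrors the manual top-5 sampling in the loop above
    max_new_tokens=10,  # same number of steps as the manual loop
)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```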
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23197/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23197/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23196
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23196/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23196/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23196/events
|
https://github.com/huggingface/transformers/pull/23196
| 1,699,241,368 |
PR_kwDOCUB6oc5P8lk1
| 23,196 |
Update convert_dialogpt_original_pytorch_checkpoint_to_pytorch.py
|
{
"login": "detasar",
"id": 19317091,
"node_id": "MDQ6VXNlcjE5MzE3MDkx",
"avatar_url": "https://avatars.githubusercontent.com/u/19317091?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/detasar",
"html_url": "https://github.com/detasar",
"followers_url": "https://api.github.com/users/detasar/followers",
"following_url": "https://api.github.com/users/detasar/following{/other_user}",
"gists_url": "https://api.github.com/users/detasar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/detasar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/detasar/subscriptions",
"organizations_url": "https://api.github.com/users/detasar/orgs",
"repos_url": "https://api.github.com/users/detasar/repos",
"events_url": "https://api.github.com/users/detasar/events{/privacy}",
"received_events_url": "https://api.github.com/users/detasar/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,683 | 1,684 | 1,684 |
NONE
| null |
The improvements include the addition of a main function and better variable naming for readability.
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23196/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23196/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23196",
"html_url": "https://github.com/huggingface/transformers/pull/23196",
"diff_url": "https://github.com/huggingface/transformers/pull/23196.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23196.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/23195
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23195/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23195/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23195/events
|
https://github.com/huggingface/transformers/pull/23195
| 1,699,239,682 |
PR_kwDOCUB6oc5P8lPh
| 23,195 |
Update video_classification.py
|
{
"login": "detasar",
"id": 19317091,
"node_id": "MDQ6VXNlcjE5MzE3MDkx",
"avatar_url": "https://avatars.githubusercontent.com/u/19317091?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/detasar",
"html_url": "https://github.com/detasar",
"followers_url": "https://api.github.com/users/detasar/followers",
"following_url": "https://api.github.com/users/detasar/following{/other_user}",
"gists_url": "https://api.github.com/users/detasar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/detasar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/detasar/subscriptions",
"organizations_url": "https://api.github.com/users/detasar/orgs",
"repos_url": "https://api.github.com/users/detasar/repos",
"events_url": "https://api.github.com/users/detasar/events{/privacy}",
"received_events_url": "https://api.github.com/users/detasar/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23195). All of your documentation changes will be reflected on that endpoint.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,683 | 1,686 | 1,686 |
NONE
| null |
The improvements include better handling of the libraries and more straightforward code.
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23195/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23195/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23195",
"html_url": "https://github.com/huggingface/transformers/pull/23195",
"diff_url": "https://github.com/huggingface/transformers/pull/23195.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23195.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/23194
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23194/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23194/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23194/events
|
https://github.com/huggingface/transformers/pull/23194
| 1,699,168,654 |
PR_kwDOCUB6oc5P8WpU
| 23,194 |
Fix hf_argparser.parse_json_file to open file with utf-8 encoding, close file when finished
|
{
"login": "RobertBaruch",
"id": 1783950,
"node_id": "MDQ6VXNlcjE3ODM5NTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/1783950?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RobertBaruch",
"html_url": "https://github.com/RobertBaruch",
"followers_url": "https://api.github.com/users/RobertBaruch/followers",
"following_url": "https://api.github.com/users/RobertBaruch/following{/other_user}",
"gists_url": "https://api.github.com/users/RobertBaruch/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RobertBaruch/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RobertBaruch/subscriptions",
"organizations_url": "https://api.github.com/users/RobertBaruch/orgs",
"repos_url": "https://api.github.com/users/RobertBaruch/repos",
"events_url": "https://api.github.com/users/RobertBaruch/events{/privacy}",
"received_events_url": "https://api.github.com/users/RobertBaruch/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,683 | 1,683 | 1,683 |
CONTRIBUTOR
| null |
# What does this PR do?
Fixes #23193
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23194/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23194/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23194",
"html_url": "https://github.com/huggingface/transformers/pull/23194",
"diff_url": "https://github.com/huggingface/transformers/pull/23194.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23194.patch",
"merged_at": 1683500785000
}
|