| column | dtype | values |
|---|---|---|
| url | string | lengths 62 to 66 |
| repository_url | string | 1 class |
| labels_url | string | lengths 76 to 80 |
| comments_url | string | lengths 71 to 75 |
| events_url | string | lengths 69 to 73 |
| html_url | string | lengths 50 to 56 |
| id | int64 | 377M to 2.15B |
| node_id | string | lengths 18 to 32 |
| number | int64 | 1 to 29.2k |
| title | string | lengths 1 to 487 |
| user | dict | |
| labels | list | |
| state | string | 2 classes |
| locked | bool | 2 classes |
| assignee | dict | |
| assignees | list | |
| comments | list | |
| created_at | int64 | 1.54k to 1.71k |
| updated_at | int64 | 1.54k to 1.71k |
| closed_at | int64 | 1.54k to 1.71k, nullable (⌀) |
| author_association | string | 4 classes |
| active_lock_reason | string | 2 classes |
| body | string | lengths 0 to 234k, nullable (⌀) |
| reactions | dict | |
| timeline_url | string | lengths 71 to 75 |
| state_reason | string | 3 classes |
| draft | bool | 2 classes |
| pull_request | dict | |
https://api.github.com/repos/huggingface/transformers/issues/25624
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25624/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25624/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25624/events
|
https://github.com/huggingface/transformers/pull/25624
| 1,858,793,987 |
PR_kwDOCUB6oc5YWwbR
| 25,624 |
Fix PEFT integration failures on nightly CI
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Sounds great, thank you all !",
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,692 | 1,692 | 1,692 |
CONTRIBUTOR
| null |
# What does this PR do?
Fixes all the CI failures that occur when PEFT is installed and that raise the following error:
```bash
TypeError: stat: path should be string, bytes, os.PathLike or integer, not NoneType
```
The failing tests are the `test_from_pretrained_no_checkpoint` tests, which call `xxx.from_pretrained()` with `model_id` explicitly set to `None`. When PEFT is installed (which has been the case on the daily CI runners since #25077), `from_pretrained` looks for adapter files in the model repo, and a check that skips that lookup when `model_id` is `None` was forgotten.
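For illustration, the missing check boils down to short-circuiting the adapter lookup when no model id is given. The sketch below is an assumption about the shape of the fix, not the actual diff, and the helper name is hypothetical:
```python
# Hypothetical sketch of the guard described above -- not the actual PR diff.
def maybe_find_adapter_config(model_id, find_adapter_config_file):
    # `find_adapter_config_file` stands in for the PEFT utility that probes a
    # repo or local path for an adapter config; calling it with None is what
    # triggers the `stat: path should be string ... not NoneType` error above.
    if model_id is None:
        return None
    return find_adapter_config_file(model_id)
```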
cc @ydshieh @sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25624/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25624/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25624",
"html_url": "https://github.com/huggingface/transformers/pull/25624",
"diff_url": "https://github.com/huggingface/transformers/pull/25624.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25624.patch",
"merged_at": 1692605084000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25623
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25623/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25623/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25623/events
|
https://github.com/huggingface/transformers/pull/25623
| 1,858,627,362 |
PR_kwDOCUB6oc5YWMQP
| 25,623 |
Ignore all exceptions from signal in dynamic code
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,692 | 1,692 | 1,692 |
COLLABORATOR
| null |
# What does this PR do?
#25613 shows we can have different exception types coming from `signal` when asking the user if they want to trust remote code or not. This PR ignores them all.
As a reminder, this is only a convenience function that tries to help users who did not set `trust_remote_code=True`, so all of these errors end up as "Please set `trust_remote_code`".
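A minimal sketch of the behavior described above, assuming a signal-based timeout around the prompt (the helper names are mine, not the actual transformers code):
```python
import signal

def _timeout_handler(signum, frame):
    raise TimeoutError("No response from the user.")

def resolve_trust_remote_code(trust_remote_code, prompt):
    # Sketch only: `prompt` is a callable such as
    # lambda: input("Do you trust this code? [y/N] ")
    if trust_remote_code is not None:
        return trust_remote_code
    try:
        # signal.SIGALRM only exists on POSIX and only works on the main
        # thread, so several different exception types can surface here.
        signal.signal(signal.SIGALRM, _timeout_handler)
        signal.alarm(15)
        answer = prompt()
        signal.alarm(0)
        return answer.strip().lower() in ("y", "yes")
    except Exception:
        # Swallow everything and fall back to the usual instruction.
        raise ValueError(
            "This repository contains custom code which must be executed; "
            "please pass `trust_remote_code=True` to allow it."
        )
```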
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25623/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25623/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25623",
"html_url": "https://github.com/huggingface/transformers/pull/25623",
"diff_url": "https://github.com/huggingface/transformers/pull/25623.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25623.patch",
"merged_at": 1692601272000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25622
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25622/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25622/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25622/events
|
https://github.com/huggingface/transformers/issues/25622
| 1,858,585,997 |
I_kwDOCUB6oc5ux8WN
| 25,622 |
mismatch error of loading CLIPVisionModelWithProjection
|
{
"login": "garychan22",
"id": 108175311,
"node_id": "U_kgDOBnKfzw",
"avatar_url": "https://avatars.githubusercontent.com/u/108175311?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/garychan22",
"html_url": "https://github.com/garychan22",
"followers_url": "https://api.github.com/users/garychan22/followers",
"following_url": "https://api.github.com/users/garychan22/following{/other_user}",
"gists_url": "https://api.github.com/users/garychan22/gists{/gist_id}",
"starred_url": "https://api.github.com/users/garychan22/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/garychan22/subscriptions",
"organizations_url": "https://api.github.com/users/garychan22/orgs",
"repos_url": "https://api.github.com/users/garychan22/repos",
"events_url": "https://api.github.com/users/garychan22/events{/privacy}",
"received_events_url": "https://api.github.com/users/garychan22/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @younesbelkada and @amyeroberts ",
"Hi @garychan22, thanks for raising this issue! \r\n\r\nThis is happening because the value of `projection_dim` being using in the [CLIPVisionModelWithProjection](https://github.com/huggingface/transformers/blob/908f853688c4d523780797f27f83af3c10418e92/src/transformers/models/clip/modeling_clip.py#L1288) is different from the value being used in [CLIPModel](https://github.com/huggingface/transformers/blob/908f853688c4d523780797f27f83af3c10418e92/src/transformers/models/clip/modeling_clip.py#L995). \r\n\r\nFor CLIPModel, the value of `projection_dim` is 1280, which is set in the [model config here](https://huggingface.co/laion/CLIP-ViT-bigG-14-laion2B-39B-b160k/blob/8c7a3583335de4dba1b07182dbf81c75137ce67b/config.json#L9).\r\n\r\nFor CLIPVisionModelWithProjection, the value of `projection_dim` is 512, because it isn't specified in the model's [vision_config](https://huggingface.co/laion/CLIP-ViT-bigG-14-laion2B-39B-b160k/blob/8c7a3583335de4dba1b07182dbf81c75137ce67b/config.json#L95) and so takes the [default value](https://github.com/huggingface/transformers/blob/977b2f05d5697f33e51111e4834a127a9a76349f/src/transformers/models/clip/configuration_clip.py#L205C2-L205C2). This is used, instead of the value of in the CLIPModel config, because the configuration class for CLIPVisionModelWithProjection is [CLIPVisionConfig](https://github.com/huggingface/transformers/blob/977b2f05d5697f33e51111e4834a127a9a76349f/src/transformers/models/clip/modeling_clip.py#L1280C2-L1280C2). \r\n\r\nTo load the checkpoint without errors, you can specify the projection dim directly when using `from_pretrained`: \r\n\r\n```python\r\nclip_vision_model = CLIPVisionModelWithProjection.from_pretrained(\r\n \"laion/CLIP-ViT-bigG-14-laion2B-39B-b160k\", projection_dim=1280\r\n)\r\n```\r\n\r\n\r\n",
"@amyeroberts thanks for your reply, using the suggested way my code works without problems "
] | 1,692 | 1,692 | 1,692 |
NONE
| null |
### System Info
Hi, I am trying to load the pretrained image encoder from laion/CLIP-ViT-bigG-14-laion2B-39B-b160k using
```python
clip_vision_model = CLIPVisionModelWithProjection.from_pretrained("laion/CLIP-ViT-bigG-14-laion2B-39B-b160k")
```
and the following error appears
```
RuntimeError: Error(s) in loading state_dict for CLIPVisionModelWithProjection:
size mismatch for visual_projection.weight: copying a param with shape torch.Size([1280, 1664]) from checkpoint, the shape in current model is torch.Size([512, 1664]).
You may consider adding `ignore_mismatched_sizes=True` in the model `from_pretrained` method.
```
But I can load the pretrained image encoder using
```python
clip_model = CLIPModel.from_pretrained("laion/CLIP-ViT-bigG-14-laion2B-39B-b160k")
clip_vision_model_wo_proj = clip_model.vision_model
```
### Who can help?
@muellerzr @sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
As mentioned in the main body of the issue
### Expected behavior
successfully loading the pretrained image encoder without error
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25622/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25622/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25621
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25621/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25621/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25621/events
|
https://github.com/huggingface/transformers/issues/25621
| 1,858,516,152 |
I_kwDOCUB6oc5uxrS4
| 25,621 |
data2vec-audio returns different results with padded input
|
{
"login": "gau-nernst",
"id": 26946864,
"node_id": "MDQ6VXNlcjI2OTQ2ODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/26946864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gau-nernst",
"html_url": "https://github.com/gau-nernst",
"followers_url": "https://api.github.com/users/gau-nernst/followers",
"following_url": "https://api.github.com/users/gau-nernst/following{/other_user}",
"gists_url": "https://api.github.com/users/gau-nernst/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gau-nernst/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gau-nernst/subscriptions",
"organizations_url": "https://api.github.com/users/gau-nernst/orgs",
"repos_url": "https://api.github.com/users/gau-nernst/repos",
"events_url": "https://api.github.com/users/gau-nernst/events{/privacy}",
"received_events_url": "https://api.github.com/users/gau-nernst/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"I just noticed this in the documentation: https://huggingface.co/docs/transformers/model_doc/data2vec#transformers.Data2VecAudioModel.forward.attention_mask\r\n\r\n> For all models whose processor has config.return_attention_mask == False, such as [data2vec-audio-base](https://huggingface.co/facebook/data2vec-audio-base-960h), attention_mask should not be passed to avoid degraded performance when doing batched inference.\r\n\r\nDoes it mean the preprocessor config is wrong?",
"cc @sanchit-gandhi and @ylacombe ",
"Apologies @gau-nernst, this slipped the net previously! @ylacombe are you able to take a look? Would be worth running some side-by-side debugging with padded and un-padded to see if there's a divergence",
"Hey @gau-nernst,\r\nFirst of all, thanks for opening the issue!\r\n\r\nI've looked into the matter, and you rightfully highlighted two shortcomings:\r\n\r\n1. As you [suggested](https://github.com/huggingface/transformers/issues/25621#issuecomment-1685586842), the preprocessor config seems indeed wrong, since `attention_mask` should be passed through the data2vec encoder to ensure correctness. Only padding with zeros will definitely won't work. \r\n2. Outputs should definitely be the same. Hidden states start to be different at the beginning of the encoder `Data2VecAudioEncoder`. I will elaborate below.\r\n\r\nI've studied a bit more where the computation starts to differ, and it happens right [here](https://github.com/huggingface/transformers/blob/5a4f340df74b42b594aedf60199eea95cdb9bed0/src/transformers/models/data2vec/modeling_data2vec_audio.py#L578-L579), when computing positional embeddings.\r\nThis seems to be definitely the only difference, since outputs are the same when commenting those two lines.\r\n\r\nTo address this issue, we should thus:\r\n\r\n1. Correct the model [documentation](https://huggingface.co/docs/transformers/model_doc/data2vec#transformers.Data2VecAudioModel.forward.attention_mask) regarding ` Data2VecAudioModel.forward.attention_mask`.\r\n2. Correct the behavior of `Data2VecAudioPositionalConvEmbedding` and more probably of its inner `Data2VecAudioPositionalConvLayer` layers, so that padded inputs are correctly computed.\r\n\r\nThis could be a great PR for you @gau-nernst, WDYT on working on this ? Of course, I'll support you in the matter if you have any questions!\r\n",
"Thank you for the detailed investigation and explanation. Do you know why `Data2VecAudioPositionalConvLayer` computes differently for padded input? From what I understand, by default PyTorch's convolution uses zero padding, so zero-padded inputs should have the same outputs. And do you know if FairSeq's implementation has this problem?",
"As I understand it, when passing padding zeros through pytorch's `conv1D`, the padding zeros will not influence the output up to the length of the output sequence without the padding. Values after this length **will not** necessarily be zero.\r\n\r\nThis poses problems because: values after this length are then non zeros for the other `Data2VecAudioPositionalConvLayer` layers' conv1D so errors accumulate.\r\n\r\nNote that it wouldn't be a problem if there were only one `Data2VecAudioPositionalConvLayer` with no layernorm, since the rest of the encoder works with an attention mask. \r\n\r\n\r\n\r\n",
"I see, that makes sense. That's why Wav2Vec2 doesn't have this issue, since it uses only 1 convolution layer for positional encoding.\r\n\r\nI think the way to fix this is to fill the values after attention mask with zeros. This has to be done after every conv layers in positional encoding. Not sure if there is a more elegant way.\r\n\r\nAnother note. It seems like the original fairseq implementation also has the same problem (padded input will have different results), since it seems like they don't do any special processing (I haven't actually run the code to check). Not sure if we should deviate from the official implementation if that's the case. ",
"> I think the way to fix this is to fill the values after attention mask with zeros. This has to be done after every conv layers in positional encoding. Not sure if there is a more elegant way.\r\n\r\nThat's exactly what is done [here](https://discuss.pytorch.org/t/how-to-treat-variable-length-sequence-in-conv1d-layers/110242/5). \r\nAnd I would agree that's the way to go. \r\n\r\nI thing that we need to discuss it further with @sanchit-gandhi, since batching (and thus padding) seems to still give correct results in the [integration tests](https://github.com/huggingface/transformers/blob/5a4f340df74b42b594aedf60199eea95cdb9bed0/tests/models/data2vec/test_modeling_data2vec_audio.py#L730-L753).\r\n\r\nI think it could be interesting to experiment a bit with your solution and check if it gives correct and coherent solutions. Would you be ok to experiement with this ? you could pass the attention mask through those layers, or you could do something with the sequence lengths. And then you can compare results with what's in the [integration tests](https://github.com/huggingface/transformers/blob/5a4f340df74b42b594aedf60199eea95cdb9bed0/tests/models/data2vec/test_modeling_data2vec_audio.py#L730-L753).\r\n\r\n\r\n\r\n\r\n",
"Note that the integration tests use `\"facebook/data2vec-audio-base-960h\"` and not `\"facebook/data2vec-audio-base\"`.",
"The model is probably robust enough so that the final predictions are not affected.\r\n\r\nDo you have any thoughts about not replicating the exact fairseq implementation? This is the [fairseq code](https://github.com/facebookresearch/fairseq/blob/b5d89cddc9e4a0af831d2aafc1ba7dbf0f1b10d0/fairseq/models/wav2vec/wav2vec2.py#L1020-L1048) and the [config file](https://github.com/facebookresearch/fairseq/blob/b5d89cddc9e4a0af831d2aafc1ba7dbf0f1b10d0/examples/data2vec/config/audio/pretraining/base_librispeech.yaml#L73)",
"Hey @gau-nernst, I had the occasion to discuss the matter internally with @sanchit-gandhi and @patrickvonplaten, and here are our thoughts!\r\n\r\nAt the moment, we have strict equivalence with the fairseq implementation, which leads us to believe that the current behavior might be intended or that it is simply an oversight from their part. In any case, we'd like to keep the default behavior since it doesn't seem to impact so much the outputs according to the the [integration tests](https://github.com/huggingface/transformers/blob/5a4f340df74b42b594aedf60199eea95cdb9bed0/tests/models/data2vec/test_modeling_data2vec_audio.py#L730-L753)!\r\n\r\nHowever, if you are really interested in the matter, you can still drive a PR to correct this behavior, provided that we keep the default behavior by default, and provided that it is really useful in terms of quality! \r\nTo do so and if you want, you can test a bit more the different options, and even do a benchmark measuring WER degradations in the different setups (w/o batching and padding, w/ batching and padding with default behavior, and w/ batching and padding with the fix). See this [comment](https://github.com/huggingface/transformers/issues/21534#issuecomment-1434869771) for an example of how to measure WER.\r\nWould you like to do that ?\r\n\r\n\r\nBTW, could you also tell me the intended use of the model and how you encountered this problem? Many thanks! If you encountered this issue while fine-tuning the model, you might want to [group samples by length](https://huggingface.co/docs/transformers/main_classes/trainer#transformers.TrainingArguments.group_by_length), since it appears that your issue was over-amplified by a large padding-to-length ratio !\r\n",
"Sadly I don't have the bandwidth to do that experiment. I'm mainly using audio models for audio classification, so I'm interested in encoder-only models. I was evaluating which model to use, and found out the strange behaviour of different results for padded inputs for Wav2Vec2 Base and HuBERT Base models, which is due to the use of group norm. Then I tried to see if other models had this problem, thus found it for data2vec-audio.\r\n\r\nCurrently I don't use data2vec-audio models, since I think Wav2Vec2 XLS-R is much better thanks to its large pre-trained data.\r\n\r\nI believe the solution for now is to update the documentation\r\n1. Remove the warning about attention mask\r\n> For all models whose processor has config.return_attention_mask == False, such as [data2vec-audio-base](https://huggingface.co/facebook/data2vec-audio-base-960h), attention_mask should not be passed\r\n2. Add a warning that padded inputs will have different outputs, even with attention mask, due to the convolution layers in positional encodings.",
"@ylacombe What do you think of the solution I proposed above? I can submit a PR if you are ok with it. It's mainly documentation fix, since I won't have the bandwidth to do experiments with the model code.",
"Hey @gau-nernst, thanks for the remainder! it will be nice to have your contribution on a PR here! I agree with the solution you proposed for now, feel free to ping me on the PR!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,692 | 1,698 | 1,698 |
CONTRIBUTOR
| null |
### System Info
- `transformers` version: 4.31.0
- Platform: Linux-5.15.0-79-generic-x86_64-with-glibc2.31
- Python version: 3.10.9
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.1
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: NA
- Using distributed or parallel set-up in script?: NA
### Who can help?
@sanchit-gandhi
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
from transformers import AutoModel
import torch
import torch.nn.functional as F
name = "facebook/data2vec-audio-base"
model = AutoModel.from_pretrained(name).eval()
x = torch.randn(1, 16_000)
x = F.layer_norm(x, (16_000,))
out1 = model(x)
print(out1)
x_padded = torch.zeros(1, 20_000)
mask = torch.zeros(1, 20_000, dtype=torch.long)
x_padded[:, :16_000] = x
mask[:, :16_000] = 1
out2 = model(x_padded, mask)
print(out2)
length = out1.last_hidden_state.shape[1]
torch.testing.assert_close(out1.last_hidden_state, out2.last_hidden_state[:, :length])
```
`extract_features` is the same, but `last_hidden_state` is not.
### Expected behavior
The two outputs should be the same.
Note that when I change the model to `facebook/wav2vec2-xls-r-300m`, the outputs are identical. I would expect data2vec and wav2vec 2.0 to have similar behavior, since they have very similar architectures. A quick glance at the source code also indicates that there should be no reason why data2vec cannot use the attention mask correctly.
The preprocessor config here also indicates that the model should be able to use the attention mask:
https://huggingface.co/facebook/data2vec-audio-base/blob/main/preprocessor_config.json
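The comment thread later converges on zeroing out the positions beyond the attention mask after each positional convolution layer. A minimal sketch of that idea, assuming a single `Conv1d` whose output keeps the input length and a boolean mask (this is not the actual transformers implementation):
```python
import torch
import torch.nn as nn

def conv_with_masking(conv: nn.Conv1d, hidden_states: torch.Tensor, attention_mask: torch.Tensor):
    # hidden_states: (batch, channels, seq_len); attention_mask: (batch, seq_len).
    # Zero the padded positions before and after the convolution so that
    # padding values cannot leak into subsequent positional conv layers.
    mask = attention_mask.unsqueeze(1).to(hidden_states.dtype)
    hidden_states = conv(hidden_states * mask)
    return hidden_states * mask[..., : hidden_states.shape[-1]]
```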
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25621/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25621/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25620
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25620/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25620/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25620/events
|
https://github.com/huggingface/transformers/issues/25620
| 1,858,274,016 |
I_kwDOCUB6oc5uwwLg
| 25,620 |
TimeSeriesTransformerForPrediction model unused parameters Runtime error in Distributed environment
|
{
"login": "maeschbacher",
"id": 76120414,
"node_id": "MDQ6VXNlcjc2MTIwNDE0",
"avatar_url": "https://avatars.githubusercontent.com/u/76120414?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/maeschbacher",
"html_url": "https://github.com/maeschbacher",
"followers_url": "https://api.github.com/users/maeschbacher/followers",
"following_url": "https://api.github.com/users/maeschbacher/following{/other_user}",
"gists_url": "https://api.github.com/users/maeschbacher/gists{/gist_id}",
"starred_url": "https://api.github.com/users/maeschbacher/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/maeschbacher/subscriptions",
"organizations_url": "https://api.github.com/users/maeschbacher/orgs",
"repos_url": "https://api.github.com/users/maeschbacher/repos",
"events_url": "https://api.github.com/users/maeschbacher/events{/privacy}",
"received_events_url": "https://api.github.com/users/maeschbacher/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3081136536,
"node_id": "MDU6TGFiZWwzMDgxMTM2NTM2",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20Difficult%20Issue",
"name": "Good Difficult Issue",
"color": "684CC7",
"default": false,
"description": ""
}
] |
open
| false | null |
[] |
[
"cc @kashif ",
"thanks @maeschbacher having a look",
"I don't think @kashif will have time to look at this, so marking as Good Difficult issue. "
] | 1,692 | 1,704 | null |
NONE
| null |
### System Info
transformers==4.31
accelerate==0.21
I'm trying to run TimeSeriesTransformerForPrediction in a distributed environment based on the following notebook: https://huggingface.co/blog/time-series-transformers
If I run the notebook as-is, I get the following error immediately after the first loss is calculated:
```
Expected to have finished reduction in the prior iteration before starting a new one. This error indicates that your module has parameters that were not used in producing loss. You can enable unused parameter detection by passing the keyword argument `find_unused_parameters=True` to `torch.nn.parallel.DistributedDataParallel`, and by
making sure all `forward` function outputs participate in calculating loss.
If you already have done the above, then the distributed data parallel module wasn't able to locate the output tensors in the return value of your module's `forward` function. Please include the loss function and the structure of the return value of `forward` of your module when reporting this issue (e.g. list, dict, iterable).
Parameter indices which did not receive grad for rank 0: 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146
In addition, you can set the environment variable TORCH_DISTRIBUTED_DEBUG to either INFO or DETAIL to print out information about which particular parameters did not receive gradient on this rank as part of this error
```
If I modify the accelerator in the following way:
```python
ddp_kwargs = DistributedDataParallelKwargs(find_unused_parameters=True)
accelerator = Accelerator(kwargs_handlers=[ddp_kwargs])
```
It will usually run for many iterations, but it randomly fails with an error message similar to the one above.
The code appears to run fine in CPU mode or on a GPU in a non-distributed environment.
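As a generic debugging step (my suggestion, not something from this thread), listing which parameters received no gradient after a single backward pass usually points at the submodules DDP considers unused:
```python
def find_unused_parameters(model, batch):
    # Generic PyTorch debugging helper, assuming `batch` is the dict of model
    # inputs used in the notebook: parameters whose .grad is still None after
    # one backward pass are the "unused" parameters DDP complains about.
    model.zero_grad()
    outputs = model(**batch)
    outputs.loss.backward()
    return [name for name, p in model.named_parameters() if p.requires_grad and p.grad is None]
```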
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Run the following notebook in a distributed GPU environment: https://huggingface.co/blog/time-series-transformers
### Expected behavior
I would expect the model to train and perform `.backward(loss)` without error.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25620/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25620/timeline
| null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/25619
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25619/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25619/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25619/events
|
https://github.com/huggingface/transformers/pull/25619
| 1,858,258,486 |
PR_kwDOCUB6oc5YU_t1
| 25,619 |
Knowledge distillation for vision guide
|
{
"login": "merveenoyan",
"id": 53175384,
"node_id": "MDQ6VXNlcjUzMTc1Mzg0",
"avatar_url": "https://avatars.githubusercontent.com/u/53175384?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/merveenoyan",
"html_url": "https://github.com/merveenoyan",
"followers_url": "https://api.github.com/users/merveenoyan/followers",
"following_url": "https://api.github.com/users/merveenoyan/following{/other_user}",
"gists_url": "https://api.github.com/users/merveenoyan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/merveenoyan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/merveenoyan/subscriptions",
"organizations_url": "https://api.github.com/users/merveenoyan/orgs",
"repos_url": "https://api.github.com/users/merveenoyan/repos",
"events_url": "https://api.github.com/users/merveenoyan/events{/privacy}",
"received_events_url": "https://api.github.com/users/merveenoyan/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"@sayakpaul I changed the setup and didn't observe a lot of difference, but I felt like it would be still cool to show how to distill a model. WDYT?",
"cc @rafaelpadilla for reference",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25619). All of your documentation changes will be reflected on that endpoint.",
"@rafaelpadilla @NielsRogge can we merge this if this looks good? ",
"> @rafaelpadilla @NielsRogge can we merge this if this looks good?\r\n\r\nYes, it's OK to me. \r\nMy comments were merely about writing style",
"@LysandreJik can you give a review or ask for another reviewer if needed?",
"Please resolve the merge conflicts and merge @merveenoyan "
] | 1,692 | 1,697 | 1,697 |
CONTRIBUTOR
| null |
This is a draft PR that I opened in the past for the knowledge distillation guide for CV, but I accidentally removed my fork. I prioritized the TGI docs, so this PR might stay stale for a while; I will ask for a review after I iterate over the comments left by @sayakpaul in my previous PR (mainly training MobileNet with random initial weights rather than with pre-trained weights from transformers).
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25619/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25619/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25619",
"html_url": "https://github.com/huggingface/transformers/pull/25619",
"diff_url": "https://github.com/huggingface/transformers/pull/25619.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25619.patch",
"merged_at": 1697629352000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25618
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25618/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25618/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25618/events
|
https://github.com/huggingface/transformers/issues/25618
| 1,858,245,347 |
I_kwDOCUB6oc5uwpLj
| 25,618 |
AttributeError: 'CTCTrainer' object has no attribute 'scaler'
|
{
"login": "aprzez",
"id": 115702397,
"node_id": "U_kgDOBuV6fQ",
"avatar_url": "https://avatars.githubusercontent.com/u/115702397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aprzez",
"html_url": "https://github.com/aprzez",
"followers_url": "https://api.github.com/users/aprzez/followers",
"following_url": "https://api.github.com/users/aprzez/following{/other_user}",
"gists_url": "https://api.github.com/users/aprzez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aprzez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aprzez/subscriptions",
"organizations_url": "https://api.github.com/users/aprzez/orgs",
"repos_url": "https://api.github.com/users/aprzez/repos",
"events_url": "https://api.github.com/users/aprzez/events{/privacy}",
"received_events_url": "https://api.github.com/users/aprzez/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @sanchit-gandhi but for research projects in `examples` you should make sure you are using the pinned version of `transformers`. (also share the output of `transformers-cli env` when you submit an issue",
"Hey @aprzez - the examples script referenced is quite outdated and no longer maintained. It uses the Hugging Face Trainer **prior** to the recent `accelerate` upgrade in v4.30.0. As such, it is not guaranteed to work out of the box, since it requires updating for the new HF Trainer internals.\r\n\r\nI would recommend you either:\r\n1. Downgrade your `transformers` version to <4.30 (less optimal, you won't get the latest features from the most recent releases)\r\n2. Use the latest version of `transformers`, but with the **updated** and **maintained** examples script [`run_speech_recognition_ctc.py`](https://github.com/huggingface/transformers/tree/main/examples/pytorch/speech-recognition#connectionist-temporal-classification) (more optimal - we maintain this and ensure it works with the latest `transformers` versions!)",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,692 | 1,695 | 1,695 |
NONE
| null |
When attempting to fine-tune Wav2Vec2 XLSR on my own dataset, I get this error once training starts: "AttributeError: 'CTCTrainer' object has no attribute 'scaler'"
I followed the steps outlined here and as far as I can tell have all the necessary packages installed in my conda environment: https://github.com/huggingface/transformers/blob/main/examples/research_projects/wav2vec2/FINE_TUNE_XLSR_WAV2VEC2.md#local-machine
The only change to the run_common_voice.py file is that I am loading my own data from a CSV.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25618/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25618/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25617
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25617/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25617/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25617/events
|
https://github.com/huggingface/transformers/issues/25617
| 1,858,156,785 |
I_kwDOCUB6oc5uwTjx
| 25,617 |
Add Context AutoEncoder (CAE)
|
{
"login": "charlesCXK",
"id": 23420768,
"node_id": "MDQ6VXNlcjIzNDIwNzY4",
"avatar_url": "https://avatars.githubusercontent.com/u/23420768?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/charlesCXK",
"html_url": "https://github.com/charlesCXK",
"followers_url": "https://api.github.com/users/charlesCXK/followers",
"following_url": "https://api.github.com/users/charlesCXK/following{/other_user}",
"gists_url": "https://api.github.com/users/charlesCXK/gists{/gist_id}",
"starred_url": "https://api.github.com/users/charlesCXK/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/charlesCXK/subscriptions",
"organizations_url": "https://api.github.com/users/charlesCXK/orgs",
"repos_url": "https://api.github.com/users/charlesCXK/repos",
"events_url": "https://api.github.com/users/charlesCXK/events{/privacy}",
"received_events_url": "https://api.github.com/users/charlesCXK/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] |
open
| false | null |
[] |
[
"I would like to work towards adding this model to huggingface transformers @NielsRogge\r\n\r\n",
"cc @amyeroberts @rafaelpadilla ",
"Hi @charlesCXK, thanks for opening this model request! \r\n\r\nThe easiest and recommended way to make a model available in `transformers` is to add the modeling code directly on the hub: https://huggingface.co/docs/transformers/custom_models. This means, once working, the model can be found and used immediately without having to go through the PR process. We find this is a lot quicker as the bar for adding code into the library is high due to the maintenance cost of every new model, and so reviews take quite a while. "
] | 1,692 | 1,692 | null |
NONE
| null |
### Model description
**The corresponding paper has been accepted by International Journal of Computer Vision (IJCV).**
We present a novel masked image modeling (MIM) approach, context autoencoder (CAE), for self-supervised representation pretraining. We pretrain an encoder by making predictions in the encoded representation space. Pretraining consists of two tasks: masked representation prediction - predict the representations for the masked patches - and masked patch reconstruction - reconstruct the masked patches. The network is an encoder-regressor-decoder architecture: the encoder takes the visible patches as input; the regressor predicts the representations of the masked patches, which are expected to be aligned with the representations computed from the encoder, using the representations of visible patches and the positions of visible and masked patches; the decoder reconstructs the masked patches from the predicted encoded representations. The CAE design encourages separating the learning of the encoder (representation) from completing the pretraining tasks (masked representation prediction and masked patch reconstruction), and making predictions in the encoded representation space empirically benefits representation learning. We demonstrate the effectiveness of our CAE through superior transfer performance on downstream tasks: semantic segmentation, object detection and instance segmentation, and classification.
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
Code link: https://github.com/Atten4Vis/CAE
Author: @charlesCXK
Paper link: https://arxiv.org/abs/2202.03026
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25617/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25617/timeline
| null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/25616
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25616/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25616/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25616/events
|
https://github.com/huggingface/transformers/issues/25616
| 1,858,047,718 |
I_kwDOCUB6oc5uv47m
| 25,616 |
Deberta Model Dimension Mismatch
|
{
"login": "TOP-RX",
"id": 103393767,
"node_id": "U_kgDOBimp5w",
"avatar_url": "https://avatars.githubusercontent.com/u/103393767?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TOP-RX",
"html_url": "https://github.com/TOP-RX",
"followers_url": "https://api.github.com/users/TOP-RX/followers",
"following_url": "https://api.github.com/users/TOP-RX/following{/other_user}",
"gists_url": "https://api.github.com/users/TOP-RX/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TOP-RX/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TOP-RX/subscriptions",
"organizations_url": "https://api.github.com/users/TOP-RX/orgs",
"repos_url": "https://api.github.com/users/TOP-RX/repos",
"events_url": "https://api.github.com/users/TOP-RX/events{/privacy}",
"received_events_url": "https://api.github.com/users/TOP-RX/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[] | 1,692 | 1,692 | 1,692 |
NONE
| null |
### System Info
transformer: 4.24.0
python: 3.8.13
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Hello, I am trying to use Deberta_base to do a simple classification; here is my implementation:
Deberta:
```python
class DeBertaClassifier(nn.Module):
    def __init__(self, num_classes, dropout, pretrain):
        super(DeBertaClassifier, self).__init__()
        self.bert = DebertaModel.from_pretrained(pretrain)
        self.dropout = nn.Dropout(dropout)
        self.linear = nn.Linear(self.bert.config.hidden_size, num_classes)
        self.relu = nn.ReLU()
        self.dropout = nn.Dropout(dropout)

    def forward(self, input_id, mask):
        deberta_output = self.bert(input_ids=input_id, attention_mask=mask, return_dict=False, output_hidden_states=True)
        hidden_state = deberta_output[0]
        pooled_output = hidden_state[:, 0]
        dropout_output = self.dropout(pooled_output)
        linear_output = self.linear(dropout_output)
        return linear_output
```
Training function:
```python
def train(model, train_X, train_y, val_X, val_y, learning_rate, epochs, batch_size, adam_epsilon, model_dir):
    train, val = Dataset(train_X, train_y), Dataset(val_X, val_y)
    train_dataloader = torch.utils.data.DataLoader(train, batch_size=batch_size, shuffle=True)
    val_dataloader = torch.utils.data.DataLoader(val, batch_size=batch_size)
    use_cuda = torch.cuda.is_available()
    device = torch.device("cuda:3" if use_cuda else "cpu")
    criterion = nn.CrossEntropyLoss()
    optimizer = AdamW(model.parameters(), lr=learning_rate, eps=adam_epsilon)
    model = model.to(device)
    criterion = criterion.to(device)
    valid_acc = 0
    for epoch_num in range(epochs):
        total_acc_train = 0
        total_loss_train = 0
        for train_input, train_label in tqdm(train_dataloader):
            train_label = train_label.to(device)
            mask = train_input['attention_mask'].to(device)
            input_id = train_input['input_ids'].squeeze(1).to(device)
            output = model(input_id, mask)
            batch_loss = criterion(output, train_label)
            total_loss_train += batch_loss.item()
            acc = (output.argmax(dim=1) == train_label).sum().item()
            total_acc_train += acc
            model.zero_grad()
            batch_loss.backward()
            optimizer.step()
        total_acc_val = 0
        total_loss_val = 0
        with torch.no_grad():
            for val_input, val_label in val_dataloader:
                val_label = val_label.to(device)
                mask = val_input['attention_mask'].to(device)
                input_id = val_input['input_ids'].squeeze(1).to(device)
                output = model(input_id, mask)
                batch_loss = criterion(output, val_label)
                total_loss_val += batch_loss.item()
                acc = (output.argmax(dim=1) == val_label).sum().item()
                total_acc_val += acc
        print(
            f'Epochs: {epoch_num + 1} | Train Loss: {total_loss_train / len(train_X): .4f} \
            | Train Accuracy: {total_acc_train / len(train_X): .4f} \
            | Val Loss: {total_loss_val / len(val_X): .4f} \
            | Val Accuracy: {total_acc_val / len(val_X): .4f}')
```
```python
model_path = r'/Deberta_base'
tokenizer = AutoTokenizer.from_pretrained(model_path)  # the downloaded model
X = [tokenizer(text, padding='max_length', max_length=128,
               truncation=True, return_tensors="pt")
     for text in node_text_list]
# Train the model
model = DeBertaClassifier(num_classes=torch.max(y)+1, dropout=args.dropout, pretrain=args.pretrain)
train(model, [X[i] for i in train_idx], y[train_idx], [X[i] for i in valid_idx], y[valid_idx],
      args.learning_rate, args.epochs, args.batch_size, args.adam_epsilon, args.model_dir)
```
I have checked that the dimension of `input_ids` is [128, 1, 128] and of `attention_mask` is [128, 128], given batch size = 128. However, it reports the following error:
```
Cell In[28], line 31, in train(model, train_X, train_y, val_X, val_y, learning_rate, epochs, batch_size, adam_epsilon, model_dir)
28 mask = train_input['attention_mask'].to(device)
29 input_id = train_input['input_ids'].squeeze(1).to(device)
---> 31 output = model(input_id, mask)
33 batch_loss = criterion(output, train_label)
34 total_loss_train += batch_loss.item()
File [~/anaconda3/envs/giant-xrt/lib/python3.8/site-packages/torch/nn/modules/module.py:1130](https://vscode-remote+ssh-002dremote-002bgr.vscode-resource.vscode-cdn.net/egr/research-dselab/hanhaoy1/proc_data_xrt/LM/~/anaconda3/envs/giant-xrt/lib/python3.8/site-packages/torch/nn/modules/module.py:1130), in Module._call_impl(self, *input, **kwargs)
1126 # If we don't have any hooks, we want to skip the rest of the logic in
1127 # this function, and just call forward.
...
--> 826 embeddings = embeddings * mask
828 embeddings = self.dropout(embeddings)
829 return embeddings
RuntimeError: The size of tensor a (768) must match the size of tensor b (128) at non-singleton dimension 2
```
The error happens at `deberta_output = self.bert(input_ids=input_id, attention_mask=mask, return_dict=False, output_hidden_states=True)`.
I am wondering if I missed something. It does not happen when I use BERT. Any help would be appreciated.
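One possible cause (an assumption on my part, not confirmed in this thread): if the attention mask actually reaches the model as `[128, 1, 128]` rather than `[128, 128]` (the tokenizer returns `[1, 128]` tensors per text, and the DataLoader stacks them), DeBERTa's embedding-level multiplication `embeddings * mask` would broadcast incorrectly and produce exactly this size mismatch at dimension 2. Squeezing the mask the same way as `input_ids` would rule that out:
```python
# Hypothetical check, not a confirmed fix: make sure the mask really is 2-D
# before it reaches the model, mirroring the squeeze applied to input_ids.
mask = train_input['attention_mask'].squeeze(1).to(device)
input_id = train_input['input_ids'].squeeze(1).to(device)
output = model(input_id, mask)
```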
### Expected behavior
Dimensions match correctly.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25616/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25616/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25615
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25615/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25615/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25615/events
|
https://github.com/huggingface/transformers/issues/25615
| 1,857,947,996 |
I_kwDOCUB6oc5uvglc
| 25,615 |
Huggingface Transformers; Polyglot-12.8b (GPT-Neox); You may consider adding `ignore_mismatched_sizes=True` in the model `from_pretrained`
|
{
"login": "codingchild2424",
"id": 45235027,
"node_id": "MDQ6VXNlcjQ1MjM1MDI3",
"avatar_url": "https://avatars.githubusercontent.com/u/45235027?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/codingchild2424",
"html_url": "https://github.com/codingchild2424",
"followers_url": "https://api.github.com/users/codingchild2424/followers",
"following_url": "https://api.github.com/users/codingchild2424/following{/other_user}",
"gists_url": "https://api.github.com/users/codingchild2424/gists{/gist_id}",
"starred_url": "https://api.github.com/users/codingchild2424/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/codingchild2424/subscriptions",
"organizations_url": "https://api.github.com/users/codingchild2424/orgs",
"repos_url": "https://api.github.com/users/codingchild2424/repos",
"events_url": "https://api.github.com/users/codingchild2424/events{/privacy}",
"received_events_url": "https://api.github.com/users/codingchild2424/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hey! Not sure we can debug your script for you. You should check your saved models and make sure the configurations are alright. Since you modified the script, you might have better luck on [the forum](https://discuss.huggingface.co/) "
] | 1,692 | 1,692 | 1,692 |
NONE
| null |
### System Info
[What I used]
1. Polyglot-12.8b (GPT-NeoX based, https://huggingface.co/EleutherAI/polyglot-ko-12.8b)
2. transformers version: 4.32.0.dev0
3. trainer: transformers run_clm_no_trainer (accelerate) (https://github.com/huggingface/transformers/blob/main/examples/pytorch/language-modeling/run_clm_no_trainer.py)
4. used DeepSpeed ZeRO-3
5. I added ignore_mismatched_sizes=True
```python
model = AutoModelForCausalLM.from_pretrained(
    args.model_name_or_path,
    from_tf=bool(".ckpt" in args.model_name_or_path),
    config=config,
    low_cpu_mem_usage=args.low_cpu_mem_usage,
    ignore_mismatched_sizes=True  # added
)
```
[What I did]
1. Fine-tuned Polyglot; it worked.
2. When re-fine-tuning the model from step 1, the error below occurred.
```
size mismatch for gpt_neox.layers.38.mlp.dense_h_to_4h.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([20480, 5120]).
size mismatch for gpt_neox.layers.38.mlp.dense_4h_to_h.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([5120, 20480]).
size mismatch for gpt_neox.layers.39.attention.query_key_value.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([15360, 5120]).
size mismatch for gpt_neox.layers.39.attention.dense.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([5120, 5120]).
size mismatch for gpt_neox.layers.39.mlp.dense_h_to_4h.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([20480, 5120]).
size mismatch for gpt_neox.layers.39.mlp.dense_4h_to_h.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([5120, 20480]).
size mismatch for embed_out.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([30003, 5120]).
You may consider adding `ignore_mismatched_sizes=True` in the model `from_pretrained` method.
```
My transformers version is the latest, and I already pass `ignore_mismatched_sizes=True`, but this error still occurs.
Does anyone know the solution to this?
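A hedged guess, not confirmed in this thread: parameters reported with shape `torch.Size([0])` are typical of a DeepSpeed ZeRO-3 checkpoint that was saved in partitioned form. In that case, consolidating the weights before reloading can help, for example with the utility DeepSpeed ships for this purpose (the checkpoint path below is a placeholder):
```python
# Assumption-based sketch: consolidate a ZeRO-3 partitioned checkpoint into
# full fp32 weights before loading it again with from_pretrained/load_state_dict.
from deepspeed.utils.zero_to_fp32 import load_state_dict_from_zero_checkpoint

model = load_state_dict_from_zero_checkpoint(model, "path/to/saved/checkpoint")
```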
### Who can help?
@ArthurZucker
@younesbelkada
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```python
#!/usr/bin/env python
# coding=utf-8
# Copyright 2021 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Fine-tuning the library models for causal language modeling (GPT, GPT-2, CTRL, ...)
on a text file or a dataset without using HuggingFace Trainer.
Here is the full list of checkpoints on the hub that can be fine-tuned by this script:
https://huggingface.co/models?filter=text-generation
"""
# You can also adapt this script on your own causal language modeling task. Pointers for this are left as comments.
import argparse
import json
import logging
import math
import os
import random
from itertools import chain
from pathlib import Path
import datasets
import torch
from accelerate import Accelerator, DistributedType
from accelerate.logging import get_logger
from accelerate.utils import set_seed
# add for time out
# https://github.com/huggingface/accelerate/issues/314
from accelerate import InitProcessGroupKwargs
from datetime import timedelta
from datasets import load_dataset
from huggingface_hub import Repository, create_repo
from torch.utils.data import DataLoader
from tqdm.auto import tqdm
import transformers
from transformers import (
CONFIG_MAPPING,
MODEL_MAPPING,
AutoConfig,
AutoModelForCausalLM,
AutoTokenizer,
SchedulerType,
default_data_collator,
get_scheduler,
)
from transformers.utils import check_min_version, send_example_telemetry
from transformers.utils.versions import require_version
# Will error if the minimal version of Transformers is not installed. Remove at your own risks.
check_min_version("4.32.0.dev0")
logger = get_logger(__name__)
require_version("datasets>=1.8.0", "To fix: pip install -r examples/pytorch/language-modeling/requirements.txt")
MODEL_CONFIG_CLASSES = list(MODEL_MAPPING.keys())
MODEL_TYPES = tuple(conf.model_type for conf in MODEL_CONFIG_CLASSES)
def parse_args():
    parser = argparse.ArgumentParser(description="Finetune a transformers model on a causal language modeling task")
    parser.add_argument(
        "--dataset_name",
        type=str,
        default=None,
        help="The name of the dataset to use (via the datasets library).",
    )
    parser.add_argument(
        "--dataset_config_name",
        type=str,
        default=None,
        help="The configuration name of the dataset to use (via the datasets library).",
    )
    parser.add_argument(
        "--train_file", type=str, default=None, help="A csv or a json file containing the training data."
    )
    parser.add_argument(
        "--validation_file", type=str, default=None, help="A csv or a json file containing the validation data."
    )
    parser.add_argument(
        "--validation_split_percentage",
        default=5,
        help="The percentage of the train set used as validation set in case there's no validation split",
    )
    parser.add_argument(
        "--model_name_or_path",
        type=str,
        help="Path to pretrained model or model identifier from huggingface.co/models.",
        required=False,
    )
    parser.add_argument(
        "--config_name",
        type=str,
        default=None,
        help="Pretrained config name or path if not the same as model_name",
    )
    parser.add_argument(
        "--tokenizer_name",
        type=str,
        default=None,
        help="Pretrained tokenizer name or path if not the same as model_name",
    )
    parser.add_argument(
        "--use_slow_tokenizer",
        action="store_true",
        help="If passed, will use a slow tokenizer (not backed by the 🤗 Tokenizers library).",
    )
    parser.add_argument(
        "--per_device_train_batch_size",
        type=int,
        default=8,
        help="Batch size (per device) for the training dataloader.",
    )
    parser.add_argument(
        "--per_device_eval_batch_size",
        type=int,
        default=8,
        help="Batch size (per device) for the evaluation dataloader.",
    )
    parser.add_argument(
        "--learning_rate",
        type=float,
        default=5e-5,
        help="Initial learning rate (after the potential warmup period) to use.",
    )
    parser.add_argument("--weight_decay", type=float, default=0.0, help="Weight decay to use.")
    parser.add_argument("--num_train_epochs", type=int, default=3, help="Total number of training epochs to perform.")
    parser.add_argument(
        "--max_train_steps",
        type=int,
        default=None,
        help="Total number of training steps to perform. If provided, overrides num_train_epochs.",
    )
    parser.add_argument(
        "--gradient_accumulation_steps",
        type=int,
        default=1,
        help="Number of updates steps to accumulate before performing a backward/update pass.",
    )
    parser.add_argument(
        "--lr_scheduler_type",
        type=SchedulerType,
        default="linear",
        help="The scheduler type to use.",
        choices=["linear", "cosine", "cosine_with_restarts", "polynomial", "constant", "constant_with_warmup"],
    )
    parser.add_argument(
        "--num_warmup_steps", type=int, default=0, help="Number of steps for the warmup in the lr scheduler."
    )
    parser.add_argument("--output_dir", type=str, default=None, help="Where to store the final model.")
    parser.add_argument("--seed", type=int, default=None, help="A seed for reproducible training.")
    parser.add_argument(
        "--model_type",
        type=str,
        default=None,
        help="Model type to use if training from scratch.",
        choices=MODEL_TYPES,
    )
    parser.add_argument(
        "--block_size",
        type=int,
        default=None,
        help=(
            "Optional input sequence length after tokenization. The training dataset will be truncated in block of"
            " this size for training. Default to the model max input length for single sentence inputs (take into"
            " account special tokens)."
        ),
    )
    parser.add_argument(
        "--preprocessing_num_workers",
        type=int,
        default=None,
        help="The number of processes to use for the preprocessing.",
    )
    parser.add_argument(
        "--overwrite_cache", action="store_true", help="Overwrite the cached training and evaluation sets"
    )
    parser.add_argument(
        "--no_keep_linebreaks", action="store_true", help="Do not keep line breaks when using TXT files."
    )
    parser.add_argument("--push_to_hub", action="store_true", help="Whether or not to push the model to the Hub.")
    parser.add_argument(
        "--hub_model_id", type=str, help="The name of the repository to keep in sync with the local `output_dir`."
    )
    parser.add_argument("--hub_token", type=str, help="The token to use to push to the Model Hub.")
    parser.add_argument(
        "--checkpointing_steps",
        type=str,
        default=None,
        help="Whether the various states should be saved at the end of every n steps, or 'epoch' for each epoch.",
    )
    parser.add_argument(
        "--resume_from_checkpoint",
        type=str,
        default=None,
        help="If the training should continue from a checkpoint folder.",
    )
    parser.add_argument(
        "--with_tracking",
        action="store_true",
        help="Whether to enable experiment trackers for logging.",
    )
    parser.add_argument(
        "--report_to",
        type=str,
        default="all",
        help=(
            'The integration to report the results and logs to. Supported platforms are `"tensorboard"`,'
            ' `"wandb"`, `"comet_ml"` and `"clearml"`. Use `"all"` (default) to report to all integrations.'
            "Only applicable when `--with_tracking` is passed."
        ),
    )
    parser.add_argument(
        "--low_cpu_mem_usage",
        action="store_true",
        help=(
            "It is an option to create the model as an empty shell, then only materialize its parameters when the pretrained weights are loaded."
            "If passed, LLM loading time and RAM consumption will be benefited."
        ),
    )
    ###########################################################
    # for rope_scaling, https://github.com/huggingface/transformers/pull/24653
    ###########################################################
    # parser.add_argument(
    #     "--use_rope_scaling",
    #     type=bool,
    #     default=False
    # )
    # parser.add_argument(
    #     "--rope_scaling",
    #     type=dict,
    #     default={"type": "dynamic", "factor": 2.0}
    # )
    args = parser.parse_args()
    # Sanity checks
    if args.dataset_name is None and args.train_file is None and args.validation_file is None:
        raise ValueError("Need either a dataset name or a training/validation file.")
    else:
        if args.train_file is not None:
            extension = args.train_file.split(".")[-1]
            assert extension in ["csv", "json", "txt"], "`train_file` should be a csv, json or txt file."
        if args.validation_file is not None:
            extension = args.validation_file.split(".")[-1]
assert extension in ["csv", "json", "txt"], "`validation_file` should be a csv, json or txt file."
if args.push_to_hub:
assert args.output_dir is not None, "Need an `output_dir` to create a repo when `--push_to_hub` is passed."
return args
def main():
args = parse_args()
# Sending telemetry. Tracking the example usage helps us better allocate resources to maintain them. The
# information sent is the one passed as arguments along with your Python/PyTorch versions.
send_example_telemetry("run_clm_no_trainer", args)
# Initialize the accelerator. We will let the accelerator handle device placement for us in this example.
# If we're using tracking, we also need to initialize it here and it will by default pick up all supported trackers
# in the environment
accelerator_log_kwargs = {}
if args.with_tracking:
accelerator_log_kwargs["log_with"] = args.report_to
accelerator_log_kwargs["project_dir"] = args.output_dir
# add for timeout!!!
# https://github.com/huggingface/accelerate/issues/314
ipg_handler = InitProcessGroupKwargs(
timeout=timedelta(seconds=7400) # 5400 -> 7400
)
accelerator = Accelerator(
kwargs_handlers=[ipg_handler],
gradient_accumulation_steps=args.gradient_accumulation_steps,
**accelerator_log_kwargs
)
# Make one log on every process with the configuration for debugging.
logging.basicConfig(
format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
datefmt="%m/%d/%Y %H:%M:%S",
level=logging.INFO,
)
logger.info(accelerator.state, main_process_only=False)
if accelerator.is_local_main_process:
datasets.utils.logging.set_verbosity_warning()
transformers.utils.logging.set_verbosity_info()
else:
datasets.utils.logging.set_verbosity_error()
transformers.utils.logging.set_verbosity_error()
# If passed along, set the training seed now.
if args.seed is not None:
set_seed(args.seed)
# Handle the repository creation
if accelerator.is_main_process:
if args.push_to_hub:
# Retrieve or infer repo_name
repo_name = args.hub_model_id
if repo_name is None:
repo_name = Path(args.output_dir).absolute().name
# Create repo and retrieve repo_id
repo_id = create_repo(repo_name, exist_ok=True, token=args.hub_token).repo_id
# Clone repo locally
repo = Repository(args.output_dir, clone_from=repo_id, token=args.hub_token)
with open(os.path.join(args.output_dir, ".gitignore"), "w+") as gitignore:
if "step_*" not in gitignore:
gitignore.write("step_*\n")
if "epoch_*" not in gitignore:
gitignore.write("epoch_*\n")
elif args.output_dir is not None:
os.makedirs(args.output_dir, exist_ok=True)
accelerator.wait_for_everyone()
# Get the datasets: you can either provide your own CSV/JSON/TXT training and evaluation files (see below)
# or just provide the name of one of the public datasets available on the hub at https://huggingface.co/datasets/
# (the dataset will be downloaded automatically from the datasets Hub).
#
# For CSV/JSON files, this script will use the column called 'text' or the first column if no column called
# 'text' is found. You can easily tweak this behavior (see below).
#
# In distributed training, the load_dataset function guarantees that only one local process can concurrently
# download the dataset.
if args.dataset_name is not None:
# Downloading and loading a dataset from the hub.
raw_datasets = load_dataset(args.dataset_name, args.dataset_config_name)
if "validation" not in raw_datasets.keys():
raw_datasets["validation"] = load_dataset(
args.dataset_name,
args.dataset_config_name,
split=f"train[:{args.validation_split_percentage}%]",
)
raw_datasets["train"] = load_dataset(
args.dataset_name,
args.dataset_config_name,
split=f"train[{args.validation_split_percentage}%:]",
)
else:
data_files = {}
dataset_args = {}
if args.train_file is not None:
data_files["train"] = args.train_file
if args.validation_file is not None:
data_files["validation"] = args.validation_file
extension = args.train_file.split(".")[-1]
if extension == "txt":
extension = "text"
dataset_args["keep_linebreaks"] = not args.no_keep_linebreaks
raw_datasets = load_dataset(extension, data_files=data_files, **dataset_args)
# If no validation data is there, validation_split_percentage will be used to divide the dataset.
if "validation" not in raw_datasets.keys():
raw_datasets["validation"] = load_dataset(
extension,
data_files=data_files,
split=f"train[:{args.validation_split_percentage}%]",
**dataset_args,
)
raw_datasets["train"] = load_dataset(
extension,
data_files=data_files,
split=f"train[{args.validation_split_percentage}%:]",
**dataset_args,
)
# See more about loading any type of standard or custom dataset (from files, python dict, pandas DataFrame, etc) at
# https://huggingface.co/docs/datasets/loading_datasets.html.
# Load pretrained model and tokenizer
#
# In distributed training, the .from_pretrained methods guarantee that only one local process can concurrently
# download model & vocab.
if args.config_name:
config = AutoConfig.from_pretrained(args.config_name)
elif args.model_name_or_path:
config = AutoConfig.from_pretrained(args.model_name_or_path)
else:
config = CONFIG_MAPPING[args.model_type]()
logger.warning("You are instantiating a new config instance from scratch.")
if args.tokenizer_name:
tokenizer = AutoTokenizer.from_pretrained(
args.tokenizer_name,
use_fast=not args.use_slow_tokenizer
)
elif args.model_name_or_path:
tokenizer = AutoTokenizer.from_pretrained(
args.model_name_or_path,
use_fast=not args.use_slow_tokenizer
)
else:
raise ValueError(
"You are instantiating a new tokenizer from scratch. This is not supported by this script."
"You can do it from another script, save it, and load it from here, using --tokenizer_name."
)
if args.model_name_or_path:
# if args.use_rope_scaling:
# model = AutoModelForCausalLM.from_pretrained(
# args.model_name_or_path,
# from_tf=bool(".ckpt" in args.model_name_or_path),
# config=config,
# low_cpu_mem_usage=args.low_cpu_mem_usage,
# rope_scaling={"type": "dynamic", "factor": 2.0}
# )
# else:
model = AutoModelForCausalLM.from_pretrained(
args.model_name_or_path,
from_tf=bool(".ckpt" in args.model_name_or_path),
config=config,
#config=AutoConfig.from_pretrained(args.config_name),
low_cpu_mem_usage=args.low_cpu_mem_usage,
ignore_mismatched_sizes=True # added
)
else:
logger.info("Training new model from scratch")
model = AutoModelForCausalLM.from_config(config)
# We resize the embeddings only when necessary to avoid index errors. If you are creating a model from scratch
# on a small vocab and want a smaller embedding size, remove this test.
embedding_size = model.get_input_embeddings().weight.shape[0]
if len(tokenizer) > embedding_size:
model.resize_token_embeddings(len(tokenizer))
# Preprocessing the datasets.
# First we tokenize all the texts.
column_names = raw_datasets["train"].column_names
text_column_name = "text" if "text" in column_names else column_names[0]
def tokenize_function(examples):
return tokenizer(examples[text_column_name])
with accelerator.main_process_first():
tokenized_datasets = raw_datasets.map(
tokenize_function,
batched=True,
num_proc=args.preprocessing_num_workers,
remove_columns=column_names,
load_from_cache_file=not args.overwrite_cache,
desc="Running tokenizer on dataset",
)
if args.block_size is None:
block_size = tokenizer.model_max_length
if block_size > 1024:
logger.warning(
"The chosen tokenizer supports a `model_max_length` that is longer than the default `block_size` value"
" of 1024. If you would like to use a longer `block_size` up to `tokenizer.model_max_length` you can"
" override this default with `--block_size xxx`."
)
block_size = 1024
else:
if args.block_size > tokenizer.model_max_length:
logger.warning(
f"The block_size passed ({args.block_size}) is larger than the maximum length for the model"
f"({tokenizer.model_max_length}). Using block_size={tokenizer.model_max_length}."
)
block_size = min(args.block_size, tokenizer.model_max_length)
# Main data processing function that will concatenate all texts from our dataset and generate chunks of block_size.
def group_texts(examples):
# Concatenate all texts.
concatenated_examples = {k: list(chain(*examples[k])) for k in examples.keys()}
total_length = len(concatenated_examples[list(examples.keys())[0]])
# We drop the small remainder, and if the total_length < block_size we exclude this batch and return an empty dict.
# We could add padding if the model supported it instead of this drop, you can customize this part to your needs.
total_length = (total_length // block_size) * block_size
# Split by chunks of max_len.
result = {
k: [t[i : i + block_size] for i in range(0, total_length, block_size)]
for k, t in concatenated_examples.items()
}
result["labels"] = result["input_ids"].copy()
return result
# Note that with `batched=True`, this map processes 1,000 texts together, so group_texts throws away a remainder
# for each of those groups of 1,000 texts. You can adjust that batch_size here but a higher value might be slower
# to preprocess.
#
# To speed up this part, we use multiprocessing. See the documentation of the map method for more information:
# https://huggingface.co/docs/datasets/package_reference/main_classes.html#datasets.Dataset.map
with accelerator.main_process_first():
lm_datasets = tokenized_datasets.map(
group_texts,
batched=True,
num_proc=args.preprocessing_num_workers,
load_from_cache_file=not args.overwrite_cache,
desc=f"Grouping texts in chunks of {block_size}",
)
train_dataset = lm_datasets["train"]
eval_dataset = lm_datasets["validation"]
# Log a few random samples from the training set:
for index in random.sample(range(len(train_dataset)), 3):
logger.info(f"Sample {index} of the training set: {train_dataset[index]}.")
# DataLoaders creation:
train_dataloader = DataLoader(
train_dataset, shuffle=True, collate_fn=default_data_collator, batch_size=args.per_device_train_batch_size
)
eval_dataloader = DataLoader(
eval_dataset, collate_fn=default_data_collator, batch_size=args.per_device_eval_batch_size
)
# Optimizer
# Split weights in two groups, one with weight decay and the other not.
no_decay = ["bias", "layer_norm.weight"]
optimizer_grouped_parameters = [
{
"params": [p for n, p in model.named_parameters() if not any(nd in n for nd in no_decay)],
"weight_decay": args.weight_decay,
},
{
"params": [p for n, p in model.named_parameters() if any(nd in n for nd in no_decay)],
"weight_decay": 0.0,
},
]
optimizer = torch.optim.AdamW(optimizer_grouped_parameters, lr=args.learning_rate)
# Scheduler and math around the number of training steps.
overrode_max_train_steps = False
num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps)
if args.max_train_steps is None:
args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch
overrode_max_train_steps = True
lr_scheduler = get_scheduler(
name=args.lr_scheduler_type,
optimizer=optimizer,
num_warmup_steps=args.num_warmup_steps * args.gradient_accumulation_steps,
num_training_steps=args.max_train_steps * args.gradient_accumulation_steps,
)
# Prepare everything with our `accelerator`.
model, optimizer, train_dataloader, eval_dataloader, lr_scheduler = accelerator.prepare(
model, optimizer, train_dataloader, eval_dataloader, lr_scheduler
)
# On TPU, the tie weights in our model have been disconnected, so we need to restore the ties.
if accelerator.distributed_type == DistributedType.TPU:
model.tie_weights()
# We need to recalculate our total training steps as the size of the training dataloader may have changed.
num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps)
if overrode_max_train_steps:
args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch
# Afterwards we recalculate our number of training epochs
args.num_train_epochs = math.ceil(args.max_train_steps / num_update_steps_per_epoch)
# Figure out how many steps we should save the Accelerator states
checkpointing_steps = args.checkpointing_steps
if checkpointing_steps is not None and checkpointing_steps.isdigit():
checkpointing_steps = int(checkpointing_steps)
# We need to initialize the trackers we use, and also store our configuration.
# The trackers initialize automatically on the main process.
if args.with_tracking:
experiment_config = vars(args)
# TensorBoard cannot log Enums, need the raw value
experiment_config["lr_scheduler_type"] = experiment_config["lr_scheduler_type"].value
accelerator.init_trackers("clm_no_trainer", experiment_config)
# Train!
total_batch_size = args.per_device_train_batch_size * accelerator.num_processes * args.gradient_accumulation_steps
logger.info("***** Running training *****")
logger.info(f" Num examples = {len(train_dataset)}")
logger.info(f" Num Epochs = {args.num_train_epochs}")
logger.info(f" Instantaneous batch size per device = {args.per_device_train_batch_size}")
logger.info(f" Total train batch size (w. parallel, distributed & accumulation) = {total_batch_size}")
logger.info(f" Gradient Accumulation steps = {args.gradient_accumulation_steps}")
logger.info(f" Total optimization steps = {args.max_train_steps}")
# Only show the progress bar once on each machine.
progress_bar = tqdm(range(args.max_train_steps), disable=not accelerator.is_local_main_process)
completed_steps = 0
starting_epoch = 0
# Potentially load in the weights and states from a previous save
if args.resume_from_checkpoint:
if args.resume_from_checkpoint is not None or args.resume_from_checkpoint != "":
accelerator.print(f"Resumed from checkpoint: {args.resume_from_checkpoint}")
accelerator.load_state(args.resume_from_checkpoint)
path = os.path.basename(args.resume_from_checkpoint)
else:
# Get the most recent checkpoint
dirs = [f.name for f in os.scandir(os.getcwd()) if f.is_dir()]
dirs.sort(key=os.path.getctime)
path = dirs[-1] # Sorts folders by date modified, most recent checkpoint is the last
# Extract `epoch_{i}` or `step_{i}`
training_difference = os.path.splitext(path)[0]
if "epoch" in training_difference:
starting_epoch = int(training_difference.replace("epoch_", "")) + 1
resume_step = None
completed_steps = starting_epoch * num_update_steps_per_epoch
else:
# need to multiply `gradient_accumulation_steps` to reflect real steps
resume_step = int(training_difference.replace("step_", "")) * args.gradient_accumulation_steps
starting_epoch = resume_step // len(train_dataloader)
resume_step -= starting_epoch * len(train_dataloader)
completed_steps = resume_step // args.gradient_accumulation_steps
# update the progress_bar if load from checkpoint
progress_bar.update(completed_steps)
for epoch in range(starting_epoch, args.num_train_epochs):
model.train()
if args.with_tracking:
total_loss = 0
if args.resume_from_checkpoint and epoch == starting_epoch and resume_step is not None:
# We skip the first `n` batches in the dataloader when resuming from a checkpoint
active_dataloader = accelerator.skip_first_batches(train_dataloader, resume_step)
else:
active_dataloader = train_dataloader
for step, batch in enumerate(active_dataloader):
with accelerator.accumulate(model):
# Temporarily remove the token_type_ids key
batch.pop('token_type_ids', None)
outputs = model(**batch)
loss = outputs.loss
# We keep track of the loss at each epoch
if args.with_tracking:
total_loss += loss.detach().float()
accelerator.backward(loss)
optimizer.step()
lr_scheduler.step()
optimizer.zero_grad()
# Checks if the accelerator has performed an optimization step behind the scenes
if accelerator.sync_gradients:
progress_bar.update(1)
completed_steps += 1
if isinstance(checkpointing_steps, int):
if completed_steps % checkpointing_steps == 0:
output_dir = f"step_{completed_steps }"
if args.output_dir is not None:
output_dir = os.path.join(args.output_dir, output_dir)
accelerator.save_state(output_dir)
if completed_steps >= args.max_train_steps:
break
model.eval()
losses = []
for step, batch in enumerate(eval_dataloader):
with torch.no_grad():
# Temporarily remove the token_type_ids key
batch.pop('token_type_ids', None)
outputs = model(**batch)
loss = outputs.loss
losses.append(accelerator.gather_for_metrics(loss.repeat(args.per_device_eval_batch_size)))
losses = torch.cat(losses)
try:
eval_loss = torch.mean(losses)
perplexity = math.exp(eval_loss)
except OverflowError:
perplexity = float("inf")
logger.info(f"epoch {epoch}: perplexity: {perplexity} eval_loss: {eval_loss}")
if args.with_tracking:
accelerator.log(
{
"perplexity": perplexity,
"eval_loss": eval_loss,
"train_loss": total_loss.item() / len(train_dataloader),
"epoch": epoch,
"step": completed_steps,
},
step=completed_steps,
)
if args.push_to_hub and epoch < args.num_train_epochs - 1:
accelerator.wait_for_everyone()
unwrapped_model = accelerator.unwrap_model(model)
unwrapped_model.save_pretrained(
args.output_dir, is_main_process=accelerator.is_main_process, save_function=accelerator.save
)
if accelerator.is_main_process:
tokenizer.save_pretrained(args.output_dir)
repo.push_to_hub(
commit_message=f"Training in progress epoch {epoch}", blocking=False, auto_lfs_prune=True
)
if args.checkpointing_steps == "epoch":
output_dir = f"epoch_{epoch}"
if args.output_dir is not None:
output_dir = os.path.join(args.output_dir, output_dir)
accelerator.save_state(output_dir)
if args.with_tracking:
accelerator.end_training()
if args.output_dir is not None:
accelerator.wait_for_everyone()
unwrapped_model = accelerator.unwrap_model(model)
unwrapped_model.save_pretrained(
args.output_dir, is_main_process=accelerator.is_main_process, save_function=accelerator.save
)
if accelerator.is_main_process:
tokenizer.save_pretrained(args.output_dir)
if args.push_to_hub:
repo.push_to_hub(commit_message="End of training", auto_lfs_prune=True)
with open(os.path.join(args.output_dir, "all_results.json"), "w") as f:
json.dump({"perplexity": perplexity}, f)
if __name__ == "__main__":
main()
```
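For reference, here is a minimal sketch of what the commented-out `rope_scaling` block above could look like when routed through the config. This is only a sketch: it assumes transformers >= 4.31 (where PR #24653 added `rope_scaling` to the LLaMA config), and the model id is a placeholder standing in for `args.model_name_or_path`.

```python
from transformers import AutoConfig, AutoModelForCausalLM

# Placeholder model id -- in the script above this would be args.model_name_or_path.
model_name_or_path = "meta-llama/Llama-2-7b-hf"

config = AutoConfig.from_pretrained(model_name_or_path)
# Dynamic RoPE scaling as in the commented-out block above (requires transformers >= 4.31).
config.rope_scaling = {"type": "dynamic", "factor": 2.0}

model = AutoModelForCausalLM.from_pretrained(
    model_name_or_path,
    config=config,
    low_cpu_mem_usage=True,
)
```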
### Expected behavior
retrain
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25615/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25615/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25614
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25614/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25614/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25614/events
|
https://github.com/huggingface/transformers/issues/25614
| 1,857,846,652 |
I_kwDOCUB6oc5uvH18
| 25,614 |
Add MovieChat Model
|
{
"login": "kumar-devesh",
"id": 76114246,
"node_id": "MDQ6VXNlcjc2MTE0MjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/76114246?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kumar-devesh",
"html_url": "https://github.com/kumar-devesh",
"followers_url": "https://api.github.com/users/kumar-devesh/followers",
"following_url": "https://api.github.com/users/kumar-devesh/following{/other_user}",
"gists_url": "https://api.github.com/users/kumar-devesh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kumar-devesh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kumar-devesh/subscriptions",
"organizations_url": "https://api.github.com/users/kumar-devesh/orgs",
"repos_url": "https://api.github.com/users/kumar-devesh/repos",
"events_url": "https://api.github.com/users/kumar-devesh/events{/privacy}",
"received_events_url": "https://api.github.com/users/kumar-devesh/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] |
open
| false | null |
[] |
[
"I would like to work towards adding this model to huggingface transformers @NielsRogge @amyeroberts ",
"@kumar-devesh Great! Feel free to open a PR and tag us or @rafaelpadilla when it's ready for review. Let us know if you have any questions about adding the model to the library.",
"which models can I use as a template for converting this model to huggingface format? @amyeroberts ",
"Hi @kumar-devesh, 🙂\r\n\r\n[Here](https://huggingface.co/docs/transformers/main/en/add_new_model) you can find a useful documentation explaining the expected structure for a newly added model.\r\n\r\nMovieChat seems close to MultiModal (Visual Question Answering) models. There's a list of MultiModal models on the left menu [on this page](https://huggingface.co/docs/transformers/main/en/index) that you could use as reference.\r\n",
"Hi @amyeroberts @rafaelpadilla @kumar-devesh Let me know if I can pick this up?",
"Hi @Dev-Khant i am working on this issue currently :)\r\n",
"@amyeroberts @rafaelpadilla implementation for the model uses opencv and decord for reading videos from paths. When trying to pass a partial hf port model a video tensor, the frames decoded are not the same and lead to slightly different text output for the models. The conversion scripts i went through perform inference on the author's code and the hf port to verify similarity in outputs. Is there any workaround for this?\r\n\r\nModel outputs:\r\nhf model: `In the video, a man is standing in a kitchen with his arms outstretched in front of him. He is standing in front of a white sink and appears to be preparing food`\r\noriginal model: `In the video, a man is standing in a kitchen with his arms outstretched in front of him. He is standing next to a stainless steel kitchen sink.`\r\n",
"> Hi @Dev-Khant i am working on this issue currently :)\n> \n\nOK cool @kumar-devesh :) ",
"Hi @kumar-devesh ,\r\n\r\nThank you for raising this question. :) \r\n\r\nIf I understood your question correctly, the new model should work in the frame/image level, so your model itself shouldn't worry about extracting the frames. \r\n\r\nYour preprocessing should receive an `ImageInput` object, containing the already extracted frames, which could be done with opencv, decord, ffmpeg, etc. So, for that, you can use the same library adopted by the official code.\r\n\r\nYou can use similar models in the `transformers` as reference, like `timesformer`,`videomae`, `vivit`, etc.\r\n\r\nI hope that clarifies your question."
] | 1,692 | 1,696 | null |
NONE
| null |
### Model description
MovieChat proposes a vision foundation model + LLM + long short-term memory solution for long-range video understanding, addressing computation, memory, and long-range temporal understanding challenges by using transformer tokens as a fixed-size memory.
Project Page: https://rese1f.github.io/MovieChat/
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
Paper: https://arxiv.org/abs/2307.16449
Github: https://github.com/rese1f/MovieChat
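As a rough sketch of the frame-level preprocessing pattern suggested in the discussion above (the processor receives already-extracted frames, not a video path), the snippet below uses decord together with the existing VideoMAE processor as a stand-in. A MovieChat processor does not exist in transformers yet, and the file path and frame count are illustrative.

```python
import numpy as np
from decord import VideoReader
from transformers import VideoMAEImageProcessor  # stand-in; MovieChat has no processor in transformers yet

# Decode a few evenly spaced frames from a video file (path is illustrative).
vr = VideoReader("movie.mp4")
indices = np.linspace(0, len(vr) - 1, num=8, dtype=int)
frames = list(vr.get_batch(indices).asnumpy())  # list of (H, W, 3) uint8 frames

# The processor works on already-extracted frames (an ImageInput), not on the raw video.
processor = VideoMAEImageProcessor.from_pretrained("MCG-NJU/videomae-base")
inputs = processor(frames, return_tensors="pt")  # pixel_values: (1, num_frames, 3, height, width)
```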
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25614/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25614/timeline
| null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/25613
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25613/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25613/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25613/events
|
https://github.com/huggingface/transformers/issues/25613
| 1,857,802,645 |
I_kwDOCUB6oc5uu9GV
| 25,613 |
ValueError: signal only works in main thread of the main interpreter
|
{
"login": "aortiz-WW",
"id": 118770354,
"node_id": "U_kgDOBxRKsg",
"avatar_url": "https://avatars.githubusercontent.com/u/118770354?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aortiz-WW",
"html_url": "https://github.com/aortiz-WW",
"followers_url": "https://api.github.com/users/aortiz-WW/followers",
"following_url": "https://api.github.com/users/aortiz-WW/following{/other_user}",
"gists_url": "https://api.github.com/users/aortiz-WW/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aortiz-WW/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aortiz-WW/subscriptions",
"organizations_url": "https://api.github.com/users/aortiz-WW/orgs",
"repos_url": "https://api.github.com/users/aortiz-WW/repos",
"events_url": "https://api.github.com/users/aortiz-WW/events{/privacy}",
"received_events_url": "https://api.github.com/users/aortiz-WW/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"While the error is not informative (should be fixed in the PR mentioned above) this is because you are using a model using code not in the Transformers library so you need to set `trust_remote_code=True` in your call to `from_pretrained` after making sure the code in the folder `falcon` does not contain anything malicious.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,692 | 1,695 | 1,695 |
NONE
| null |
### System Info
My requirements.txt is this:
accelerate==0.21.0
aiohttp==3.8.5
aiosignal==1.3.1
altair==5.0.1
anyio==3.7.1
async-timeout==4.0.3
attrs==23.1.0
backoff==2.2.1
beautifulsoup4==4.12.2
bitsandbytes==0.41.1
blinker==1.6.2
bs4==0.0.1
cachetools==5.3.1
certifi==2023.7.22
cffi==1.15.1
charset-normalizer==3.2.0
chroma-hnswlib==0.7.2
chromadb==0.4.6
click==8.1.7
cmake==3.27.2
coloredlogs==15.0.1
cryptography==41.0.3
dataclasses-json==0.5.14
einops==0.6.1
exceptiongroup==1.1.3
fastapi==0.99.1
filelock==3.12.2
flatbuffers==23.5.26
frozenlist==1.4.0
fsspec==2023.6.0
gitdb==4.0.10
GitPython==3.1.32
greenlet==2.0.2
h11==0.14.0
httptools==0.6.0
huggingface-hub==0.16.4
humanfriendly==10.0
idna==3.4
importlib-metadata==6.8.0
importlib-resources==6.0.1
Jinja2==3.1.2
joblib==1.3.2
jsonschema==4.19.0
jsonschema-specifications==2023.7.1
langchain==0.0.267
langsmith==0.0.24
lit==16.0.6
markdown-it-py==3.0.0
MarkupSafe==2.1.3
marshmallow==3.20.1
mdurl==0.1.2
monotonic==1.6
mpmath==1.3.0
multidict==6.0.4
mypy-extensions==1.0.0
networkx==3.1
nltk==3.8.1
numexpr==2.8.5
numpy==1.25.2
nvidia-cublas-cu11==11.10.3.66
nvidia-cuda-cupti-cu11==11.7.101
nvidia-cuda-nvrtc-cu11==11.7.99
nvidia-cuda-runtime-cu11==11.7.99
nvidia-cudnn-cu11==8.5.0.96
nvidia-cufft-cu11==10.9.0.58
nvidia-curand-cu11==10.2.10.91
nvidia-cusolver-cu11==11.4.0.1
nvidia-cusparse-cu11==11.7.4.91
nvidia-nccl-cu11==2.14.3
nvidia-nvtx-cu11==11.7.91
onnxruntime==1.15.1
openapi-schema-pydantic==1.2.4
overrides==7.4.0
packaging==23.1
pandas==2.0.3
pdfminer.six==20221105
Pillow==9.5.0
posthog==3.0.2
protobuf==4.24.0
psutil==5.9.5
pulsar-client==3.2.0
pyarrow==12.0.1
pycparser==2.21
pydantic==1.10.12
pydeck==0.8.0
Pygments==2.16.1
Pympler==1.0.1
PyPika==0.48.9
python-dateutil==2.8.2
python-dotenv==1.0.0
pytz==2023.3
pytz-deprecation-shim==0.1.0.post0
PyYAML==6.0.1
referencing==0.30.2
regex==2023.8.8
requests==2.31.0
rich==13.5.2
rpds-py==0.9.2
safetensors==0.3.2
scikit-learn==1.3.0
scipy==1.11.2
sentence-transformers==2.2.2
sentencepiece==0.1.99
six==1.16.0
smmap==5.0.0
sniffio==1.3.0
soupsieve==2.4.1
SQLAlchemy==2.0.20
starlette==0.27.0
streamlit==1.25.0
sympy==1.12
tenacity==8.2.3
threadpoolctl==3.2.0
tokenizers==0.13.3
toml==0.10.2
toolz==0.12.0
torch==2.0.1
torchvision==0.15.2
tornado==6.3.3
tqdm==4.66.1
transformers==4.31.0
triton==2.0.0
typing-inspect==0.9.0
typing_extensions==4.7.1
tzdata==2023.3
tzlocal==4.3.1
urllib3==2.0.4
uvicorn==0.23.2
uvloop==0.17.0
validators==0.21.2
watchdog==3.0.0
watchfiles==0.19.0
websockets==11.0.3
yarl==1.9.2
zipp==3.16.2
I am following this video tutorial: https://www.youtube.com/watch?v=rIV1EseKwU4
In app.py I have the following code:
```
import streamlit as st
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
from transformers import pipeline
import torch
import base64
import textwrap
from langchain.embeddings import HuggingFaceInstructEmbeddings
from langchain.vectorstores import Chroma
from langchain.chains import RetrievalQA
from langchain.llms import HuggingFacePipeline
from constants import CHROMA_SETTINGS
checkpoint = "falcon-40b/"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(
checkpoint,
device_map='auto',
torch_dtype=torch.float32
)
@st.cache_resource
def llm_pipeline():
pipe = pipeline(
"text2text-generation",
model = model,
tokenizer=tokenizer,
max_length = 1024,
do_sample = True,
temperature = 0.1,
top_p = 0.95
)
local_llm = HuggingFacePipeline(pipeline = pipe)
return local_llm
@st.cache_resource
def qa_llm():
llm = llm_pipeline()
embeddings = HuggingFaceInstructEmbeddings(model_name="all-MiniLM-L6-v2")
db = Chroma(persist_directory="db", embedding_function=embeddings, client_settings=CHROMA_SETTINGS)
retriever = db.as_retriever()
qa = RetrievalQA.from_chain_type(
llm = llm,
chain_type='stuff',
retriever=retriever,
return_source_documents = True
)
return qa
def process_answer(instruction):
response = ""
instruction = instruction
qa = qa_llm()
generate_text = qa(instruction)
answer = generate_text['result']
return answer, generate_text
def main():
st.title('Search Your PDF')
with st.expander('About App'):
st.markdown(
"""
This is a generative AI-powered question answering app that responds to questions about your PDFs
"""
)
question = st.text_area('Enter Your Question')
if st.button('Search'):
st.info('Your question: ' + question)
st.info('Your Answer: ')
answer, metadata = process_answer(question)
st.write(answer)
st.write(metadata)
if __name__=="__main__":
main()
```
The error I am receiving is this:
```
ValueError: signal only works in main thread of the main interpreter
Traceback:
File "/home/aortiz/chat-PDF/.venv/lib/python3.10/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 552, in _run_script
exec(code, module.__dict__)
File "/home/aortiz/chat-PDF/app.py", line 15, in <module>
model = AutoModelForSeq2SeqLM.from_pretrained(
File "/home/aortiz/chat-PDF/.venv/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 461, in from_pretrained
config, kwargs = AutoConfig.from_pretrained(
File "/home/aortiz/chat-PDF/.venv/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py", line 986, in from_pretrained
trust_remote_code = resolve_trust_remote_code(
File "/home/aortiz/chat-PDF/.venv/lib/python3.10/site-packages/transformers/dynamic_module_utils.py", line 535, in resolve_trust_remote_code
signal.signal(signal.SIGALRM, _raise_timeout_error)
File "/usr/lib/python3.10/signal.py", line 56, in signal
handler = _signal.signal(_enum_to_int(signalnum), _enum_to_int(handler))
```
### Who can help?
@ArthurZucker , @younesbelkada , @sgugger , @stevhliu , @MKhalusova
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Follow this video's instructions to reproduce the problem:
https://www.youtube.com/watch?v=rIV1EseKwU4
### Expected behavior
It is supposed to run smoothly on Streamlit, and I should be able to open the page in the web browser.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25613/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25613/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25612
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25612/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25612/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25612/events
|
https://github.com/huggingface/transformers/pull/25612
| 1,857,768,857 |
PR_kwDOCUB6oc5YTebk
| 25,612 |
Add Blip2ForImageTextRetrieval for multimodal feature extraction
|
{
"login": "jpizarrom",
"id": 111236,
"node_id": "MDQ6VXNlcjExMTIzNg==",
"avatar_url": "https://avatars.githubusercontent.com/u/111236?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jpizarrom",
"html_url": "https://github.com/jpizarrom",
"followers_url": "https://api.github.com/users/jpizarrom/followers",
"following_url": "https://api.github.com/users/jpizarrom/following{/other_user}",
"gists_url": "https://api.github.com/users/jpizarrom/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jpizarrom/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jpizarrom/subscriptions",
"organizations_url": "https://api.github.com/users/jpizarrom/orgs",
"repos_url": "https://api.github.com/users/jpizarrom/repos",
"events_url": "https://api.github.com/users/jpizarrom/events{/privacy}",
"received_events_url": "https://api.github.com/users/jpizarrom/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"- The weights of the original blip2 itm model are converted into Blip2ForImageTextRetrieval.\r\n- the features are extracted following https://github.com/salesforce/LAVIS/blob/main/lavis/models/blip2_models/blip2_qformer.py#L418",
"cc @amyeroberts and @rafaelpadilla !",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25612). All of your documentation changes will be reflected on that endpoint.",
"@jpizarrom Thanks for opening this PR! Let us know when it's ready for review :) ",
"Hi @amyeroberts \r\nCould you please help me :)\r\nI am getting this error in ci/circleci.\r\n```\r\nFAILED tests/utils/test_hub_utils.py::GetFromCacheTests::test_get_file_gated_repo - AssertionError: OSError not raised\r\nFAILED tests/utils/test_hub_utils.py::GetFromCacheTests::test_has_file_gated_repo - AssertionError: OSError not raised\r\n```\r\nDo you know if it could be related to my PR, some kind of side effect, or is this error not related to my PR?\r\nThanks",
"> Hi @amyeroberts Could you please help me :) I am getting this error in ci/circleci.\r\n> \r\n> ```\r\n> FAILED tests/utils/test_hub_utils.py::GetFromCacheTests::test_get_file_gated_repo - AssertionError: OSError not raised\r\n> FAILED tests/utils/test_hub_utils.py::GetFromCacheTests::test_has_file_gated_repo - AssertionError: OSError not raised\r\n> ```\r\n> \r\n> Do you know if it could be related to my PR, some kind of side effect, or is this error not related to my PR? Thanks\r\n\r\nIt looks the issue is not related with my changes, i just tried using the main branch\r\n```\r\nfrom huggingface_hub import hf_hub_download\r\nhf_hub_download(\"hf-internal-testing/dummy-gated-model\", \"README.md\") # don't fail\r\nhf_hub_download(\"hf-internal-testing/dummy-gated-model\", \"otherfile\") # error: Cannot access gated repo for url...\r\n```\r\n\r\nthe test https://github.com/huggingface/transformers/blob/main/tests/utils/test_hub_utils.py#L131 is trying to get the README.md, and expect an exception\r\n",
"Hi @jpizarrom, yes there was a recent issue that resulted in some of the hub tests failing unfortunately. Rest assured they are not related to your PR :) For the moment you can ignore these tests. When you rebase on main they should be resolved. ",
"Hi @amyeroberts and @rafaelpadilla , could you please help me? :)\r\n\r\nThis PR is working, I still need to add more tests, but I would love to get your feedback about whether is it fine that the methods get_text_features and get_image_features were added to the proposed new class Blip2ForImageTextRetrieval. Or the logic should be added to Blip2Model, and extend Blip2Model to support also the feature extraction of models without t5/opt language models, but with text and vision protections. \r\n\r\nAt the moment there is no huggingface model similar to the original [Blip2Qformer/blip2](https://github.com/salesforce/LAVIS/blob/e4040b13d6120062829ee9625f016f3cd3dd16e6/lavis/models/blip2_models/blip2_qformer.py#L27) model with lang and visual projections, the current [huggingface Blip2Model](https://github.com/huggingface/transformers/blob/960807f62e53676723ab8281019219864ef3db4d/src/transformers/models/blip_2/modeling_blip_2.py#L1202) seems to be more related to the original [Blip2OPT](https://github.com/salesforce/LAVIS/blob/e4040b13d6120062829ee9625f016f3cd3dd16e6/lavis/models/blip2_models/blip2_opt.py#L22C7-L22C15)/[Blip2T5](https://github.com/salesforce/LAVIS/blob/e4040b13d6120062829ee9625f016f3cd3dd16e6/lavis/models/blip2_models/blip2_t5.py#L20C7-L20C14)\r\n\r\n",
"@jpizarrom My suggestion would be to extend Blip2Model as it already has `get_text_features` and `get_image_features`. Similarly, other retrieval models e.g. [ViltForImageTextRetrieval](https://github.com/huggingface/transformers/blob/3b39b906183ed08d9961908eb73104aeea345d11/src/transformers/models/vilt/modeling_vilt.py#L1180) don't have these methods implemented. I don't believe there's any reason why we couldn't also add these methods to `Blip2ForImageTextRetrieval` as well if you think it makes more sense - there's just a maintenance cost, as we can't guarantee and changes in the implementation in one class will be correctly updated in all places: adding tests to guard against this would be ideal. \r\n\r\n",
"@jpizarrom From next week I'm going to be away for a few weeks. If you have any questions, please ask @rafaelpadilla ",
"Hi @rafaelpadilla, I would appreciate to receive your feedback about this PR,\r\nas recommended in https://github.com/huggingface/transformers/pull/25612#issuecomment-1701012258, I started to extend the`get_text_features` and `get_image_features` in `Blip2Model` to try to support when the models has qformer with `vision_proj` and `text_proj`, and not extra language_model (original [Blip2Qformer/blip2](https://github.com/salesforce/LAVIS/blob/e4040b13d6120062829ee9625f016f3cd3dd16e6/lavis/models/blip2_models/blip2_qformer.py#L27), more context in https://github.com/huggingface/transformers/pull/25612#issuecomment-1694434022), but my PR is adding many if/else in Blip2Model to check whether it correspond to the model with/without the language_model(opt/t5).\r\n\r\nThe clip model has two classes for the cases with and without projections, `CLIPVisionModel` and `CLIPVisionModelWithProjection` respectively.\r\n\r\nWhat do you think should be the strategy to follow in this PR?\r\n- How is it currently done in this PR, extend `Blip2Model` to support both types of models, and do some refactoring to make it nicer?\r\n- add the get features methods to the new classes `Blip2ForImageTextRetrieval`, this way there will be get features methods in `Blip2Model` and also `Blip2ForImageTextRetrieval`.\r\n- maybe add the get features methods to another new class `Blip2ModelWithProjection`\r\n\r\nThanks\r\n\r\n",
"Hi @jpizarrom,\r\n\r\nThank you for your contribution! :) I have just taken a look at your PR and it seems to me that the best strategy would be your first suggestion. IMO, your if/elses may not be a problem.\r\n\r\n@ArthurZucker , what do you think the best strategy would be?",
"Hi @rafaelpadilla @ArthurZucker may you please review this PR?\r\n\r\n`Blip2ModelWithProjection` and `Blip2ForImageTextRetrieval` were added, more context in https://github.com/huggingface/transformers/pull/25612#issuecomment-1722507879\r\n\r\n@NielsRogge wdyt about this PR?\r\n\r\nThanks\r\n\r\n",
"I was comparing the structure of the code with CLIP and noticed that:\r\n\r\nHere there's only one ModelWithProjection class `Blip2ModelWithProjection`, which deals with embeddings of both text and image. However, for other models, and particularly for CLIP, there are `CLIPTextModelWithProjection` and `CLIPVisionModelWithProjection`.\r\n\r\nTo keep consistency with other models, would it be possible to break `Blip2ModelWithProjection` into `Blip2TextModelWithProjections` and `Blip2VisionModelWithProkections`?",
"@rafaelpadilla thanks for the feedback, it could be possible to break the `Blip2ModelWithProjection` `get_text_features` and `get_image_features` into `Blip2TextModelWithProjections` and `Blip2VisionModelWithProkections` to follow CLIP model structure, but I believe an implementation of [Blip2Qformer.forward](https://github.com/salesforce/LAVIS/blob/main/lavis/models/blip2_models/blip2_qformer.py#L90) is still needed, that was the reason why i was trying to implement it in HF as `Blip2ModelWithProjection` with the methods `get_text_features`,`get_image_features`,`forward` following [BlipModel](https://github.com/huggingface/transformers/blob/64845307b362f4dfff1a783d7bce0f3407e92c34/src/transformers/models/blip/modeling_blip.py#L726)\r\n\r\nmaybe `Blip2ModelWithoutLM` could be a better class name instead of `Blip2ModelWithProjection`,\r\n[Blip2Qformer.forward](https://github.com/salesforce/LAVIS/blob/main/lavis/models/blip2_models/blip2_qformer.py#L90) is used to do the pretraining stage 1",
"> @rafaelpadilla thanks for the feedback, it could be possible to break the `Blip2ModelWithProjection` `get_text_features` and `get_image_features` into `Blip2TextModelWithProjections` and `Blip2VisionModelWithProkections` to follow CLIP model structure, but I believe an implementation of [Blip2Qformer.forward](https://github.com/salesforce/LAVIS/blob/main/lavis/models/blip2_models/blip2_qformer.py#L90) is still needed, that was the reason why i was trying to implement it in HF as `Blip2ModelWithProjection` with the methods `get_text_features`,`get_image_features`,`forward` following [BlipModel](https://github.com/huggingface/transformers/blob/64845307b362f4dfff1a783d7bce0f3407e92c34/src/transformers/models/blip/modeling_blip.py#L726)\r\n> \r\n> maybe `Blip2ModelWithoutLM` could be a better class name instead of `Blip2ModelWithProjection`, [Blip2Qformer.forward](https://github.com/salesforce/LAVIS/blob/main/lavis/models/blip2_models/blip2_qformer.py#L90) is used to do the pretraining stage 1\r\n\r\nHi @rafaelpadilla \r\n`Blip2TextModelWithProjection` and `Blip2VisionModelWithProjection` were added, and `Blip2ModelWithProjection` was renamed into `Blip2ModelWithoutLM`/`Blip2ModelWithoutLMConfig`\r\nwdyt?",
"Hi, I was looking into the huggingface models code, and I found that maybe `Blip2ModelWithoutLM`( original [Blip2Qformer.forward](https://github.com/salesforce/LAVIS/blob/main/lavis/models/blip2_models/blip2_qformer.py#L90) used to do the pretraining stage 1) could be conceptually more related to XXXForPreTraining, like `BertForPreTraining`, so a more appropriate name could be `Blip2ForPreTraining` or so?\r\n\r\nAnother option could be to remove `Blip2ModelWithoutLM` from this PR, and then open a new PR specific for the pretraining models, and just focus this PR on the weight conversion and inference models., what do you think?",
"Hi @younesbelkada, this PR has been updated following your advice, and is ready for a review. Thanks\r\n\r\n- Blip2ModelWithoutLMConfig was removed, now all the new modes are using Blip2Config\r\n- Blip2ModelWithProjection model was removed(it was added in a previous commit in this PR), could be added later in other PR, such a model could be used for pre-training.\r\n",
"> Thanks for your great contribution! Think looks much cleaner now! Thanks also for pushing some converted model weights on the Hub! Would you be able to run all blip2 slow tests and confirm they pass? `RUN_SLOW=1 pytest tests/models/blip_2/` Let's also add a logits tests in the testing suite you added!\r\n\r\nHi, I have made the recommended changes, answered directly in the comments of your reviews.\r\n\r\nslow tests passed, `RUN_SLOW=1 pytest tests/models/blip_2/`\r\n\r\nnew doctest passed too\r\n` pytest --doctest-modules src/transformers/models/blip_2/modeling_blip_2.py::transformers.models.blip_2.modeling_blip_2.Blip2ForImageTextRetrieval.forward`\r\n\r\n` pytest --doctest-modules src/transformers/models/blip_2/modeling_blip_2.py::transformers.models.blip_2.modeling_blip_2.Blip2TextModelWithProjection.forward`\r\n\r\n`pytest --doctest-modules src/transformers/models/blip_2/modeling_blip_2.py::transformers.models.blip_2.modeling_blip_2.Blip2VisionModelWithProjection.forward`\r\n",
"cc @amyeroberts if you could review (@younesbelkada is off!)\r\nAnd sorry @jpizarrom for the wait",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Hi @amyeroberts,\r\n\r\nNow I would like to continue working on the comments received in December, how can I reopen this PR or should i create a new PR?\r\nThanks",
"Hey, @jpizarrom, very nice work! I'm also interested in using BLIP 2 for image-text retrieval, specifically finding relevant images for a text query. (I'm just a regular user, not from HF.)\r\n\r\nI understand that this PR is WIP, but it seems that it's in its final stages, so I want to give my feedback to help with testing.\r\n\r\nWhen I try to pass multiple images to the model, I get an error. Is this a valid use case, or is the class intended for single image + multiple labels usage?\r\n\r\nCode:\r\n```python\r\nimport requests\r\nfrom PIL import Image\r\nfrom transformers import Blip2ForImageTextRetrieval, Blip2Processor\r\nfrom transformers.testing_utils import torch_device\r\n\r\ndef prepare_img():\r\n url = \"https://huggingface.co/hf-internal-testing/blip-test-image/resolve/main/demo.jpg\"\r\n image = Image.open(requests.get(url, stream=True).raw)\r\n return image\r\n\r\n\r\nmodel_name = \"jpizarrom/blip2-itm-vit-g\"\r\nprocessor = Blip2Processor.from_pretrained(model_name)\r\nmodel = Blip2ForImageTextRetrieval.from_pretrained(model_name).to(torch_device)\r\n\r\nimages = [prepare_img(), prepare_img()]\r\ntext = \"A woman and her dog sitting in a beach\"\r\ninputs = processor(images=images, text=text, return_tensors=\"pt\").to(torch_device)\r\n\r\nout = model(**inputs)\r\n```\r\n\r\nError:\r\n```\r\nTraceback (most recent call last):\r\n File \"/home/user/projects/transofmers-blip2-itm/test_multiple_images.py\", line 22, in <module>\r\n out = model(**inputs)\r\n ^^^^^^^^^^^^^^^\r\n File \"/home/user/projects/transofmers-blip2-itm/venv/lib/python3.11/site-packages/torch/nn/modules/module.py\", line 1511, in _wrapped_call_impl\r\n return self._call_impl(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/user/projects/transofmers-blip2-itm/venv/lib/python3.11/site-packages/torch/nn/modules/module.py\", line 1520, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/user/projects/transofmers-blip2-itm/src/transformers/models/blip_2/modeling_blip_2.py\", line 2363, in forward\r\n attention_mask = torch.cat([query_attention_mask, attention_mask], dim=1)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\nRuntimeError: Sizes of tensors must match except in dimension 1. Expected size 2 but got size 1 for tensor number 1 in the list.\r\n```",
"Hi @gleb-akhmerov \r\nI think you could use Blip2ForImageTextRetrieval to match each text with each img, the length of both arrays should be the same\r\n```python\r\n# %%\r\nimport requests\r\nimport torch\r\nfrom PIL import Image\r\nfrom transformers import Blip2ForImageTextRetrieval, Blip2Processor\r\nfrom transformers.testing_utils import torch_device\r\n\r\n# %%\r\ndef prepare_img():\r\n url = \"https://huggingface.co/hf-internal-testing/blip-test-image/resolve/main/demo.jpg\"\r\n image = Image.open(requests.get(url, stream=True).raw)\r\n return image\r\n\r\n# %%\r\nmodel_name = \"jpizarrom/blip2-itm-vit-g\"\r\nprocessor = Blip2Processor.from_pretrained(model_name)\r\nmodel = Blip2ForImageTextRetrieval.from_pretrained(model_name).to(torch_device)\r\n\r\n# %%\r\n\r\nimages = [prepare_img(), prepare_img()]\r\ntext = \"A woman and her dog sitting in a beach\"\r\ntext_other = \"A woman and her dog in a beach\"\r\n\r\n\r\n# %%\r\ninputs = processor(images=images, text=[text,text_other], return_tensors=\"pt\", padding=True).to(torch_device)\r\n\r\n# %%\r\nitm_out = model(**inputs, use_itm_head=True)\r\nitm_scores = torch.nn.functional.softmax(itm_out.itm_score, dim=1)\r\nprint(f'The image and text are matched with a probability of {itm_scores[:, 1].tolist()}')\r\n\r\n# %%\r\nitc_out = model(**inputs, use_itm_head=False)\r\nprint(f'The image feature and text feature has a cosine similarity of {itc_out.itm_score.tolist()}')\r\n```\r\nor you can get image and text projections, then compare all images with the text\r\n\r\n```python\r\n# %%\r\nimport requests\r\nimport torch\r\nfrom PIL import Image\r\nfrom transformers import Blip2TextModelWithProjection, Blip2VisionModelWithProjection, AutoProcessor\r\nfrom transformers.testing_utils import torch_device\r\n\r\n# %%\r\ndevice = \"cuda\" if torch.cuda.is_available() else \"cpu\"\r\n\r\n# %%\r\ndef prepare_img():\r\n url = \"https://huggingface.co/hf-internal-testing/blip-test-image/resolve/main/demo.jpg\"\r\n image = Image.open(requests.get(url, stream=True).raw)\r\n return image\r\n\r\n# %%\r\nmodel_name = \"jpizarrom/blip2-itm-vit-g\"\r\nprocessor = AutoProcessor.from_pretrained(\"jpizarrom/blip2-itm-vit-g\")\r\nvision_model = Blip2VisionModelWithProjection.from_pretrained(model_name).to(device) \r\ntext_model = Blip2TextModelWithProjection.from_pretrained(model_name).to(device) \r\n\r\n# %%\r\nimages = [prepare_img(), prepare_img()]\r\ntext = \"A woman and her dog sitting in a beach\"\r\n\r\n# %%\r\nvision_inputs = processor(images=images, return_tensors=\"pt\").to(torch_device)\r\nvision_out = vision_model(**vision_inputs)\r\n# out\r\n\r\n# %%\r\ntext_inputs = processor(text=text, return_tensors=\"pt\").to(torch_device)\r\ntext_out = text_model(**text_inputs)\r\n\r\n# %%\r\nprint(vision_out.image_embeds.shape, text_out.text_embeds.shape)\r\n\r\n# %%\r\nmax_scores, max_classes = (vision_out.image_embeds @ text_out.text_embeds[:,0,:].t()).max(dim=1)\r\n\r\n# %%\r\nprint(max_scores)\r\n```",
"Hi @jpizarrom, I can't reopen this PR as something has happened upstream since closing: either the branch has had a force push or it's been recreated. \r\n\r\nYou can open a new PR and link to this one for reference. "
] | 1,692 | 1,707 | 1,704 |
CONTRIBUTOR
| null |
# What does this PR do?
Add a Blip2ForImageTextRetrieval model to extract text, image, and multimodal features, similar to the extract_features method in the original implementation.
Fixes part of #25300 #25245
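For context, a usage sketch of the new class, adapted from the example in the comments above (the `jpizarrom/blip2-itm-vit-g` checkpoint is the author's converted ITM weights, not an official Salesforce release, and `use_itm_head` reflects the API as written in this PR):

```python
import requests
import torch
from PIL import Image
from transformers import Blip2ForImageTextRetrieval, Blip2Processor

model_name = "jpizarrom/blip2-itm-vit-g"  # author's converted ITM checkpoint (see comments above)
processor = Blip2Processor.from_pretrained(model_name)
model = Blip2ForImageTextRetrieval.from_pretrained(model_name)

url = "https://huggingface.co/hf-internal-testing/blip-test-image/resolve/main/demo.jpg"
image = Image.open(requests.get(url, stream=True).raw)
text = "A woman and her dog sitting in a beach"

inputs = processor(images=image, text=text, return_tensors="pt")
# use_itm_head=True runs the image-text matching head; softmax over its two classes gives
# the probability that the image and the text match.
itm_out = model(**inputs, use_itm_head=True)
itm_scores = torch.nn.functional.softmax(itm_out.itm_score, dim=1)
print(itm_scores[:, 1].tolist())
```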
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@ArthurZucker @NielsRogge
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
# TODOs:
- [x] Convert original weights from Blip2 ITM
- [x] New model should return the same feature vectors as the original model
- [x] Add forward method
- [x] Add extract feature methods (code was previously added in the forward method)
- [x] Add `Blip2TextRetrievalModelTest`
- [x] Refactor to try to add feature extractor logic into `Blip2ModelWithProjection`
- [x] use float16 tests
- [x] use float16 in doctest
- [x] add `Blip2TextModelWithProjection` and `Blip2VisionModelWithProjection`
- [x] add text_config=None support in `Blip2Config`, remove `Blip2ModelWithoutLMConfig`
- [ ] change model name from _jpizarrom/xxxx_ to _Salesforce/xxx_ ?
- [x] remove Blip2TextModelWithProjection
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25612/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25612/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25612",
"html_url": "https://github.com/huggingface/transformers/pull/25612",
"diff_url": "https://github.com/huggingface/transformers/pull/25612.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25612.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/25611
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25611/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25611/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25611/events
|
https://github.com/huggingface/transformers/issues/25611
| 1,857,685,697 |
I_kwDOCUB6oc5uugjB
| 25,611 |
special token ids are different in LlamaTokenizerFast
|
{
"login": "x54-729",
"id": 45304952,
"node_id": "MDQ6VXNlcjQ1MzA0OTUy",
"avatar_url": "https://avatars.githubusercontent.com/u/45304952?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/x54-729",
"html_url": "https://github.com/x54-729",
"followers_url": "https://api.github.com/users/x54-729/followers",
"following_url": "https://api.github.com/users/x54-729/following{/other_user}",
"gists_url": "https://api.github.com/users/x54-729/gists{/gist_id}",
"starred_url": "https://api.github.com/users/x54-729/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/x54-729/subscriptions",
"organizations_url": "https://api.github.com/users/x54-729/orgs",
"repos_url": "https://api.github.com/users/x54-729/repos",
"events_url": "https://api.github.com/users/x54-729/events{/privacy}",
"received_events_url": "https://api.github.com/users/x54-729/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false |
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
] |
[
"Could you try using #23909. I believe the order of token addition is wrong. ",
"> Could you try using #23909. I believe the order of token addition is wrong.\r\n\r\nI'm sorry but could you please tell about more details about the solution? I didn't call `add_special_tokens` and just loaded the tokenizer model in this case.",
"That's the problem, the special tokens are not added to the model by default for the slow tokenizer, while they are for the fast tokenizer. ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Closing as #23909 should have fixed it"
] | 1,692 | 1,695 | 1,695 |
NONE
| null |
We have recently been working on a model based on Llama. When I try to convert our `tokenizer.model` to `tokenizers`, a problem occurs.
`tokenizer.model`'s vocab is expanded to fit our model:
```python
from transformers import LlamaTokenizer, LlamaTokenizerFast
from sentencepiece import SentencePieceProcessor
f = LlamaTokenizerFast("tokenizer.model")
s = LlamaTokenizer("tokenizer.model")
origin = SentencePieceProcessor("tokenizer.model")
print(f.bos_token_id, f.eos_token_id, f.unk_token_id) # 1, 2, 0
print(s.bos_token_id, s.eos_token_id, s.unk_token_id) # 0, 1, 2
print(origin.bos_id(), origin.eos_id(), origin.unk_id()) # 0, 1, 2
```
This discrepancy leads to different behaviors between `AutoTokenizer` and `LlamaTokenizer`. What might be causing it?
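A small diagnostic sketch that may help narrow this down (it assumes the special-token strings are the standard Llama ones, `<unk>`, `<s>`, `</s>`; adjust if the expanded vocab uses different pieces):
```python
# Look the special-token strings up directly in each vocabulary to see which side diverges.
tokens = ["<unk>", "<s>", "</s>"]
print(f.convert_tokens_to_ids(tokens))            # fast tokenizer
print(s.convert_tokens_to_ids(tokens))            # slow tokenizer
print([origin.piece_to_id(t) for t in tokens])    # raw sentencepiece model
```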
transformers==4.31.0
tokenizers==0.13.3
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25611/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25611/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25610
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25610/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25610/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25610/events
|
https://github.com/huggingface/transformers/issues/25610
| 1,857,598,987 |
I_kwDOCUB6oc5uuLYL
| 25,610 |
Model to be adjusted
|
{
"login": "L4EPITTMSU",
"id": 91551643,
"node_id": "U_kgDOBXT3mw",
"avatar_url": "https://avatars.githubusercontent.com/u/91551643?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/L4EPITTMSU",
"html_url": "https://github.com/L4EPITTMSU",
"followers_url": "https://api.github.com/users/L4EPITTMSU/followers",
"following_url": "https://api.github.com/users/L4EPITTMSU/following{/other_user}",
"gists_url": "https://api.github.com/users/L4EPITTMSU/gists{/gist_id}",
"starred_url": "https://api.github.com/users/L4EPITTMSU/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/L4EPITTMSU/subscriptions",
"organizations_url": "https://api.github.com/users/L4EPITTMSU/orgs",
"repos_url": "https://api.github.com/users/L4EPITTMSU/repos",
"events_url": "https://api.github.com/users/L4EPITTMSU/events{/privacy}",
"received_events_url": "https://api.github.com/users/L4EPITTMSU/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"If you're using a recent version of the transformers library, it should eventually be updated to address this issue.\r\nThis warning is indicating that the behavior of floordiv is deprecated and will change in a future version of PyTorch.\r\nimport torch\r\n\r\n# Instead of this\r\nresult = torch.floordiv(a, b)\r\n\r\n# Use this\r\nresult = torch.div(a, b, rounding_mode='floor')",
"It's not an issue but a warning, and since you do not have a reproducer I can't reproduce it. Given the warning, I am not entirely sur there anything to do here, we are already using :\r\n> for actual floor division, use torch.div(a, b, rounding_mode='floor').",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,692 | 1,695 | 1,695 |
NONE
| null |
https://github.com/huggingface/transformers/blob/6b82d936d49956ba7b43c5ee590f4868de373b65/src/transformers/models/big_bird/modeling_big_bird.py#L977
I am getting this warning:
/opt/software/Python/3.6.4-foss-2018a/lib/python3.6/site-packages/transformers/models/big_bird/modeling_big_bird.py:978: UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').
* num_indices_to_pick_from
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25610/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25610/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25609
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25609/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25609/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25609/events
|
https://github.com/huggingface/transformers/issues/25609
| 1,857,587,317 |
I_kwDOCUB6oc5uuIh1
| 25,609 |
[Question] padding_side of LlamaTokenizerFast
|
{
"login": "x54-729",
"id": 45304952,
"node_id": "MDQ6VXNlcjQ1MzA0OTUy",
"avatar_url": "https://avatars.githubusercontent.com/u/45304952?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/x54-729",
"html_url": "https://github.com/x54-729",
"followers_url": "https://api.github.com/users/x54-729/followers",
"following_url": "https://api.github.com/users/x54-729/following{/other_user}",
"gists_url": "https://api.github.com/users/x54-729/gists{/gist_id}",
"starred_url": "https://api.github.com/users/x54-729/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/x54-729/subscriptions",
"organizations_url": "https://api.github.com/users/x54-729/orgs",
"repos_url": "https://api.github.com/users/x54-729/repos",
"events_url": "https://api.github.com/users/x54-729/events{/privacy}",
"received_events_url": "https://api.github.com/users/x54-729/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hey. We can default to `padding=\"right\"` indeed, not really sure however why/how it can create a different behaviour. Do you have a small reproducer?",
"> Hey. We can default to `padding=\"right\"` indeed, not really sure however why/how it can create a different behaviour. Do you have a small reproducer?\r\n\r\nNot a big problem actually. In some situations I want to test the logits of my model, so `padding_side` may take effect when I want to pad my input. It's not that convenient since I have to set `padding_side` every time.\r\n",
"You can save your tokenizer with `padding_side` and load it from the save folder no? ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,692 | 1,695 | 1,695 |
NONE
| null |
https://github.com/huggingface/transformers/blob/e42587f596181396e1c4b63660abf0c736b10dae/src/transformers/models/llama/tokenization_llama_fast.py#L100
Why is `padding_side` set to `left` here? I found that this can cause different behaviours between `LlamaTokenizer.from_pretrained` and `AutoTokenizer.from_pretrained`.
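A hedged workaround sketch, following the suggestion of saving the tokenizer with `padding_side` set (the model paths are placeholders): override the default at load time or on the instance, then save so the setting travels with the tokenizer.
```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("path/to/llama", padding_side="right")
tok.padding_side = "right"   # equivalent per-instance override
tok.save_pretrained("path/to/llama-right-padding")
```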
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25609/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25609/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25608
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25608/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25608/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25608/events
|
https://github.com/huggingface/transformers/pull/25608
| 1,857,372,609 |
PR_kwDOCUB6oc5YSLxX
| 25,608 |
🚨🚨🚨 changing default threshold and applying threshold before the rescale
|
{
"login": "rafaelpadilla",
"id": 31217453,
"node_id": "MDQ6VXNlcjMxMjE3NDUz",
"avatar_url": "https://avatars.githubusercontent.com/u/31217453?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rafaelpadilla",
"html_url": "https://github.com/rafaelpadilla",
"followers_url": "https://api.github.com/users/rafaelpadilla/followers",
"following_url": "https://api.github.com/users/rafaelpadilla/following{/other_user}",
"gists_url": "https://api.github.com/users/rafaelpadilla/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rafaelpadilla/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rafaelpadilla/subscriptions",
"organizations_url": "https://api.github.com/users/rafaelpadilla/orgs",
"repos_url": "https://api.github.com/users/rafaelpadilla/repos",
"events_url": "https://api.github.com/users/rafaelpadilla/events{/privacy}",
"received_events_url": "https://api.github.com/users/rafaelpadilla/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,692 | 1,692 | 1,692 |
CONTRIBUTOR
| null |
# What does this PR do?
Fixes #25490 by applying the threshold before the rescaling.
Here's a breakdown:
The OwlViT model rescales confidence scores so that the maximum score always becomes `1.000`.
To illustrate, consider a scenario where the threshold is set to `threshold=0.6` and the detected scores (in descending order) are `[0.4259, 0.0707, 0.0441, ...]`. After rescaling they become `[1.0000, 0.0734, 0.0039, ...]`, so applying the threshold retains the bounding box associated with the first score, even though it originally scored only `0.4259`. This is not the expected behavior.
With this behavior, at least one object is always kept (the one with the highest score), regardless of its detection score.
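A toy sketch of the order-of-operations problem (the numbers are taken from the description above; a simple max-normalisation stands in for the model's actual rescaling, so only the ordering effect is illustrative):
```python
import torch

scores = torch.tensor([0.4259, 0.0707, 0.0441])
threshold = 0.6

# Old behaviour: rescale first (max becomes 1.0), then threshold -> the 0.4259 box survives.
keep_old = (scores / scores.max()) > threshold   # tensor([ True, False, False])

# New behaviour: threshold the raw scores first -> nothing survives, as expected.
keep_new = scores > threshold                    # tensor([False, False, False])
print(keep_old, keep_new)
```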
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@amyeroberts
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25608/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25608/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25608",
"html_url": "https://github.com/huggingface/transformers/pull/25608",
"diff_url": "https://github.com/huggingface/transformers/pull/25608.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25608.patch",
"merged_at": 1692627606000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25607
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25607/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25607/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25607/events
|
https://github.com/huggingface/transformers/pull/25607
| 1,857,348,175 |
PR_kwDOCUB6oc5YSGbc
| 25,607 |
[Whisper] Fix word-level timestamps for audio < 30 seconds
|
{
"login": "xenova",
"id": 26504141,
"node_id": "MDQ6VXNlcjI2NTA0MTQx",
"avatar_url": "https://avatars.githubusercontent.com/u/26504141?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xenova",
"html_url": "https://github.com/xenova",
"followers_url": "https://api.github.com/users/xenova/followers",
"following_url": "https://api.github.com/users/xenova/following{/other_user}",
"gists_url": "https://api.github.com/users/xenova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xenova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xenova/subscriptions",
"organizations_url": "https://api.github.com/users/xenova/orgs",
"repos_url": "https://api.github.com/users/xenova/repos",
"events_url": "https://api.github.com/users/xenova/events{/privacy}",
"received_events_url": "https://api.github.com/users/xenova/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Looks like there were some failing unit tests, but they were actually wrong 😅 \r\n\r\n### Original unit test:\r\n```python\r\n{\r\n \"text\": \" Conquered returned to its place amidst the tents.\",\r\n \"chunks\": [\r\n {\"text\": \" Conquered\", \"timestamp\": (29.78, 29.9)},\r\n {\"text\": \" returned\", \"timestamp\": (29.9, 29.9)},\r\n {\"text\": \" to\", \"timestamp\": (29.9, 29.9)},\r\n {\"text\": \" its\", \"timestamp\": (29.9, 29.9)},\r\n {\"text\": \" place\", \"timestamp\": (29.9, 29.9)},\r\n {\"text\": \" amidst\", \"timestamp\": (29.9, 29.9)},\r\n {\"text\": \" the\", \"timestamp\": (29.9, 29.9)},\r\n {\"text\": \" tents.\", \"timestamp\": (29.9, 29.9)}\r\n ]\r\n}\r\n```\r\n\r\n### New (fixed) unit test:\r\n```python\r\n{\r\n \"text\": \" Conquered returned to its place amidst the tents.\",\r\n \"chunks\": [\r\n {\"text\": \" Conquered\", \"timestamp\": (0.5, 1.2)},\r\n {\"text\": \" returned\", \"timestamp\": (1.2, 1.64)},\r\n {\"text\": \" to\", \"timestamp\": (1.64, 1.84)},\r\n {\"text\": \" its\", \"timestamp\": (1.84, 2.02)},\r\n {\"text\": \" place\", \"timestamp\": (2.02, 2.28)},\r\n {\"text\": \" amidst\", \"timestamp\": (2.28, 2.78)},\r\n {\"text\": \" the\", \"timestamp\": (2.78, 2.96)},\r\n {\"text\": \" tents.\", \"timestamp\": (2.96, 3.48)},\r\n ],\r\n},\r\n```\r\n",
"Gently pinging @ArthurZucker for the final 👍 before merge - thank you again for the PR @xenova!",
"Sorry for the miss reviewing now! ",
"Thank you for your contribution @xenova!\r\nIs this PR approved and merged in the latest version? I just installed Transformers and I am still getting the old results:\r\n{'text': ' Okay, you ready?', 'chunks': [{'text': ' Okay,', 'timestamp': (29.98, 29.98)}, {'text': ' you', 'timestamp': (29.98, 29.98)}, {'text': ' ready?', 'timestamp': (29.98, 29.98)}]}\r\n\r\nThanks!",
"Hi there - It's not yet merged, but will hopefully be soon!",
"Do you happen to know when the next version will be out?",
"Hi @xenova,\r\n\r\nI just upgraded to the latest version of Transformers (4.33.1) and tried the following. It seems that the word timestamps are still incorrect.\r\nWhat am I doing wrong?\r\n\r\nThanks!\r\n\r\n###########\r\n```\r\npipe = pipeline(\r\n \"automatic-speech-recognition\",\r\n model=\"openai/whisper-large\",\r\n chunk_length_s=4,\r\n device=device,\r\n)\r\nvx0 = pipe(x0, batch_size=4, return_timestamps=\"word\")\r\n```\r\n\r\nvx0['chunks']:\r\n{'text': ' Okay,', 'timestamp': (29.98, 29.98)},\r\n {'text': ' you', 'timestamp': (29.98, 29.98)},\r\n {'text': ' ready?', 'timestamp': (29.98, 29.98)},\r\n {'text': ' So', 'timestamp': (32.65, 32.65)},\r\n {'text': ' now', 'timestamp': (32.65, 32.65)},\r\n {'text': ' it', 'timestamp': (35.31, 35.31)},\r\n {'text': ' is', 'timestamp': (35.31, 35.31)},\r\n {'text': \" it's\", 'timestamp': (35.31, 35.31)},\r\n {'text': ' a', 'timestamp': (35.31, 35.31)},",
"This is because a full release hasn't come out yet. To fix it, you can install from source (see [docs](https://huggingface.co/docs/transformers/installation#install-from-source)):\r\n```\r\npip install --upgrade git+https://github.com/huggingface/transformers\r\n```",
"Hi guys, I just tried whisper v3 and find that your updated code is gone in the current main branch.\r\nAnd it gives me 29.98... There are multiple commits after this merge, can someone check what is going on?"
] | 1,692 | 1,705 | 1,694 |
CONTRIBUTOR
| null |
# What does this PR do?
In OpenAI's [original implementation for word-level timestamps](https://github.com/openai/whisper/blob/e8622f9afc4eba139bf796c210f5c01081000472/whisper/timing.py#L206), the cross attentions are cropped before performing dynamic time warping (this restricts the algorithm to the valid audio and prevents the backtracking from getting stuck). The current transformers implementation misses this step, so this PR adds it.
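A minimal sketch of the idea (illustrative shapes and variable names only, not the actual patch): crop the cross-attention weights to the frames covered by real audio before running DTW, so the backtracking cannot align tokens to the padded region.
```python
import numpy as np

SAMPLE_RATE = 16_000
HOP_LENGTH = 160                      # mel hop -> 100 mel frames per second
audio_seconds = 6.04                  # real speech inside the 30 s padded window
num_mel_frames = int(audio_seconds * SAMPLE_RATE) // HOP_LENGTH

# weights: (num_heads, num_decoded_tokens, 1500) for a 30 s window; the encoder
# downsamples the mel features by 2, so only the first num_mel_frames // 2 columns
# correspond to real audio.
weights = np.random.rand(8, 32, 1500)
weights = weights[..., : num_mel_frames // 2]

# DTW on the cropped matrix can no longer place word boundaries in the silent padding,
# which is what produced the constant 29.98 s timestamps.
```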
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
Testing code:
```python
from transformers import pipeline
pipe = pipeline("automatic-speech-recognition", "openai/whisper-base")
url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/japanese-audio.wav'
output = pipe(url, return_timestamps="word", chunk_length_s=30, generate_kwargs={'language': 'japanese'})
print(output)
```
Fixed:
```
{
'text': '森長の美味しい牛乳は濃い青いように牛乳ビーンを足らった絶対にのパック牛乳である',
'chunks': [
{'text': '森', 'timestamp': (0.18, 0.64)},
{'text': '長', 'timestamp': (0.64, 0.82)},
{'text': 'の', 'timestamp': (0.82, 1.04)},
{'text': '美味', 'timestamp': (1.04, 1.2)},
{'text': 'しい', 'timestamp': (1.2, 1.46)},
{'text': '牛', 'timestamp': (1.46, 1.68)},
{'text': '乳', 'timestamp': (1.68, 1.92)},
{'text': 'は', 'timestamp': (1.92, 2.14)},
{'text': '濃', 'timestamp': (2.14, 2.32)},
{'text': 'い', 'timestamp': (2.32, 2.44)},
{'text': '青', 'timestamp': (2.44, 2.64)},
{'text': 'い', 'timestamp': (2.64, 2.76)},
{'text': 'ように', 'timestamp': (2.76, 2.92)},
{'text': '牛', 'timestamp': (2.92, 3.16)},
{'text': '乳', 'timestamp': (3.16, 3.36)},
{'text': 'ビ', 'timestamp': (3.36, 3.58)},
{'text': 'ーン', 'timestamp': (3.58, 3.66)},
{'text': 'を', 'timestamp': (3.66, 3.82)},
{'text': '足', 'timestamp': (3.82, 4.0)},
{'text': 'ら', 'timestamp': (4.0, 4.12)},
{'text': 'った', 'timestamp': (4.12, 4.3)},
{'text': '絶', 'timestamp': (4.3, 4.52)},
{'text': '対', 'timestamp': (4.52, 4.68)},
{'text': 'に', 'timestamp': (4.68, 4.78)},
{'text': 'の', 'timestamp': (4.78, 4.94)},
{'text': 'パ', 'timestamp': (4.94, 5.1)},
{'text': 'ック', 'timestamp': (5.1, 5.2)},
{'text': '牛', 'timestamp': (5.2, 5.44)},
{'text': '乳', 'timestamp': (5.44, 5.64)},
{'text': 'で', 'timestamp': (5.64, 5.84)},
{'text': 'ある', 'timestamp': (5.84, 6.04)}
]
}
```
Previous (broken):
```
{
'text': '森長の美味しい牛乳は濃い青いように牛乳ビーンを足らった絶対にのパック牛乳である',
'chunks': [
{'text': '森', 'timestamp': (29.98, 29.98)},
{'text': '長', 'timestamp': (29.98, 29.98)},
{'text': 'の', 'timestamp': (29.98, 29.98)},
{'text': '美味', 'timestamp': (29.98, 29.98)},
{'text': 'しい', 'timestamp': (29.98, 29.98)},
{'text': '牛', 'timestamp': (29.98, 29.98)},
{'text': '乳', 'timestamp': (29.98, 29.98)},
{'text': 'は', 'timestamp': (29.98, 29.98)},
{'text': '濃', 'timestamp': (29.98, 29.98)},
{'text': 'い', 'timestamp': (29.98, 29.98)},
{'text': '青', 'timestamp': (29.98, 29.98)},
{'text': 'い', 'timestamp': (29.98, 29.98)},
{'text': 'ように', 'timestamp': (29.98, 29.98)},
{'text': '牛', 'timestamp': (29.98, 29.98)},
{'text': '乳', 'timestamp': (29.98, 29.98)},
{'text': 'ビ', 'timestamp': (29.98, 29.98)},
{'text': 'ーン', 'timestamp': (29.98, 29.98)},
{'text': 'を', 'timestamp': (29.98, 29.98)},
{'text': '足', 'timestamp': (29.98, 29.98)},
{'text': 'ら', 'timestamp': (29.98, 29.98)},
{'text': 'った', 'timestamp': (29.98, 29.98)},
{'text': '絶', 'timestamp': (29.98, 29.98)},
{'text': '対', 'timestamp': (29.98, 29.98)},
{'text': 'に', 'timestamp': (29.98, 29.98)},
{'text': 'の', 'timestamp': (29.98, 29.98)},
{'text': 'パ', 'timestamp': (29.98, 29.98)},
{'text': 'ック', 'timestamp': (29.98, 29.98)},
{'text': '牛', 'timestamp': (29.98, 29.98)},
{'text': '乳', 'timestamp': (29.98, 29.98)},
{'text': 'で', 'timestamp': (29.98, 29.98)},
{'text': 'ある', 'timestamp': (29.98, 29.98)}
]
}
```
<!-- Remove if not applicable -->
Fixes #25605 (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
@sanchit-gandhi @ArthurZucker
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25607/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25607/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25607",
"html_url": "https://github.com/huggingface/transformers/pull/25607",
"diff_url": "https://github.com/huggingface/transformers/pull/25607.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25607.patch",
"merged_at": 1694709755000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25606
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25606/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25606/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25606/events
|
https://github.com/huggingface/transformers/pull/25606
| 1,857,207,816 |
PR_kwDOCUB6oc5YRnrc
| 25,606 |
Fix test_modeling_mpt typo in model id
|
{
"login": "JuanFKurucz",
"id": 31422367,
"node_id": "MDQ6VXNlcjMxNDIyMzY3",
"avatar_url": "https://avatars.githubusercontent.com/u/31422367?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JuanFKurucz",
"html_url": "https://github.com/JuanFKurucz",
"followers_url": "https://api.github.com/users/JuanFKurucz/followers",
"following_url": "https://api.github.com/users/JuanFKurucz/following{/other_user}",
"gists_url": "https://api.github.com/users/JuanFKurucz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JuanFKurucz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JuanFKurucz/subscriptions",
"organizations_url": "https://api.github.com/users/JuanFKurucz/orgs",
"repos_url": "https://api.github.com/users/JuanFKurucz/repos",
"events_url": "https://api.github.com/users/JuanFKurucz/events{/privacy}",
"received_events_url": "https://api.github.com/users/JuanFKurucz/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25606). All of your documentation changes will be reflected on that endpoint."
] | 1,692 | 1,704 | 1,692 |
CONTRIBUTOR
| null |
# What does this PR do?
Currently, the method `get_large_model_config` tries to download the `mosaicml/mpt-7` model config, which is not found; the rest of the file uses `mosaicml/mpt-7b`, which works. This looks like a typo.
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@ArthurZucker
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25606/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25606/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25606",
"html_url": "https://github.com/huggingface/transformers/pull/25606",
"diff_url": "https://github.com/huggingface/transformers/pull/25606.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25606.patch",
"merged_at": 1692609081000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25605
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25605/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25605/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25605/events
|
https://github.com/huggingface/transformers/issues/25605
| 1,857,187,119 |
I_kwDOCUB6oc5usm0v
| 25,605 |
Incorrect whisper word-level timestamps
|
{
"login": "xenova",
"id": 26504141,
"node_id": "MDQ6VXNlcjI2NTA0MTQx",
"avatar_url": "https://avatars.githubusercontent.com/u/26504141?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xenova",
"html_url": "https://github.com/xenova",
"followers_url": "https://api.github.com/users/xenova/followers",
"following_url": "https://api.github.com/users/xenova/following{/other_user}",
"gists_url": "https://api.github.com/users/xenova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xenova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xenova/subscriptions",
"organizations_url": "https://api.github.com/users/xenova/orgs",
"repos_url": "https://api.github.com/users/xenova/repos",
"events_url": "https://api.github.com/users/xenova/events{/privacy}",
"received_events_url": "https://api.github.com/users/xenova/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false |
{
"login": "sanchit-gandhi",
"id": 93869735,
"node_id": "U_kgDOBZhWpw",
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanchit-gandhi",
"html_url": "https://github.com/sanchit-gandhi",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "sanchit-gandhi",
"id": 93869735,
"node_id": "U_kgDOBZhWpw",
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanchit-gandhi",
"html_url": "https://github.com/sanchit-gandhi",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"type": "User",
"site_admin": false
}
] |
[] | 1,692 | 1,694 | 1,694 |
CONTRIBUTOR
| null |
### System Info
- `transformers` version: 4.32.0.dev0 (main)
- Platform: Linux-5.15.0-1041-azure-x86_64-with-glibc2.31
- Python version: 3.10.8
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.2
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1+cu117 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
@sanchit-gandhi
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Running
```python
from transformers import pipeline
pipe = pipeline("automatic-speech-recognition", "openai/whisper-base")
url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/japanese-audio.wav'
output = pipe(url, return_timestamps="word", chunk_length_s=30, generate_kwargs={'language': 'japanese'})
print(output)
```
Produces
```
{'text': '森長の美味しい牛乳は濃い青いように牛乳ビーンを足らった絶対にのパック牛乳である', 'chunks': [{'text': '森', 'timestamp': (29.98, 29.98)}, {'text': '長', 'timestamp': (29.98, 29.98)}, {'text': 'の', 'timestamp': (29.98, 29.98)}, {'text': '美味', 'timestamp': (29.98, 29.98)}, {'text': 'しい', 'timestamp': (29.98, 29.98)}, {'text': '牛', 'timestamp': (29.98, 29.98)}, {'text': '乳', 'timestamp': (29.98, 29.98)}, {'text': 'は', 'timestamp': (29.98, 29.98)}, {'text': '濃', 'timestamp': (29.98, 29.98)}, {'text': 'い', 'timestamp': (29.98, 29.98)}, {'text': '青', 'timestamp': (29.98, 29.98)}, {'text': 'い', 'timestamp': (29.98, 29.98)}, {'text': 'ように', 'timestamp': (29.98, 29.98)}, {'text': '牛', 'timestamp': (29.98, 29.98)}, {'text': '乳', 'timestamp': (29.98, 29.98)}, {'text': 'ビ', 'timestamp': (29.98, 29.98)}, {'text': 'ーン', 'timestamp': (29.98, 29.98)}, {'text': 'を', 'timestamp': (29.98, 29.98)}, {'text': '足', 'timestamp': (29.98, 29.98)}, {'text': 'ら', 'timestamp': (29.98, 29.98)}, {'text': 'った', 'timestamp': (29.98, 29.98)}, {'text': '絶', 'timestamp': (29.98, 29.98)}, {'text': '対', 'timestamp': (29.98, 29.98)}, {'text': 'に', 'timestamp': (29.98, 29.98)}, {'text': 'の', 'timestamp': (29.98, 29.98)}, {'text': 'パ', 'timestamp': (29.98, 29.98)}, {'text': 'ック', 'timestamp': (29.98, 29.98)}, {'text': '牛', 'timestamp': (29.98, 29.98)}, {'text': '乳', 'timestamp': (29.98, 29.98)}, {'text': 'で', 'timestamp': (29.98, 29.98)}, {'text': 'ある', 'timestamp': (29.98, 29.98)}]}
```
### Expected behavior
The output should not have every timestamp equal to 29.98. Note that `29.98 = 30 - 0.02 = chunk_length_s - time_precision`, which I don't think is a coincidence.
The output should better match the output from OpenAI's whisper library:
```python
import whisper
model = whisper.load_model("base")
url = "https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/japanese-audio.wav"
result = model.transcribe(url, word_timestamps=True)
print(result)
```
```
{'text': '森長の美味しい牛乳は濃い青いように牛乳ビーンを足らった絶対にのパック牛乳である', 'segments': [{'id': 0, 'seek': 0, 'start': 0.28, 'end': 6.06, 'text': '森長の美味しい牛乳は濃い青いように牛乳ビーンを足らった絶対にのパック牛乳である', 'tokens': [50364, 16407, 106, 15353, 2972, 45511, 17121, 40003, 2930, 111, 3065, 26373, 225, 1764, 37462, 1764, 34483, 40003, 2930, 111, 39454, 44632, 5998, 37236, 5154, 10102, 6948, 114, 35252, 4108, 2972, 23268, 28551, 40003, 2930, 111, 2474, 24719, 50776], 'temperature': 0.0, 'avg_logprob': -0.5187992572784423, 'compression_ratio': 1.1142857142857143, 'no_speech_prob': 0.06956221163272858, 'words': [{'word': '森', 'start': 0.28, 'end': 0.64, 'probability': 0.6905971467494965}, {'word': '長', 'start': 0.64, 'end': 0.82, 'probability': 0.25675880908966064}, {'word': 'の', 'start': 0.82, 'end': 1.04, 'probability': 0.9857621192932129}, {'word': '美味', 'start': 1.04, 'end': 1.18, 'probability': 0.44574692845344543}, {'word': 'しい', 'start': 1.18, 'end': 1.48, 'probability': 0.9633633494377136}, {'word': '牛', 'start': 1.48, 'end': 1.68, 'probability': 0.9765644073486328}, {'word': '乳', 'start': 1.68, 'end': 1.94, 'probability': 0.8430313766002655}, {'word': 'は', 'start': 1.94, 'end': 2.12, 'probability': 0.9037646651268005}, {'word': '濃', 'start': 2.12, 'end': 2.34, 'probability': 0.7717265486717224}, {'word': 'い', 'start': 2.34, 'end': 2.46, 'probability': 0.9223815202713013}, {'word': '青', 'start': 2.46, 'end': 2.64, 'probability': 0.7740068435668945}, {'word': 'い', 'start': 2.64, 'end': 2.76, 'probability': 0.861002504825592}, {'word': 'ように', 'start': 2.76, 'end': 2.92, 'probability': 0.10338784009218216}, {'word': '牛', 'start': 2.92, 'end': 3.14, 'probability': 0.8816423416137695}, {'word': '乳', 'start': 3.14, 'end': 3.44, 'probability': 0.9983818531036377}, {'word': 'ビ', 'start': 3.44, 'end': 3.58, 'probability': 0.3014017343521118}, {'word': 'ーン', 'start': 3.58, 'end': 3.7, 'probability': 0.7804636359214783}, {'word': 'を', 'start': 3.7, 'end': 3.8, 'probability': 0.9925546050071716}, {'word': '足', 'start': 3.8, 'end': 3.98, 'probability': 0.26062318682670593}, {'word': 'ら', 'start': 3.98, 'end': 4.1, 'probability': 0.7511312365531921}, {'word': 'った', 'start': 4.1, 'end': 4.3, 'probability': 0.7001805901527405}, {'word': '絶', 'start': 4.3, 'end': 4.52, 'probability': 0.5361797697842121}, {'word': '対', 'start': 4.52, 'end': 4.68, 'probability': 0.27607205510139465}, {'word': 'に', 'start': 4.68, 'end': 4.78, 'probability': 0.40600043535232544}, {'word': 'の', 'start': 4.78, 'end': 4.94, 'probability': 0.8449609875679016}, {'word': 'パ', 'start': 4.94, 'end': 5.1, 'probability': 0.43969523906707764}, {'word': 'ック', 'start': 5.1, 'end': 5.22, 'probability': 0.916713297367096}, {'word': '牛', 'start': 5.22, 'end': 5.42, 'probability': 0.9680136442184448}, {'word': '乳', 'start': 5.42, 'end': 5.7, 'probability': 0.9995707273483276}, {'word': 'で', 'start': 5.7, 'end': 5.84, 'probability': 0.9711904525756836}, {'word': 'ある', 'start': 5.84, 'end': 6.06, 'probability': 0.9810820817947388}]}], 'language': 'ja'}
```
---
Fortunately, the outputs from the two models are identical, so there's no issue with `generate`.
```
# transformers: 森長の美味しい牛乳は濃い青いように牛乳ビーンを足らった絶対にのパック牛乳である
# openai: 森長の美味しい牛乳は濃い青いように牛乳ビーンを足らった絶対にのパック牛乳である
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25605/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25605/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25604
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25604/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25604/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25604/events
|
https://github.com/huggingface/transformers/issues/25604
| 1,857,172,489 |
I_kwDOCUB6oc5usjQJ
| 25,604 |
"Xformers is not installed correctly." error
|
{
"login": "engageintellect",
"id": 61082194,
"node_id": "MDQ6VXNlcjYxMDgyMTk0",
"avatar_url": "https://avatars.githubusercontent.com/u/61082194?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/engageintellect",
"html_url": "https://github.com/engageintellect",
"followers_url": "https://api.github.com/users/engageintellect/followers",
"following_url": "https://api.github.com/users/engageintellect/following{/other_user}",
"gists_url": "https://api.github.com/users/engageintellect/gists{/gist_id}",
"starred_url": "https://api.github.com/users/engageintellect/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/engageintellect/subscriptions",
"organizations_url": "https://api.github.com/users/engageintellect/orgs",
"repos_url": "https://api.github.com/users/engageintellect/repos",
"events_url": "https://api.github.com/users/engageintellect/events{/privacy}",
"received_events_url": "https://api.github.com/users/engageintellect/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"This is a duplicate of #24903"
] | 1,692 | 1,692 | 1,692 |
NONE
| null |
### System Info
System info:
* Transformers 4.31.0
* Python 3.11.3
Error:
```
Xformers is not installed correctly. If you want to use memory_efficient_attention to accelerate training use the following command to install Xformers
pip install xformers.
```
Code:
```
import argparse
from transformers import pipeline
# Create the parser
parser = argparse.ArgumentParser(description="Perform sentiment analysis")
# Add an argument
parser.add_argument('Text', type=str, help="the text to analyze")
# Parse the argument
args = parser.parse_args()
# Load the classifier
classifier = pipeline("sentiment-analysis", model="distilbert-base-uncased-finetuned-sst-2-english")
# Perform sentiment analysis
res = classifier(args.Text)
# Print the result
print(res)
```
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Steps to reproduce this behavior:
1. run my code
2. receive error
### Expected behavior
no error
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25604/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25604/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25603
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25603/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25603/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25603/events
|
https://github.com/huggingface/transformers/pull/25603
| 1,857,170,981 |
PR_kwDOCUB6oc5YRfkv
| 25,603 |
Safety check low_cpu_mem_usage when in 4bit or 8bit
|
{
"login": "BramVanroy",
"id": 2779410,
"node_id": "MDQ6VXNlcjI3Nzk0MTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BramVanroy",
"html_url": "https://github.com/BramVanroy",
"followers_url": "https://api.github.com/users/BramVanroy/followers",
"following_url": "https://api.github.com/users/BramVanroy/following{/other_user}",
"gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions",
"organizations_url": "https://api.github.com/users/BramVanroy/orgs",
"repos_url": "https://api.github.com/users/BramVanroy/repos",
"events_url": "https://api.github.com/users/BramVanroy/events{/privacy}",
"received_events_url": "https://api.github.com/users/BramVanroy/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25603). All of your documentation changes will be reflected on that endpoint.",
"cc @younesbelkada ",
"Running `make style` should fix the CI 😉 ",
"The history of the PR is a bit messed up 😓 you are gonna need to rebase / force push to make it work",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,692 | 1,695 | 1,695 |
COLLABORATOR
| null |
Currently, the check only handles `low_cpu_mem_usage is None`, in which case `low_cpu_mem_usage` is set to True. That can lead to unexpected behavior, especially if it is explicitly set to False: the result is a hard-to-debug error trace (cf. the related issue). This PR makes sure that when loading in 4-bit or 8-bit, `low_cpu_mem_usage` is set to True regardless of its current value, and it logs a warning to notify the user of the change.
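A minimal sketch of the intended guard (variable names follow the description above, not necessarily the final diff):
```python
if load_in_4bit or load_in_8bit:
    if low_cpu_mem_usage is False:
        logger.warning(
            "low_cpu_mem_usage was explicitly set to False, but loading in 4-bit/8-bit "
            "requires it; overriding to True."
        )
    low_cpu_mem_usage = True
```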
closes https://github.com/huggingface/accelerate/issues/1858
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25603/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25603/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25603",
"html_url": "https://github.com/huggingface/transformers/pull/25603",
"diff_url": "https://github.com/huggingface/transformers/pull/25603.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25603.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/25602
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25602/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25602/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25602/events
|
https://github.com/huggingface/transformers/issues/25602
| 1,857,132,675 |
I_kwDOCUB6oc5usZiD
| 25,602 |
llama fast tokenizer: FileNotFound error when saving model checkpoint and self.vocab_file does not exist
|
{
"login": "ZhangShiyue",
"id": 11383558,
"node_id": "MDQ6VXNlcjExMzgzNTU4",
"avatar_url": "https://avatars.githubusercontent.com/u/11383558?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ZhangShiyue",
"html_url": "https://github.com/ZhangShiyue",
"followers_url": "https://api.github.com/users/ZhangShiyue/followers",
"following_url": "https://api.github.com/users/ZhangShiyue/following{/other_user}",
"gists_url": "https://api.github.com/users/ZhangShiyue/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ZhangShiyue/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ZhangShiyue/subscriptions",
"organizations_url": "https://api.github.com/users/ZhangShiyue/orgs",
"repos_url": "https://api.github.com/users/ZhangShiyue/repos",
"events_url": "https://api.github.com/users/ZhangShiyue/events{/privacy}",
"received_events_url": "https://api.github.com/users/ZhangShiyue/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Sure, the problem is that in fast we cannot recover the content of `vocab_file` if the repo was deleted. We can produce a warning however, mentioning that you won't be able to initialize a slow tokenizer. Opening a PR to fix this! Thanks for reporting",
"Thanks a lot! Does it mean a fast tokenizer can still be initialized if vocab_file does not exist?",
"It depends, if you have a `tokenizer.json` file then yes, if not, then you cannot convert the slow tokenizer if the `vocab_file` (which in this case is the sentencepiece model) was deleted no? "
] | 1,692 | 1,693 | 1,693 |
NONE
| null |
### System Info
transformers==4.31.0
torch==2.0.1
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Traceback (most recent call last):
....
File "python3.9/site-packages/transformers/trainer.py", line 1539, in train
return inner_training_loop(
File "python3.9/site-packages/transformers/trainer.py", line 1916, in _inner_training_loop
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)
File "python3.9/site-packages/transformers/trainer.py", line 2237, in _maybe_log_save_evaluate
self._save_checkpoint(model, trial, metrics=metrics)
File "python3.9/site-packages/transformers/trainer.py", line 2294, in _save_checkpoint
self.save_model(output_dir, _internal_call=True)
File "python3.9/site-packages/transformers/trainer.py", line 2749, in save_model
self._save(output_dir, state_dict=state_dict)
File "python3.9/site-packages/transformers/trainer.py", line 2832, in _save
self.tokenizer.save_pretrained(output_dir)
File "python3.9/site-packages/transformers/tokenization_utils_base.py", line 2221, in save_pretrained
save_files = self._save_pretrained(
File "python3.9/site-packages/transformers/tokenization_utils_fast.py", line 595, in _save_pretrained
vocab_files = self.save_vocabulary(save_directory, filename_prefix=filename_prefix)
File "python3.9/site-packages/transformers/models/llama/tokenization_llama_fast.py", line 186, in save_vocabulary
copyfile(self.vocab_file, out_vocab_file)
File "/opt/bb/lib/python3.9/shutil.py", line 264, in copyfile
with open(src, 'rb') as fsrc:
FileNotFoundError: [Errno 2] No such file or directory: './model/tokenizer.model'
### Expected behavior
When I fine-tune Llama, this error is thrown when saving the first checkpoint, because the original model directory had been deleted.
I noticed that in https://github.com/huggingface/transformers/blob/ef1534252f76231b4a6403c71866d4376e35292d/src/transformers/models/llama/tokenization_llama.py#L281, the slow tokenizer checks whether `self.vocab_file` exists.
Could this check be added to `tokenization_llama_fast.py` too?
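A hedged sketch of what such a guard could look like, mirroring the slow tokenizer's check (the output file name and `logger` are assumptions, not the exact upstream code):
```python
import os
from shutil import copyfile

def save_vocabulary(self, save_directory, filename_prefix=None):
    # Skip the copy (with a warning) if the original sentencepiece model is gone,
    # instead of crashing the whole checkpoint save.
    if not os.path.isfile(self.vocab_file):
        logger.warning(
            f"Cannot copy {self.vocab_file}: the file no longer exists, so the "
            "sentencepiece model will not be saved with this checkpoint."
        )
        return ()
    out_vocab_file = os.path.join(
        save_directory, (filename_prefix + "-" if filename_prefix else "") + "tokenizer.model"
    )
    if os.path.abspath(self.vocab_file) != os.path.abspath(out_vocab_file):
        copyfile(self.vocab_file, out_vocab_file)
    return (out_vocab_file,)
```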
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25602/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25602/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25601
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25601/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25601/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25601/events
|
https://github.com/huggingface/transformers/pull/25601
| 1,857,126,023 |
PR_kwDOCUB6oc5YRVrW
| 25,601 |
nabarup add comments in HammingDiversityLogitsProcessor inside logit_process. Issue #24783
|
{
"login": "Nabarup-Maity",
"id": 45371293,
"node_id": "MDQ6VXNlcjQ1MzcxMjkz",
"avatar_url": "https://avatars.githubusercontent.com/u/45371293?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Nabarup-Maity",
"html_url": "https://github.com/Nabarup-Maity",
"followers_url": "https://api.github.com/users/Nabarup-Maity/followers",
"following_url": "https://api.github.com/users/Nabarup-Maity/following{/other_user}",
"gists_url": "https://api.github.com/users/Nabarup-Maity/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Nabarup-Maity/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Nabarup-Maity/subscriptions",
"organizations_url": "https://api.github.com/users/Nabarup-Maity/orgs",
"repos_url": "https://api.github.com/users/Nabarup-Maity/repos",
"events_url": "https://api.github.com/users/Nabarup-Maity/events{/privacy}",
"received_events_url": "https://api.github.com/users/Nabarup-Maity/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hey @Nabarup-Maity -- This logits processor was already claimed (and being worked on) by another contributor, as it can be seen in the [main issue](https://github.com/huggingface/transformers/issues/24783). As such, I will not be accepting this PR :)"
] | 1,692 | 1,694 | 1,694 |
NONE
| null |
# What does this PR do?
Add comments to the `HammingDiversityLogitsProcessor` in the logits processors used by `model.generate`.
@gante, kindly check. This is my first ever contribution; I hope it helps.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
#24783
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25601/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25601/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25601",
"html_url": "https://github.com/huggingface/transformers/pull/25601",
"diff_url": "https://github.com/huggingface/transformers/pull/25601.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25601.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/25600
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25600/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25600/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25600/events
|
https://github.com/huggingface/transformers/pull/25600
| 1,857,118,492 |
PR_kwDOCUB6oc5YRUC2
| 25,600 |
fixing bad docstring example
|
{
"login": "lucas-spangher-1",
"id": 128749272,
"node_id": "U_kgDOB6yO2A",
"avatar_url": "https://avatars.githubusercontent.com/u/128749272?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lucas-spangher-1",
"html_url": "https://github.com/lucas-spangher-1",
"followers_url": "https://api.github.com/users/lucas-spangher-1/followers",
"following_url": "https://api.github.com/users/lucas-spangher-1/following{/other_user}",
"gists_url": "https://api.github.com/users/lucas-spangher-1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lucas-spangher-1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lucas-spangher-1/subscriptions",
"organizations_url": "https://api.github.com/users/lucas-spangher-1/orgs",
"repos_url": "https://api.github.com/users/lucas-spangher-1/repos",
"events_url": "https://api.github.com/users/lucas-spangher-1/events{/privacy}",
"received_events_url": "https://api.github.com/users/lucas-spangher-1/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25600). All of your documentation changes will be reflected on that endpoint.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,692 | 1,695 | 1,695 |
NONE
| null |
# What does this PR do?
Removed input arguments from the docstring example of AutoformerForPrediction.forward() in order to get it working out of the box.
Detailed in this issue: https://github.com/huggingface/blog/issues/1382
Fixes # (issue)
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25600/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25600/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25600",
"html_url": "https://github.com/huggingface/transformers/pull/25600",
"diff_url": "https://github.com/huggingface/transformers/pull/25600.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25600.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/25599
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25599/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25599/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25599/events
|
https://github.com/huggingface/transformers/pull/25599
| 1,857,078,096 |
PR_kwDOCUB6oc5YRLMX
| 25,599 |
🚨🚨🚨 [`Refactor`] Move third-party related utility files into `integrations/` folder 🚨🚨🚨
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Hi Younes,\r\n\r\nUnless I'm mistaken these would be BC breaking changes.\r\n\r\n```\r\n$ PYTHONPATH=src python -c \"from transformers.deepspeed import HfDeepSpeedConfig\"\r\nTraceback (most recent call last):\r\n File \"<string>\", line 1, in <module>\r\nModuleNotFoundError: No module named 'transformers.deepspeed'\r\n```\r\n\r\nChanging the doc to use the new API won't unbreak users' code.\r\n\r\n(In general for deepspeed integration issues please tag @pacman100 who is the current maintainer of the integration.)",
"unrelated - perhaps there could be a neater name than `lib_integrations` - this is awkward IMHO.\r\n\r\nPerhaps just `integrations` or `integration` or `external` - in other words one word would be smoother. it's obvious that those are `libs`. all python packages are.\r\n\r\n```\r\n- from transformers.lib_integrations.deepspeed import HfDeepSpeedConfig\r\n+ from transformers.integration.deepspeed import HfDeepSpeedConfig\r\n```",
"Yes I had mentioned integrations as well as a name for the folder. Note that we usually do not guarantee backward compatibility with imports not at the init level (anything not imported in the main init or a subfolder init is considered private), but we can keep a deepspeed module that reimports `HfDeepSpeedConfig` if this line is in a lot of places.",
"I did not used `integrations` as there is already a module named `integrations.py` in the same place. Maybe renaming `lib_integrations` to `external` would be better, WDYT? ",
"Use a single `integration`? Since once you add the specifics it's a clear `transformers.integration.what`\r\n\r\nas in `transformers.integration.deepspeed` aka transformers' integration of deepspeed",
"As long as the objects are re-imported, it's completely fine (and non-breaking) to change `integrations.py` to an `integrations` folder.",
"Ah yes makes sense, modified it accordingly",
"Actually sorry, forgot to do it for peft, will do it now",
"_The documentation is not available anymore as the PR was closed or merged._",
"@sgugger the PR is now ready again ! 🙏 \r\nThe failing CI is related with the Hub rate limit issue we have been discussing internally (cc @ydshieh), would like to have a review if possible and I can put the PR on hold until the issue gets solved ! ",
"Thank you all! Feedback should be addressed, @pacman100 I am happy to propose a patch in accelerate after this gets merged, regarding the docs, please see: https://github.com/younesbelkada/transformers/blob/move-integrations/docs/source/en/main_classes/deepspeed.md which has been adapted with the refactor, let me know if there are some other place where the doc should be updated",
"@younesbelkada as said before, we will need to keep the current DeepSpeed module to import back objects for backward compatibility (liek `file_utils.py` does), it's too breaking otherwise.",
"Ah makes sense, reverted that and added a comment in the file",
"Thanks everyone for your reviews, regarding the large diff on the doctest file my intuition is that it got large because I added a new line that changed the alphabetical order of the file but I am not sure. (maybe @ydshieh can confirm)\r\nI also realised we might need to keep `utils/bitsandbytes.py` otherwise it would be too breaking, so I reverted it back and did a similar approach than `deepspeed.py` ",
"The large diff is fixed with https://github.com/huggingface/transformers/pull/25680 thanks to @ydshieh ",
"@sgugger , ran the daily CI (with a reduced number of models tests) thanks to @ydshieh and you can see the results here: https://github.com/huggingface/transformers/actions/runs/5965296109 \r\ncompared the report with the report of the same day and I don't see any surprising difference / test failure caused by this PR. Therefore I think we can merge it ! Can you please have a final look ? 🙏 Thanks!"
] | 1,692 | 1,692 | 1,692 |
CONTRIBUTOR
| null |
# What does this PR do?
As per title and to address https://github.com/huggingface/transformers/pull/25077#discussion_r1283042392
Let's move all utility files related to third-party libraries (outside the HF ecosystem) into `lib_integrations/`. Currently, to the best of my knowledge, bitsandbytes and deepspeed are the only two third-party libraries that we use as plugin integrations and that sit outside the HF ecosystem.
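To make the scope concrete, here is the kind of import-path change such a move implies (the exact module layout is an assumption and may change during review):
```python
# Before the refactor:
from transformers.deepspeed import HfDeepSpeedConfig

# After moving third-party utilities into a dedicated folder (assumed layout):
from transformers.lib_integrations.deepspeed import HfDeepSpeedConfig
```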
cc @sgugger and @stas00 as it touches DS related code
Let me see first if the CI passes
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25599/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25599/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25599",
"html_url": "https://github.com/huggingface/transformers/pull/25599",
"diff_url": "https://github.com/huggingface/transformers/pull/25599.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25599.patch",
"merged_at": 1692976415000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25598
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25598/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25598/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25598/events
|
https://github.com/huggingface/transformers/pull/25598
| 1,857,032,190 |
PR_kwDOCUB6oc5YRBKh
| 25,598 |
[`core` ] Integrate Flash attention 2 in most used models
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"I think attempting to plug Hazy-flash in transformers and properly benchmarking against vanilla transformers and against BetterTransformer is a **very good thing** to possibly motivate:\r\n* Upstreaming SDPA or Hazy-flash natively in transformers.\r\n* a refactorization of the KV cache to rely on indexing rather than padding, or make the KV cache implementation more modular (against transformers philosophy though).\r\n\r\nHowever, there should in my opinion be a serious internal discussion about what goes natively in transformers and whatnot when it comes to hardware optimization. Hazy-flash relies heavily on CUTLASS, that can not be transpiled for AMD devices.\r\n\r\nThere is [this fork](https://github.com/ROCmSoftwarePlatform/flash-attention) of flash for RoCm. If we are to integrate such optimizations in transformers natively, would we be comfortable doing so for a variety of hardware providers?\r\n\r\nIn my opinion, an approach a la BetterTransformer replacing modules or methods can make sense as well. The issue with the code injection approach is that it can make it more difficult to combine several injections (for example BT + bitsandbytes - though I hear they currently work smoothly together).\r\n\r\nAnyway, very keen to help to benchmark!",
"_The documentation is not available anymore as the PR was closed or merged._",
"🚀 Like the new design a lot better",
"Current state for batched generation (with padding, which is the common case for batched generation):\r\n\r\n\r\n\r\n\r\n\r\nIMO it is unrealistic to use FA for batched generation without using the new https://github.com/Dao-AILab/flash-attention/blob/37c6e0540650658e466e42b4d2e15925b8cfbe24/flash_attn/flash_attn_interface.py#L796. I would say let's just disable FA2 when using batched generation.\r\n\r\nCurrent state for training (or use_cache=False) with padding: The current implementation unpad/pad many times which is not necessary and could be done only once at the beginning/end. This adds some overhead as well, I'll profile.",
"Just ran one training, the bettertransformer implementation took 9.5 hours per epoch with llama v2\r\nThis PR takes 6 hours per epoch\r\nBatch size 1, 4 bit bnb, 7b model, 4096 seq length, 4.5% trainable params using peft and large lora depth\r\nRTX A6000",
"Nice speedup then with respect to BT ! That's great to hear @flozi00 , we will run final benchmarks, try to see if we can further optimize this PR and mark it as ready. Out of curiosity, do you use any padding token in the training data ?",
"As BT does not support training with padding tokens I assume you ran your experiment without padding tokens (i.e. `packing=True` in case you are using `SFTTrainer` from TRL library",
"> \r\n\r\nYeah, no padding.\r\nI used an similiar function to the packing function of trl library",
"Awesome, thanks for confirming @flozi00 ",
"@flozi00 Were you using torch 2.0.1? I wonder if the difference comes just from flash v2 (this PR) vs flash v1 (torch 2.0.1), or if there is more to it (some overlead in the BT + SDPA path).",
"Yes, the latest pytorch release (2.0.1), cuda 11.8\r\noptimum and transformers from the latest release",
"Update: with the latest commit 7f06af6226ad6e50164c59ce7a7201d12cd3acfe we avoid some unnecessary operations. However we do still unpad/pad at each layer which remains expensive.\r\n",
"> Update: with the latest commit [7f06af6](https://github.com/huggingface/transformers/commit/7f06af6226ad6e50164c59ce7a7201d12cd3acfe) we avoid some unnecessary operations. However we do still unpad/pad at each layer which remains expensive.\r\n\r\nWould you mind making an issue on PyTorch for this so that we can track enablement?",
"> As BT does not support training with padding tokens I assume you ran your experiment without padding tokens (i.e. `packing=True` in case you are using `SFTTrainer` from TRL library\r\n\r\nCould you also make an issue for this so that we can track enablement:\r\nhttps://github.com/pytorch/pytorch/pull/97485 fwiw\r\n\r\n\r\ncc @jbschlosser",
"Thank you @drisspg! Opened https://github.com/pytorch/pytorch/issues/108770. For the second point, it is indeed the PR you linked + BT not implementing a path using nested tensors.",
"I think the PR is in a nice state, requesting some reviews before adding the documentation! \r\n\r\nSpeedups I get for falcon-7b model using this PR with pure forward! \r\n\r\ncc @ArthurZucker @patrickvonplaten @sgugger @pacman100 \r\n\r\n",
"FYI out of curiosity I gave a try to remove the unpad/pad overhead when `use_cache=False` (e.g. padded training), this overhead is actually rather large: https://github.com/younesbelkada/transformers/pull/5",
"Thanks all for your comments and review, the PR is ready for a final review!",
"This PR broke my custom attention module in AutoAWQ because of the introduction of a new `padding_mask` argument. This should be in the release notes if the intention is to keep the `padding_mask` as an input argument to the attention module - for better visibility. Fixed by accepting `*args, **kwargs` in custom modules and leaving implementation of Flash attention for later. cc @younesbelkada "
] | 1,692 | 1,697 | 1,695 |
CONTRIBUTOR
| null |
# What does this PR do?
Based on the official Flash Attention repository and code snippets shared internally by @pacman100 & @fxmarty, and also based on some internal discussion with @LysandreJik, I made a PoC of what a Flash Attention 2 integration would look like in transformers.
We should restrict the integration to Flash Attention 2 only for now, as there is a way to run Flash-Attention 1 through torch.SDPA + `BetterTransformer` API that is explained here: https://github.com/huggingface/transformers/pull/25265
I added it only for Llama for now, but it could easily be extended to other architectures (though I think Alibi is not supported, I am not sure).
Note that the performance with this integration will not be optimal in this case as flash attention shines in a batched setting when the KV caching is done in a specific format.
Draft for now.
## API
Currently the API is very simple:
<details><summary>API example</summary>
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, LlamaForCausalLM
model_id = "meta-llama/Llama-2-7b-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id, use_auth_token=True)
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.bfloat16,
use_auth_token=True,
low_cpu_mem_usage=True,
use_flash_attention_2=True,
).to(0)
text = [
"Hello how are you?",
"hi"
]
tokenizer.pad_token = tokenizer.eos_token
inputs = tokenizer(text, return_tensors="pt", padding=True).to(0)
outputs = model.generate(inputs["input_ids"], attention_mask=inputs["attention_mask"], use_cache=False, max_new_tokens=30, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
</details>
## TODOs:
- [x] Benchmarks
- [x] Add padded input support
- [x] Stronger UX: block users if no GPU available, check if no device_map --> warn users to set the model on GPU / warn if no half precision, etc.
- [x] Tests
- [ ] Docs
- [x] add it for falcon
- [ ] add it for starcoder
cc @fxmarty @pacman100 @LysandreJik
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25598/reactions",
"total_count": 4,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 4
}
|
https://api.github.com/repos/huggingface/transformers/issues/25598/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25598",
"html_url": "https://github.com/huggingface/transformers/pull/25598",
"diff_url": "https://github.com/huggingface/transformers/pull/25598.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25598.patch",
"merged_at": 1695397331000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25597
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25597/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25597/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25597/events
|
https://github.com/huggingface/transformers/issues/25597
| 1,857,006,466 |
I_kwDOCUB6oc5ur6uC
| 25,597 |
25237 needs a follow up work
|
{
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false |
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
] |
[
"resolved in https://github.com/huggingface/transformers/pull/24796/commits/2e37e6c8b31a9e03f4298f34bc66dedbd7cb997c"
] | 1,692 | 1,692 | 1,692 |
CONTRIBUTOR
| null |
Looks like the modified `check_config_can_be_init_without_params` in https://github.com/huggingface/transformers/pull/25237 broke the yet unmerged https://github.com/huggingface/transformers/pull/24796
I patched it to skip these tests here so that we could merge it here https://github.com/huggingface/transformers/pull/24796/commits/906485b364e663221199a635a88a8f8814be988c
but most likely this still needs to be fixed.
FWIW, this part doesn't fail:
```
def check_config_can_be_init_without_params(self):
config = self.config_class()
self.parent.assertIsNotNone(config)
```
The updated test in this PR expects the config class to fail if it's a composition, but in the case of Idefics it succeeds (not sure why it should fail).
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25597/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25597/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25596
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25596/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25596/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25596/events
|
https://github.com/huggingface/transformers/pull/25596
| 1,856,978,272 |
PR_kwDOCUB6oc5YQ1g6
| 25,596 |
reattach hooks when using `resize_token_embeddings`
|
{
"login": "SunMarc",
"id": 57196510,
"node_id": "MDQ6VXNlcjU3MTk2NTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/57196510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SunMarc",
"html_url": "https://github.com/SunMarc",
"followers_url": "https://api.github.com/users/SunMarc/followers",
"following_url": "https://api.github.com/users/SunMarc/following{/other_user}",
"gists_url": "https://api.github.com/users/SunMarc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SunMarc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SunMarc/subscriptions",
"organizations_url": "https://api.github.com/users/SunMarc/orgs",
"repos_url": "https://api.github.com/users/SunMarc/repos",
"events_url": "https://api.github.com/users/SunMarc/events{/privacy}",
"received_events_url": "https://api.github.com/users/SunMarc/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,692 | 1,692 | 1,692 |
MEMBER
| null |
# What does this PR do?
Solves #25554.
This PR fixes the case where one uses `resize_token_embeddings` on a model that has been dispatched with `device_map`. We reattach the hooks to the new modules.
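A minimal sketch of the scenario this covers (the model checkpoint and the added token are illustrative assumptions, not taken from the linked issue):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load a model dispatched across devices, which attaches accelerate hooks to its modules.
model = AutoModelForCausalLM.from_pretrained("gpt2", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("gpt2")

# Growing the vocabulary replaces the embedding modules; before this fix,
# the accelerate hooks were not reattached to the newly created modules.
tokenizer.add_tokens(["<extra_token>"])
model.resize_token_embeddings(len(tokenizer))
```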
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25596/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25596/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25596",
"html_url": "https://github.com/huggingface/transformers/pull/25596",
"diff_url": "https://github.com/huggingface/transformers/pull/25596.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25596.patch",
"merged_at": 1692394230000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25595
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25595/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25595/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25595/events
|
https://github.com/huggingface/transformers/pull/25595
| 1,856,949,219 |
PR_kwDOCUB6oc5YQvVy
| 25,595 |
Make TTS automodels importable
|
{
"login": "osanseviero",
"id": 7246357,
"node_id": "MDQ6VXNlcjcyNDYzNTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/7246357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/osanseviero",
"html_url": "https://github.com/osanseviero",
"followers_url": "https://api.github.com/users/osanseviero/followers",
"following_url": "https://api.github.com/users/osanseviero/following{/other_user}",
"gists_url": "https://api.github.com/users/osanseviero/gists{/gist_id}",
"starred_url": "https://api.github.com/users/osanseviero/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/osanseviero/subscriptions",
"organizations_url": "https://api.github.com/users/osanseviero/orgs",
"repos_url": "https://api.github.com/users/osanseviero/repos",
"events_url": "https://api.github.com/users/osanseviero/events{/privacy}",
"received_events_url": "https://api.github.com/users/osanseviero/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,692 | 1,692 | 1,692 |
MEMBER
| null |
The current auto models for TTS cannot be imported from the top level:
```
from transformers import AutoModelForTextToWaveform
```
fails
Follow up from #24952
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25595/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25595/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25595",
"html_url": "https://github.com/huggingface/transformers/pull/25595",
"diff_url": "https://github.com/huggingface/transformers/pull/25595.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25595.patch",
"merged_at": 1692388896000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25594
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25594/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25594/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25594/events
|
https://github.com/huggingface/transformers/pull/25594
| 1,856,858,139 |
PR_kwDOCUB6oc5YQbu2
| 25,594 |
Fix ExponentialDecayLengthPenalty negative logits issue
|
{
"login": "pokjay",
"id": 31060527,
"node_id": "MDQ6VXNlcjMxMDYwNTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/31060527?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pokjay",
"html_url": "https://github.com/pokjay",
"followers_url": "https://api.github.com/users/pokjay/followers",
"following_url": "https://api.github.com/users/pokjay/following{/other_user}",
"gists_url": "https://api.github.com/users/pokjay/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pokjay/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pokjay/subscriptions",
"organizations_url": "https://api.github.com/users/pokjay/orgs",
"repos_url": "https://api.github.com/users/pokjay/repos",
"events_url": "https://api.github.com/users/pokjay/events{/privacy}",
"received_events_url": "https://api.github.com/users/pokjay/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Thanks @gante and @ArthurZucker !\r\n\r\nCould you please approve the documentation workflow? I’d like to verify the documentation looks as expected",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25594). All of your documentation changes will be reflected on that endpoint.",
"@gante Do I need to do anything more to get this merged?",
"You could take into account the changes I suggested 😉 code related variable name are always put aroung codeblocks in our doc! ",
"@ArthurZucker oops, totally missed those! Committed those fixes now, Thanks!",
"@pokjay since we last spoke, we've added this file to the list of files to be doctested in our PR CI -- it seems like your example's outputs don't match the hardcoded outputs. \r\n\r\nWould you be able to double-check that? :)\r\n\r\n(as soon as this gets fixed, we can merge)",
"@gante Fixed the issues, all checks pass!",
"@pokjay thank you for iterating 💛 "
] | 1,692 | 1,702 | 1,694 |
CONTRIBUTOR
| null |
# What does this PR do?
In cases where the model logits are negative, ExponentialDecayLengthPenalty decreases the score of eos_token_id instead of increasing it.
To fix this issue, we compute the penalty from the absolute value of the score and add it to the original score, as described in #25416.
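A rough sketch of the idea (the function and variable names below are illustrative assumptions, not the exact code in the diff):
```python
import torch


def boosted_eos_score(eos_scores: torch.Tensor, decay_factor: float, steps_past_start: int) -> torch.Tensor:
    # Apply the exponential boost to the absolute value and add it back to the
    # original score, so the eos score increases even when the logits are negative.
    penalty = torch.abs(eos_scores) * (pow(decay_factor, steps_past_start) - 1)
    return eos_scores + penalty
```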
The test was updated to check for negative logits.
In addition, this PR updates the class documentation and adds examples, as part of #24783
Fixes #25416
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@gante
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25594/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25594/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25594",
"html_url": "https://github.com/huggingface/transformers/pull/25594",
"diff_url": "https://github.com/huggingface/transformers/pull/25594.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25594.patch",
"merged_at": 1694519442000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25593
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25593/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25593/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25593/events
|
https://github.com/huggingface/transformers/issues/25593
| 1,856,807,227 |
I_kwDOCUB6oc5urKE7
| 25,593 |
training with IterableDataset is very slow when using a large number of workers
|
{
"login": "hjq133",
"id": 37239015,
"node_id": "MDQ6VXNlcjM3MjM5MDE1",
"avatar_url": "https://avatars.githubusercontent.com/u/37239015?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hjq133",
"html_url": "https://github.com/hjq133",
"followers_url": "https://api.github.com/users/hjq133/followers",
"following_url": "https://api.github.com/users/hjq133/following{/other_user}",
"gists_url": "https://api.github.com/users/hjq133/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hjq133/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hjq133/subscriptions",
"organizations_url": "https://api.github.com/users/hjq133/orgs",
"repos_url": "https://api.github.com/users/hjq133/repos",
"events_url": "https://api.github.com/users/hjq133/events{/privacy}",
"received_events_url": "https://api.github.com/users/hjq133/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hey, this kind of questions should be asked on [the forum ](https://discuss.huggingface.co/), as per the contribution guidelines."
] | 1,692 | 1,692 | 1,692 |
NONE
| null |
My problem is:
I have 512 JSONL files (1 GB each) stored in the Amazon cloud, and I have 32 GPUs. I read this big dataset with an IterableDataset, as recommended by Hugging Face, so that I can train on the fly.
```python
dataset = load_dataset("json", data_files=data_files, storage_options=storage_options, streaming=True)
```
But during training I found that the speed was very slow, especially when I use many GPU workers.
I found that the reason is that the Hugging Face Trainer uses DispatchDataloader to read the IterableDataset.
This is the same problem as https://github.com/huggingFace/accelerate/issues/158.
Is there a good solution for my problem?
One solution I can think of is to divide the 512 JSONL files across the 32 GPU workers when building the dataset, so that each GPU worker only accesses its corresponding 16 JSONL files. It seems that I should shard the dataset manually in my code according to the GPU worker id, and not pass the dataloader to accelerator.prepare(). A rough sketch of what I mean is below.
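(In this sketch, the bucket paths and the use of the `RANK`/`WORLD_SIZE` environment variables are assumptions for illustration only.)
```python
import os

from datasets import load_dataset

rank = int(os.environ.get("RANK", 0))
world_size = int(os.environ.get("WORLD_SIZE", 1))

# Hypothetical file list; each of the 32 workers keeps only its own slice of the 512 shards.
all_files = [f"s3://my-bucket/shard-{i:05d}.jsonl" for i in range(512)]
my_files = all_files[rank::world_size]

dataset = load_dataset("json", data_files=my_files, streaming=True)
```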
Can the Hugging Face Trainer currently support this way of data loading? Or is there any other way to deal with my problem?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25593/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25593/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25592
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25592/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25592/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25592/events
|
https://github.com/huggingface/transformers/issues/25592
| 1,856,768,871 |
I_kwDOCUB6oc5urAtn
| 25,592 |
Augmentation error Expected y_max for bbox evoked in the trainer.train() to finetune object detection detr model
|
{
"login": "ironllamagirl",
"id": 29931299,
"node_id": "MDQ6VXNlcjI5OTMxMjk5",
"avatar_url": "https://avatars.githubusercontent.com/u/29931299?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ironllamagirl",
"html_url": "https://github.com/ironllamagirl",
"followers_url": "https://api.github.com/users/ironllamagirl/followers",
"following_url": "https://api.github.com/users/ironllamagirl/following{/other_user}",
"gists_url": "https://api.github.com/users/ironllamagirl/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ironllamagirl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ironllamagirl/subscriptions",
"organizations_url": "https://api.github.com/users/ironllamagirl/orgs",
"repos_url": "https://api.github.com/users/ironllamagirl/repos",
"events_url": "https://api.github.com/users/ironllamagirl/events{/privacy}",
"received_events_url": "https://api.github.com/users/ironllamagirl/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @ironllamagirl, thanks for raising this issue! \r\n\r\nThis is really a question for the forums. We try to reserve github issues for bug reports and feature requests. However, we can give some pointers. \r\n\r\nIn the linked example, the augmentations are applied to dataset samples whenever a new batch is loaded. During the training loop of the trainer, the dataset is iterated over and the loaded samples passed to the model. However, the trainer is not itself performing the augmentations. \r\n\r\nThe augmentations are applied to the dataset using the `with_transform` method in [datasets](https://huggingface.co/docs/datasets/v2.14.4/en/package_reference/main_classes#datasets.Dataset.with_transform). You can see where this is applied to the dataset and the resulting augmented sampled in the example starting `cppe5[\"train\"] = cppe5[\"train\"].with_transform(transform_aug_ann)`. \r\n\r\nDisabling augmentations is just a case of not applying them in the definition of `transform`. Note: the `resize` operation is still needed in order to batch the images e.g.\r\n\r\n```\r\ntransform = albumentations.Compose(\r\n [albumentations.Resize(480, 480)],\r\n bbox_params=albumentations.BboxParams(format=\"coco\", label_fields=[\"category\"]),\r\n)\r\n```\r\n\r\nThe logic of clipping the bounding boxes should probably go inside `formatted_anns`\r\n\r\n\r\ncc @rafaelpadilla \r\n",
"Hi @ironllamagirl,\r\n\r\nAlso note that in the provided example, some images have been omitted from the training set (`remove_idx = [590, 821, 822, 875, 876, 878, 879]`). This is due to the fact that certain bounding boxes associated with these images were entirely outside the image boundaries.\r\n\r\nWhen faced with instances where bounding boxes (before augmentation) are positioned completely outside the image frame, it's better to exclude these boxes from your training dataset rather than just clipping them.\r\n",
"Thank you so much for your feedback. \r\n\r\nYes removing those fixed the issue. Thanks! "
] | 1,692 | 1,692 | 1,692 |
NONE
| null |
### System Info
- `transformers` version: 4.31.0
- Platform: Linux-5.15.0-1039-oracle-x86_64-with-glibc2.35
- Python version: 3.11.4
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.1
- Accelerate version: 0.21.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
and albumentations version: 1.3.1
### Who can help?
@amyeroberts
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I am following [this tutorial ](https://huggingface.co/docs/transformers/tasks/object_detection) to finetune a detr on my custom dataset.
The problem arises when I call trainer.train(). This is the error I'm getting:
`ValueError: Expected y_max for bbox (0.4921875, 0.9018518518518519, 0.5375, 1.026851851851852, 3) to be in the range [0.0, 1.0], got 1.026851851851852.`
I saw that this is an issue related to albumentations [here](https://github.com/albumentations-team/albumentations/issues/459), and I'm looking for a workaround where I do not have to modify the albumentations package source code.
What data augmentations could I perform that would make sure I do not run into this error, whatever data I use? Is there a way to disable the augmentations?
I also wanted to implement logic for 'clipping' the bounding box coordinates to either 0 or 1 if they are outside the range, but it seems like the augmentations are done within trainer.train()? I'm concluding this because when I print the bbox coordinates before running .train() I do not see values outside the [0, 1] range.
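A rough sketch of the clipping I have in mind (assuming the coordinates are already normalized to the image size; the function name is illustrative only):
```python
def clip_normalized_bbox(bbox):
    # Clamp each normalized coordinate into [0.0, 1.0] so boxes that stick slightly
    # outside the image are no longer rejected.
    return [min(max(coord, 0.0), 1.0) for coord in bbox]
```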
Thank you for your help!
### Expected behavior
running trainer.train() would finetune the model on augmented data
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25592/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25592/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25591
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25591/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25591/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25591/events
|
https://github.com/huggingface/transformers/issues/25591
| 1,856,762,121 |
I_kwDOCUB6oc5uq_EJ
| 25,591 |
Unable to import tokenizers
|
{
"login": "jeffyjeff2893",
"id": 129834917,
"node_id": "U_kgDOB70fpQ",
"avatar_url": "https://avatars.githubusercontent.com/u/129834917?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jeffyjeff2893",
"html_url": "https://github.com/jeffyjeff2893",
"followers_url": "https://api.github.com/users/jeffyjeff2893/followers",
"following_url": "https://api.github.com/users/jeffyjeff2893/following{/other_user}",
"gists_url": "https://api.github.com/users/jeffyjeff2893/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jeffyjeff2893/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jeffyjeff2893/subscriptions",
"organizations_url": "https://api.github.com/users/jeffyjeff2893/orgs",
"repos_url": "https://api.github.com/users/jeffyjeff2893/repos",
"events_url": "https://api.github.com/users/jeffyjeff2893/events{/privacy}",
"received_events_url": "https://api.github.com/users/jeffyjeff2893/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hey, you manually changed the path using `sys`, if you move your call to `transformers` above your code, it works as expected. ",
"After changing the python version I have to append the install location to sys path though or I get no module named transformers"
] | 1,692 | 1,692 | 1,692 |
NONE
| null |
### System Info
- `transformers` version: 4.31.0
- Platform: Linux-5.15.109+-x86_64-with-glibc2.35
- Python version: 3.9.17
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.2
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): not installed (NA)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
When using python 3.9 on colab I'm unable to import anything from transformers
https://colab.research.google.com/drive/1KYBHdjLphk0L7ZFOPAaw6Xr-v5TuL55u?usp=sharing
### Expected behavior
I expect to be able to import AutoTokenizer
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25591/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25591/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25590
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25590/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25590/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25590/events
|
https://github.com/huggingface/transformers/pull/25590
| 1,856,749,833 |
PR_kwDOCUB6oc5YQEJP
| 25,590 |
feat: add trainer label to wandb run upon initialization
|
{
"login": "parambharat",
"id": 12809212,
"node_id": "MDQ6VXNlcjEyODA5MjEy",
"avatar_url": "https://avatars.githubusercontent.com/u/12809212?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/parambharat",
"html_url": "https://github.com/parambharat",
"followers_url": "https://api.github.com/users/parambharat/followers",
"following_url": "https://api.github.com/users/parambharat/following{/other_user}",
"gists_url": "https://api.github.com/users/parambharat/gists{/gist_id}",
"starred_url": "https://api.github.com/users/parambharat/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/parambharat/subscriptions",
"organizations_url": "https://api.github.com/users/parambharat/orgs",
"repos_url": "https://api.github.com/users/parambharat/repos",
"events_url": "https://api.github.com/users/parambharat/events{/privacy}",
"received_events_url": "https://api.github.com/users/parambharat/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25590). All of your documentation changes will be reflected on that endpoint.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Thanks, @ArthurZucker \r\n\r\n@muellerzr , could you please take a look and let me know if this PR is good to merge or is anything is needed from my end here.\r\n",
"Hey @parambharat the PR looks good but it seems GitHub didn't appreciate your merge/rebase, unfortunately (it happens sometimes!), and it shows 311 commits.\r\n\r\nDo you mind just closing this PR and opening one from the same branch once again? Ping me on it and I'll be happy to take another look. Thanks!"
] | 1,692 | 1,695 | 1,695 |
CONTRIBUTOR
| null |
# What does this PR do?
This PR adds a telemetry label to the wandb run, making it possible to identify W&B usage coming from the Trainer class.
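One way such a label could be attached at init time is sketched below (this uses plain `wandb.init` tags purely as an illustrative assumption; the actual change may rely on a different wandb mechanism):
```python
import wandb

# Hypothetical illustration only: tag the run so downstream tooling can tell it
# was launched from the transformers Trainer integration.
run = wandb.init(project="my-project", tags=["hf-trainer"])
```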
## Before submitting
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
## Who can review?
@sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25590/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25590/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25590",
"html_url": "https://github.com/huggingface/transformers/pull/25590",
"diff_url": "https://github.com/huggingface/transformers/pull/25590.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25590.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/25589
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25589/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25589/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25589/events
|
https://github.com/huggingface/transformers/pull/25589
| 1,856,655,925 |
PR_kwDOCUB6oc5YPvl6
| 25,589 |
fix z3 init when using accelerate launcher
|
{
"login": "pacman100",
"id": 13534540,
"node_id": "MDQ6VXNlcjEzNTM0NTQw",
"avatar_url": "https://avatars.githubusercontent.com/u/13534540?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pacman100",
"html_url": "https://github.com/pacman100",
"followers_url": "https://api.github.com/users/pacman100/followers",
"following_url": "https://api.github.com/users/pacman100/following{/other_user}",
"gists_url": "https://api.github.com/users/pacman100/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pacman100/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pacman100/subscriptions",
"organizations_url": "https://api.github.com/users/pacman100/orgs",
"repos_url": "https://api.github.com/users/pacman100/repos",
"events_url": "https://api.github.com/users/pacman100/events{/privacy}",
"received_events_url": "https://api.github.com/users/pacman100/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,692 | 1,692 | 1,692 |
CONTRIBUTOR
| null |
# What does this PR do?
1. Corrects the z3 (ZeRO stage 3) init when using the Accelerate launcher. Follow-up of https://github.com/huggingface/transformers/pull/25227.
That PR missed setting the correct mixed precision in the DeepSpeed plugin, leading to `RuntimeError: output tensor must have the same type as input tensor` when using the Accelerate launcher to run DeepSpeed ZeRO-3 training with the Trainer. This PR fixes that issue.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25589/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25589/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25589",
"html_url": "https://github.com/huggingface/transformers/pull/25589",
"diff_url": "https://github.com/huggingface/transformers/pull/25589.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25589.patch",
"merged_at": 1692367038000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25588
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25588/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25588/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25588/events
|
https://github.com/huggingface/transformers/pull/25588
| 1,856,428,006 |
PR_kwDOCUB6oc5YO9T7
| 25,588 |
Run doctest for new files
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@ArthurZucker also mentioned `a` to me. I am happy with that, but we have to be a bit careful not to include files that has no doc example in that opposite list (if we want to use that list as an indication of **missing** doctest - it depends what we interpret it and use it)",
"Added a `not_doctested.txt`. This list needs some more work, in particular `src/transformers/pipelines`. We should remove them out from this file so we can test them - but we have to make sure they work first!\r\n\r\n(so far on `main`, they are not doctested)\r\n\r\nI would prefer to separate that work from this PR.",
"I am going to merge this PR without removing `utils/documentation_tests.txt` yet. It's still used by the doctest workflow file, and changing that file to use the opposite logic needs some extra work.\r\n\r\nWill do it in a follow up PR.",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25588). All of your documentation changes will be reflected on that endpoint."
] | 1,692 | 1,692 | 1,692 |
COLLABORATOR
| null |
# What does this PR do?
The effect can be seen in [this run](https://app.circleci.com/pipelines/github/huggingface/transformers/70724/workflows/f9ba5d54-6d64-4630-b500-ba1cf28baade/jobs/887892/artifacts) (see the artifact).
From Sir @ArthurZucker
> new files should always be tested
> the documentation_tests.txt is only for legacy models that are not tested
> but whenever someone adds something (a new pipeline, a new model, etc.) it should be automatic
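As an illustration of what a doctest run exercises, here is a self-contained sketch (not taken from the repo) of the kind of in-docstring example that gets validated automatically for new files:

```python
import doctest


def add(a: int, b: int) -> int:
    """Add two integers.

    Example:

    >>> add(2, 3)
    5
    """
    return a + b


if __name__ == "__main__":
    # Any new file containing examples like the one above should be picked up
    # automatically instead of having to be listed in documentation_tests.txt.
    doctest.testmod(verbose=True)
```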
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25588/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25588/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25588",
"html_url": "https://github.com/huggingface/transformers/pull/25588",
"diff_url": "https://github.com/huggingface/transformers/pull/25588.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25588.patch",
"merged_at": 1692608918000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25587
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25587/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25587/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25587/events
|
https://github.com/huggingface/transformers/pull/25587
| 1,856,298,970 |
PR_kwDOCUB6oc5YOg_9
| 25,587 |
correct TTS pipeline docstrings snippet
|
{
"login": "ylacombe",
"id": 52246514,
"node_id": "MDQ6VXNlcjUyMjQ2NTE0",
"avatar_url": "https://avatars.githubusercontent.com/u/52246514?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ylacombe",
"html_url": "https://github.com/ylacombe",
"followers_url": "https://api.github.com/users/ylacombe/followers",
"following_url": "https://api.github.com/users/ylacombe/following{/other_user}",
"gists_url": "https://api.github.com/users/ylacombe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ylacombe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ylacombe/subscriptions",
"organizations_url": "https://api.github.com/users/ylacombe/orgs",
"repos_url": "https://api.github.com/users/ylacombe/repos",
"events_url": "https://api.github.com/users/ylacombe/events{/privacy}",
"received_events_url": "https://api.github.com/users/ylacombe/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Hey @ArthurZucker , thanks for the quick review !\r\n\r\nJust a QQ before addressing your comments, [there is already a line](https://github.com/ylacombe/transformers/blob/b64cccafd178f7bbf83da2a75c8697836d60f533/utils/documentation_tests.txt#L491-L492) `src/transformers/pipelines/` in `documentation_tests.txt`.\r\n\r\nI'm thus afraid to be redundant if I add `src/transformers/pipelines/text_to_audio.py` in `documentation_tests.txt`. WDYT?\r\n\r\n ",
"Let me check, probably it's a logic error in test fetcher.",
"cc @ydshieh as he suggested this. If it's there it should indeed be tested and no need to add it again. ",
"Can you also check the `empty` CI @ydshieh ",
"well, that line is added after the new doc test logic was merged.\r\n\r\nFor this PR, let's just add the full path to the file to `documentation_tests.txt` and move on to merge this.\r\nWill address the test fetcher later.",
"empty CI is expected as no code is changed - if I understand correctly",
"> For this PR, let's just add the full path to the file to documentation_tests.txt and move on to merge this.\r\n\r\nIt's done now!",
"@ylacombe Let's merge this unless something else to be done?",
"@ydshieh I agree, can you do it?"
] | 1,692 | 1,692 | 1,692 |
COLLABORATOR
| null |
This PR aims to correct the TTS pipeline code snippet [that was failing](https://github.com/huggingface/transformers/actions/runs/5898104885/job/15998670461).
I simply wrote a correct snippet instead!
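For reference, a minimal sketch of how such a snippet is typically exercised; the checkpoint name and the exact output keys are assumptions here, not quoted from the fixed docstring:

```python
from transformers import pipeline

# Hypothetical checkpoint — substitute any text-to-audio model available on the Hub.
tts = pipeline("text-to-audio", model="suno/bark-small")

out = tts("Hello, this is a test of the text-to-speech pipeline.")
# The pipeline is expected to return the generated waveform plus its sampling rate.
print(out["sampling_rate"])
```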
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
## Who can review?
Hey @ydshieh and @ArthurZucker, WDYT of that ?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25587/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25587/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25587",
"html_url": "https://github.com/huggingface/transformers/pull/25587",
"diff_url": "https://github.com/huggingface/transformers/pull/25587.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25587.patch",
"merged_at": 1692618004000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25586
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25586/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25586/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25586/events
|
https://github.com/huggingface/transformers/issues/25586
| 1,856,207,291 |
I_kwDOCUB6oc5uo3m7
| 25,586 |
SwahBERT
|
{
"login": "Adilatec",
"id": 130484182,
"node_id": "U_kgDOB8cH1g",
"avatar_url": "https://avatars.githubusercontent.com/u/130484182?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Adilatec",
"html_url": "https://github.com/Adilatec",
"followers_url": "https://api.github.com/users/Adilatec/followers",
"following_url": "https://api.github.com/users/Adilatec/following{/other_user}",
"gists_url": "https://api.github.com/users/Adilatec/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Adilatec/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Adilatec/subscriptions",
"organizations_url": "https://api.github.com/users/Adilatec/orgs",
"repos_url": "https://api.github.com/users/Adilatec/repos",
"events_url": "https://api.github.com/users/Adilatec/events{/privacy}",
"received_events_url": "https://api.github.com/users/Adilatec/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] |
open
| false | null |
[] |
[
"Hi @Adilatec Thank you for this request.\r\n\r\nThis model is better suited being as a model on the Hub. \r\n\r\nIf you want to contribute, you can follow [this tutorial](https://huggingface.co/docs/transformers/custom_models#sharing-custom-models). 🤗 "
] | 1,692 | 1,692 | null |
NONE
| null |
### Model description
https://github.com/gatimartin/SwahBERT/blob/main/README.md

SwahBERT: Language model of Swahili
SwahBERT is a pretrained monolingual language model for Swahili.
The model was trained for 800K steps using a 105MB corpus collected from news sites, online discussions, and Wikipedia.
Evaluation was performed on several downstream tasks such as emotion classification, news classification, sentiment classification, and named entity recognition.
```python
import torch
from transformers import BertTokenizer

# Loads the SwahBERT vocabulary (checkpoint downloaded from the links below).
tokenizer = BertTokenizer.from_pretrained("swahbert-base-uncased")

# Tokenized input
text = "Mlima Kilimanjaro unapatikana Tanzania"
tokenized_text = tokenizer.tokenize(text)

# SwahBERT => ['mlima', 'kilimanjaro', 'unapatikana', 'tanzania']
# mBERT    => ['ml', '##ima', 'ki', '##lima', '##nja', '##ro', 'una', '##patikana', 'tan', '##zania']
```
#### Pre-training data

The text was extracted from different sources:
- News sites: United Nations news, Voice of America (VoA), Deutsche Welle (DW) and taifaleo
- Forums: JaiiForum
- Wikipedia
#### Pre-trained Models

Download the models here:
- [SwahBERT-Base, Uncased](https://drive.google.com/drive/folders/1HZTCqxt93F5NcvgAWcbrXZammBPizdxF?usp=sharing): 12-layer, 768-hidden, 12-heads, 124M parameters
- [SwahBERT-Base, Cased](https://drive.google.com/drive/folders/1cCcPopqTyzY6AnH9quKcT9kG5zH7tgEZ?usp=sharing): 12-layer, 768-hidden, 12-heads, 111M parameters
| Steps | Vocab size | MLM acc | NSP acc | Loss |
| --- | --- | --- | --- | --- |
| 800K | 50K (uncased) | 76.54 | 99.67 | 1.0667 |
| 800K | 32K (cased) | 76.94 | 99.33 | 1.0562 |
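A hedged sketch of how a downloaded checkpoint could be loaded for one of the downstream tasks evaluated below; the local path and the label count are placeholders, since the weights are distributed via the Drive links above rather than the Hub:

```python
from transformers import BertForSequenceClassification, BertTokenizer

# Assumes the downloaded checkpoint was extracted to this local directory
# and that it is a standard BERT checkpoint — both are assumptions here.
model_path = "./swahbert-base-uncased"

tokenizer = BertTokenizer.from_pretrained(model_path)
model = BertForSequenceClassification.from_pretrained(model_path, num_labels=3)

inputs = tokenizer("Mlima Kilimanjaro unapatikana Tanzania", return_tensors="pt")
logits = model(**inputs).logits
print(logits.shape)  # (1, num_labels)
```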
#### Emotion Dataset

We released the [Swahili emotion dataset](https://github.com/gatimartin/SwahBERT/tree/main/emotion_dataset).
The data consists of ~13K emotion-annotated comments from social media platforms and a translated English dataset.
The data is multi-label with six Ekman’s emotions (happy, surprise, sadness, fear, anger, and disgust) or neutral.
#### Evaluation

The model was tested on four downstream tasks including our new emotion dataset.
F1-scores of language models on downstream tasks:

| Tasks | SwahBERT | SwahBERT_cased | mBERT |
| --- | --- | --- | --- |
| Emotion | 64.46 | 64.77 | 60.52 |
| News | 90.90 | 89.90 | 89.73 |
| Sentiment | 70.94 | 71.12 | 67.20 |
| NER | 88.50 | 88.60 | 89.36 |
#### Citation

Please use the following citation if you use the model or dataset:

```bibtex
@inproceedings{martin-etal-2022-swahbert,
    title = "{S}wah{BERT}: Language Model of {S}wahili",
    author = "Martin, Gati and Mswahili, Medard Edmund and Jeong, Young-Seob and Woo, Jiyoung",
    booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
    month = jul,
    year = "2022",
    address = "Seattle, United States",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.naacl-main.23",
    pages = "303--313"
}
```
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
Please use the following citation if you use the model or dataset:
```bibtex
@inproceedings{martin-etal-2022-swahbert,
    title = "{S}wah{BERT}: Language Model of {S}wahili",
    author = "Martin, Gati and Mswahili, Medard Edmund and Jeong, Young-Seob and Woo, Jiyoung",
    booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
    month = jul,
    year = "2022",
    address = "Seattle, United States",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.naacl-main.23",
    pages = "303--313"
}
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25586/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25586/timeline
| null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/25585
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25585/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25585/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25585/events
|
https://github.com/huggingface/transformers/pull/25585
| 1,856,155,309 |
PR_kwDOCUB6oc5YOB2C
| 25,585 |
Added missing parenthesis in call to is_fsdp_enabled
|
{
"login": "marma",
"id": 144026,
"node_id": "MDQ6VXNlcjE0NDAyNg==",
"avatar_url": "https://avatars.githubusercontent.com/u/144026?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/marma",
"html_url": "https://github.com/marma",
"followers_url": "https://api.github.com/users/marma/followers",
"following_url": "https://api.github.com/users/marma/following{/other_user}",
"gists_url": "https://api.github.com/users/marma/gists{/gist_id}",
"starred_url": "https://api.github.com/users/marma/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/marma/subscriptions",
"organizations_url": "https://api.github.com/users/marma/orgs",
"repos_url": "https://api.github.com/users/marma/repos",
"events_url": "https://api.github.com/users/marma/events{/privacy}",
"received_events_url": "https://api.github.com/users/marma/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25585). All of your documentation changes will be reflected on that endpoint."
] | 1,692 | 1,692 | 1,692 |
CONTRIBUTOR
| null |
Calls the function `is_fsdp_enabled()` instead of checking whether the function object is not `None`.
# What does this PR do?
It adds the missing parenthesis
Fixes #25584
## Who can review?
@ArthurZucker @younesbelkada
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25585/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25585/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25585",
"html_url": "https://github.com/huggingface/transformers/pull/25585",
"diff_url": "https://github.com/huggingface/transformers/pull/25585.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25585.patch",
"merged_at": 1692347566000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25584
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25584/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25584/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25584/events
|
https://github.com/huggingface/transformers/issues/25584
| 1,856,152,166 |
I_kwDOCUB6oc5uoqJm
| 25,584 |
Missing parenthesis in call to is_fsdp_enabled?
|
{
"login": "marma",
"id": 144026,
"node_id": "MDQ6VXNlcjE0NDAyNg==",
"avatar_url": "https://avatars.githubusercontent.com/u/144026?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/marma",
"html_url": "https://github.com/marma",
"followers_url": "https://api.github.com/users/marma/followers",
"following_url": "https://api.github.com/users/marma/following{/other_user}",
"gists_url": "https://api.github.com/users/marma/gists{/gist_id}",
"starred_url": "https://api.github.com/users/marma/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/marma/subscriptions",
"organizations_url": "https://api.github.com/users/marma/orgs",
"repos_url": "https://api.github.com/users/marma/repos",
"events_url": "https://api.github.com/users/marma/events{/privacy}",
"received_events_url": "https://api.github.com/users/marma/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"PR here https://github.com/huggingface/transformers/pull/25585",
"Thanks!"
] | 1,692 | 1,692 | 1,692 |
CONTRIBUTOR
| null |
### System Info
- `transformers` version: 4.32.0.dev0
- Platform: Linux-5.15.0-1028-nvidia-x86_64-with-glibc2.10
- Python version: 3.8.13
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.2
- Accelerate version: 0.21.0
- Accelerate config: not found
- PyTorch version (GPU?): 1.12.0a0+8a1a93a (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@ArthurZucker @younesbelkada
### Information
- [ ] My own modified scripts
- [ ] The official example scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I do not have an example, but rather I found this when debugging another issue. Therefore I do not have a testcase and I am *assuming* that the following is an error.
```python
if (
(is_deepspeed_zero3_enabled() or is_fsdp_enabled) # <----- HERE
and torch.distributed.is_initialized()
and torch.distributed.get_rank() > 0
):
map_location = "meta"
else:
map_location = "cpu"
```
### Expected behavior
I expect `is_fsdp_enabled` is intended to be called rather than tested for being `None`.
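For clarity, the corrected condition presumably just adds the call parentheses to the snippet quoted above (mirroring the one-character fix in the linked PR):

```python
# Same check as above, but actually calling is_fsdp_enabled()
# instead of testing the function object for truthiness.
if (
    (is_deepspeed_zero3_enabled() or is_fsdp_enabled())
    and torch.distributed.is_initialized()
    and torch.distributed.get_rank() > 0
):
    map_location = "meta"
else:
    map_location = "cpu"
```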
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25584/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25584/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25583
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25583/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25583/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25583/events
|
https://github.com/huggingface/transformers/pull/25583
| 1,856,051,356 |
PR_kwDOCUB6oc5YNral
| 25,583 |
Fix typo in example code
|
{
"login": "ameliereymond",
"id": 33789687,
"node_id": "MDQ6VXNlcjMzNzg5Njg3",
"avatar_url": "https://avatars.githubusercontent.com/u/33789687?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ameliereymond",
"html_url": "https://github.com/ameliereymond",
"followers_url": "https://api.github.com/users/ameliereymond/followers",
"following_url": "https://api.github.com/users/ameliereymond/following{/other_user}",
"gists_url": "https://api.github.com/users/ameliereymond/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ameliereymond/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ameliereymond/subscriptions",
"organizations_url": "https://api.github.com/users/ameliereymond/orgs",
"repos_url": "https://api.github.com/users/ameliereymond/repos",
"events_url": "https://api.github.com/users/ameliereymond/events{/privacy}",
"received_events_url": "https://api.github.com/users/ameliereymond/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25583). All of your documentation changes will be reflected on that endpoint."
] | 1,692 | 1,692 | 1,692 |
CONTRIBUTOR
| null |
# What does this PR do?
Fixes a typo in the docs
`lang_code_to_id("en_XX")` => `lang_code_to_id["en_XX"]`
`lang_code_to_id` is a dict, so it has to be indexed rather than called.
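For context, a short sketch of where this attribute is typically used; the checkpoint name below is an assumption for illustration, not part of the doc fix:

```python
from transformers import MBart50TokenizerFast

# Hypothetical checkpoint — any MBart-50 tokenizer exposes lang_code_to_id.
tokenizer = MBart50TokenizerFast.from_pretrained("facebook/mbart-large-50-many-to-many-mmt")

# lang_code_to_id maps language codes to token ids, so it is indexed, not called.
en_id = tokenizer.lang_code_to_id["en_XX"]
print(en_id)
```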
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25583/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25583/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25583",
"html_url": "https://github.com/huggingface/transformers/pull/25583",
"diff_url": "https://github.com/huggingface/transformers/pull/25583.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25583.patch",
"merged_at": 1692338339000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25582
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25582/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25582/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25582/events
|
https://github.com/huggingface/transformers/issues/25582
| 1,855,979,620 |
I_kwDOCUB6oc5uoABk
| 25,582 |
RuntimeError: module must have its parameters and buffers on device cuda:0 (device_ids[0]) but found one of them on device: cpu
|
{
"login": "damugongzai",
"id": 43875460,
"node_id": "MDQ6VXNlcjQzODc1NDYw",
"avatar_url": "https://avatars.githubusercontent.com/u/43875460?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/damugongzai",
"html_url": "https://github.com/damugongzai",
"followers_url": "https://api.github.com/users/damugongzai/followers",
"following_url": "https://api.github.com/users/damugongzai/following{/other_user}",
"gists_url": "https://api.github.com/users/damugongzai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/damugongzai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/damugongzai/subscriptions",
"organizations_url": "https://api.github.com/users/damugongzai/orgs",
"repos_url": "https://api.github.com/users/damugongzai/repos",
"events_url": "https://api.github.com/users/damugongzai/events{/privacy}",
"received_events_url": "https://api.github.com/users/damugongzai/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Can you show us the content of `test.json`? Thanks\r\ncc @pacman100 ",
"Hello, please use distributed launcher like `torchrun` or `deepspeed` or `accelerate launch`. Above it is trying to run data parallel with DeepSpeed config which is incorrect.",
"Here is the test.json\r\n```\r\n{\r\n\r\n \"gradient_clipping\": 1.0,\r\n \"fp16\": {\r\n \"initial_scale_power\": 16,\r\n \"enabled\": true\r\n },\r\n \"quantize_training\": {\r\n \"enabled\": true,\r\n \"quantize_verbose\": true,\r\n \"quantizer_kernel\": true,\r\n \"quantize_type\": \"symmetric\",\r\n \"quantize_bits\": {\r\n \"start_bits\": 16,\r\n \"target_bits\": 8\r\n },\r\n \"quantize_schedule\": {\r\n \"quantize_period\": 10,\r\n \"schedule_offset\": 0\r\n },\r\n \"quantize_groups\": 8,\r\n \"fp16_mixed_quantize\": {\r\n \"enabled\": false,\r\n \"quantize_change_ratio\": 0.001\r\n },\r\n \"eigenvalue\": {\r\n \"enabled\": true,\r\n \"verbose\": true,\r\n \"max_iter\": 50,\r\n \"tol\": 1e-2,\r\n \"stability\": 0,\r\n \"gas_boundary_resolution\": 1,\r\n \"layer_name\": \"bert.encoder.layer\",\r\n \"layer_num\": 12\r\n }\r\n },\r\n \"zero_optimization\": {\r\n \"stage\": 2,\r\n \"offload_optimizer\": {\r\n \"device\": \"cpu\",\r\n \"pin_memory\": true\r\n },\r\n \"allgather_partitions\": true,\r\n \"allgather_bucket_size\": 2e8,\r\n \"overlap_comm\": true,\r\n \"reduce_scatter\": true,\r\n \"reduce_bucket_size\": 2e8,\r\n \"contiguous_gradients\": true\r\n }\r\n}\r\n\r\n```",
"@pacman100 Thanks. But this is the official demo code given by the deepspeed, can you give a correct command to use `torchrun `or `deepspeed` or `accelerate` launch",
"Here is the official code link referenced.\r\n<https://www.deepspeed.ai/tutorials/MoQ-tutorial/>",
"Please refer https://huggingface.co/docs/transformers/main_classes/deepspeed#deployment-with-multiple-gpus",
"Alternatively select a single GPU via `CUDA_VISIBLE_DEVICES=0` for working with the code link you shared",
"@pacman100 \r\nThank you for your patience! Now I successfully running the demo with\r\n `deepspeed --num_gpus=2 text-classification/run_glue.py --model_name_or_path bert-base-cased --task_name $TSK --do_train --do_eval --max_seq_length 128 --per_device_train_batch_size 32 --learning_rate 2e-5 --num_train_epochs 3 --output_dir ./tmp/$TSK/ --fp16 --warmup_steps 2 --deepspeed test.json --overwrite_output_dir`\r\nBut I find the output result and model size is not as expected. No matter how I change the config json, even setring the target bit to -1 as follows, the program still runs successfully.\r\n```\r\n{\r\n \"train_micro_batch_size_per_gpu\":\"auto\",\r\n \"gradient_clipping\": 1.0,\r\n \"fp16\": {\r\n \"initial_scale_power\": 16,\r\n \"enabled\": true\r\n },\r\n \"quantize_training\": {\r\n \"enabled\": true,\r\n \"quantize_verbose\": true,\r\n \"quantizer_kernel\": true,\r\n \"quantize-algo\": {\r\n \"q_type\": \"symmetric\"\r\n },\r\n \"quantize_bits\": {\r\n \"start_bits\": 16,\r\n \"target_bits\": -1\r\n },\r\n \"quantize_schedule\": {\r\n \"quantize_period\": 400,\r\n \"schedule_offset\": 0\r\n },\r\n \"quantize_groups\": 8\r\n },\r\n \"zero_optimization\": {\r\n \"stage\": 0\r\n },\r\n \"scheduler\": {\r\n \"type\": \"WarmupLR\",\r\n \"params\": {\r\n \"warmup_min_lr\": \"auto\",\r\n \"warmup_max_lr\": \"auto\",\r\n \"warmup_num_steps\": \"auto\"\r\n }\r\n }\r\n}\r\n```\r\nhere is the running result\r\n```\r\n***** train metrics *****\r\n epoch = 3.0\r\n train_loss = 0.4933\r\n train_runtime = 0:01:19.65\r\n train_samples = 3668\r\n train_samples_per_second = 138.14\r\n train_steps_per_second = 2.184\r\n08/18/2023 15:37:19 - INFO - __main__ - *** Evaluate ***\r\n[INFO|trainer.py:752] 2023-08-18 15:37:19,465 >> The following columns in the evaluation set don't have a corresponding argument in `BertForSequenceClassification.forward` and have been ignored: sentence2, sentence1, idx. If sentence2, sentence1, idx are not expected by `BertForSequenceClassification.forward`, you can safely ignore this message.\r\n[INFO|trainer.py:3110] 2023-08-18 15:37:19,471 >> ***** Running Evaluation *****\r\n[INFO|trainer.py:3112] 2023-08-18 15:37:19,471 >> Num examples = 408\r\n[INFO|trainer.py:3115] 2023-08-18 15:37:19,471 >> Batch size = 8\r\n100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 26/26 [00:00<00:00, 26.59it/s]\r\n***** eval metrics *****\r\n epoch = 3.0\r\n eval_accuracy = 0.8358\r\n eval_combined_score = 0.8604\r\n eval_f1 = 0.8851\r\n eval_loss = 0.3889\r\n eval_runtime = 0:00:01.02\r\n eval_samples = 408\r\n eval_samples_per_second = 399.22\r\n eval_steps_per_second = 25.44\r\n[2023-08-18 15:37:21,428] [INFO] [launch.py:347:main] Process 3023907 exits successfully.\r\n[2023-08-18 15:37:23,430] [INFO] [launch.py:347:main] Process 3023906 exits successfully.\r\n```\r\nAnd the model size is equal as the model which not use deepspeed MOQ algorithm like running the command by \r\n`python text-classification/run_glue.py \\\r\n --model_name_or_path bert-base-cased \\\r\n --task_name $TSK \\\r\n --do_train \\\r\n --do_eval \\\r\n --max_seq_length 128 \\\r\n --per_device_train_batch_size 32 \\\r\n --learning_rate 2e-5 \\\r\n --num_train_epochs 3 \\\r\n --output_dir /tmp/$TSK/ \\\r\n --fp16 \\\r\n --warmup_steps 2`\r\nTherefore I doubt it really uses the MOQ algorithm",
"Hello, can you raise the issue with the DeepSpeed team as this is not a problem with integration?",
"@pacman100 OK,thanks"
] | 1,692 | 1,692 | 1,692 |
NONE
| null |
### System Info
# System Info
python: 3.10
transformers: 4.32.0.dev0
deepspeed: 0.10.0
GPUs: 8 x titanxp
# Reproduction
Running the demo that uses the DeepSpeed MoQ algorithm:
`python text-classification/run_glue.py --model_name_or_path bert-base-cased --task_name $TSK --do_train --do_eval --max_seq_length 128 --per_device_train_batch_size 32 --learning_rate 2e-5 --num_train_epochs 3 --output_dir ./tmp/$TSK/ --fp16 --warmup_steps 2 --deepspeed test.json`
hits the following error:
```
Traceback (most recent call last):
File "/mnt/cephfs/home/zhengzekang/python/project/transformers/examples/pytorch/text-classification/run_glue.py", line 648, in <module>
main()
File "/mnt/cephfs/home/zhengzekang/python/project/transformers/examples/pytorch/text-classification/run_glue.py", line 556, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/mnt/cephfs/home/zhengzekang/anaconda3/envs/deepspeed/lib/python3.10/site-packages/transformers/trainer.py", line 1546, in train
return inner_training_loop(
File "/mnt/cephfs/home/zhengzekang/anaconda3/envs/deepspeed/lib/python3.10/site-packages/transformers/trainer.py", line 1830, in _inner_training_loop
tr_loss_step = self.training_step(model, inputs)
File "/mnt/cephfs/home/zhengzekang/anaconda3/envs/deepspeed/lib/python3.10/site-packages/transformers/trainer.py", line 2676, in training_step
loss = self.compute_loss(model, inputs)
File "/mnt/cephfs/home/zhengzekang/anaconda3/envs/deepspeed/lib/python3.10/site-packages/transformers/trainer.py", line 2701, in compute_loss
outputs = model(**inputs)
File "/mnt/cephfs/home/zhengzekang/anaconda3/envs/deepspeed/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/mnt/cephfs/home/zhengzekang/anaconda3/envs/deepspeed/lib/python3.10/site-packages/torch/nn/parallel/data_parallel.py", line 154, in forward
raise RuntimeError("module must have its parameters and buffers "
RuntimeError: module must have its parameters and buffers on device cuda:0 (device_ids[0]) but found one of them on device: cpu
```
### Who can help?
_No response_
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
running the `python text-classification/run_glue.py --model_name_or_path bert-base-cased --task_name $TSK --do_train --do_eval --max_seq_length 128 --per_device_train_batch_size 32 --learning_rate 2e-5 --num_train_epochs 3 --output_dir ./tmp/$TSK/ --fp16 --warmup_steps 2 --deepspeed test.json`
### Expected behavior
Thank you for your help.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25582/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25582/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25581
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25581/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25581/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25581/events
|
https://github.com/huggingface/transformers/pull/25581
| 1,855,868,461 |
PR_kwDOCUB6oc5YNEnW
| 25,581 |
Skip warning if tracing with dynamo
|
{
"login": "angelayi",
"id": 10901756,
"node_id": "MDQ6VXNlcjEwOTAxNzU2",
"avatar_url": "https://avatars.githubusercontent.com/u/10901756?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/angelayi",
"html_url": "https://github.com/angelayi",
"followers_url": "https://api.github.com/users/angelayi/followers",
"following_url": "https://api.github.com/users/angelayi/following{/other_user}",
"gists_url": "https://api.github.com/users/angelayi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/angelayi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/angelayi/subscriptions",
"organizations_url": "https://api.github.com/users/angelayi/orgs",
"repos_url": "https://api.github.com/users/angelayi/repos",
"events_url": "https://api.github.com/users/angelayi/events{/privacy}",
"received_events_url": "https://api.github.com/users/angelayi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Thanks for fixing this (I assume this might have broken a workflow somewhere). \r\n\r\nThe compilation is failing, I think we need to check if torch dynamo is available first using is_torchdynamo_available()? If so, it also \"import torch._dynamo as dynamo\" and so should be able to use dynamo.is_compiling() afterwards.",
"cc @fxmarty ",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25581). All of your documentation changes will be reflected on that endpoint.",
"`_dynamo` is not visible in the `torch` namespace, so you need a `import torch._dynamo`. This PR breaks compatibility with torch<=1.13.1 though.",
"Thanks for the quick review! I updated the diff with checking to see if the `torch._dynamo` module is importable and if it's available. Does this fix the issue with breaking compatibility with torch<=1.13.1?",
"@fxmarty could I get a review on this PR?",
"Thanks.\r\n\r\nI figured I'd give my 2 cents even though I'm not a core reviewer. It'd probably be better to put the checking logic inside a helper method in the import_utils, next to is_torchdynamo_available():\r\nhttps://github.com/huggingface/transformers/blob/main/src/transformers/utils/import_utils.py\r\n\r\nThere are other locations with logic to detect tracing, and the helper method will help us incorporate dynamo tracing to those locations as well.",
"@ArthurZucker @fxmarty do you know who else would be free to review this? ",
"@fxmarty thanks for the review! I noticed there's a \"1 workflow awaiting approval\" -- do you mind approving this too? ",
"@ArthurZucker Updated with a test that checks if torch compiling with dynamic shapes will run successfully and not result in a graph break.",
"Thanks for the review! I updated the test w/ a better name. Dumb question..how do I merge this 😅 ",
"I'll merge for you 😉 "
] | 1,692 | 1,694 | 1,694 |
CONTRIBUTOR
| null |
# What does this PR do?
Adds an additional condition to skip the missing-attention-mask warning when tracing. Currently the code checks whether we're torch.jit.trace-ing or torch.fx.symbolic_trace-ing; this adds a check for whether we're tracing with TorchDynamo.
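For context, a minimal sketch of the kind of guard being added (the helper name is illustrative, not the actual code in this PR), assuming `torch._dynamo.is_compiling()` is available in the installed torch:

```python
def is_tracing_with_dynamo() -> bool:
    # torch._dynamo does not exist in torch <= 1.13, so guard the import.
    try:
        import torch._dynamo as dynamo
    except ImportError:
        return False
    return dynamo.is_compiling()
```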
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
Tagging @hackyon who added this check previously
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25581/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25581/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25581",
"html_url": "https://github.com/huggingface/transformers/pull/25581",
"diff_url": "https://github.com/huggingface/transformers/pull/25581.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25581.patch",
"merged_at": 1694200414000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25580
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25580/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25580/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25580/events
|
https://github.com/huggingface/transformers/issues/25580
| 1,855,852,572 |
I_kwDOCUB6oc5unhAc
| 25,580 |
When using FSDP, Trainer initializes the optimizer in the wrong order, resulting in a huge waste of GPU memory
|
{
"login": "yundai424",
"id": 43726198,
"node_id": "MDQ6VXNlcjQzNzI2MTk4",
"avatar_url": "https://avatars.githubusercontent.com/u/43726198?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yundai424",
"html_url": "https://github.com/yundai424",
"followers_url": "https://api.github.com/users/yundai424/followers",
"following_url": "https://api.github.com/users/yundai424/following{/other_user}",
"gists_url": "https://api.github.com/users/yundai424/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yundai424/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yundai424/subscriptions",
"organizations_url": "https://api.github.com/users/yundai424/orgs",
"repos_url": "https://api.github.com/users/yundai424/repos",
"events_url": "https://api.github.com/users/yundai424/events{/privacy}",
"received_events_url": "https://api.github.com/users/yundai424/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"CC @pacman100 ",
"This has already been fixed. Please use the main branch of Transformers.",
"thanks @pacman100 i found the PR that fixed this. Closing this issue"
] | 1,692 | 1,692 | 1,692 |
CONTRIBUTOR
| null |
### System Info
- `transformers` version: 4.31.0
- Platform: Linux-5.15.111.1-rolling-lts-linkedin-x86_64-with-glibc2.17
- Python version: 3.10.2
- Huggingface_hub version: 0.14.1
- Safetensors version: 0.3.1
- Accelerate version: 0.21.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.0a0+gitf998869 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: 8 A100 GPUs
- Using distributed or parallel set-up in script?: Nah just `torchrun`
### Who can help?
trainer: @sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
**Summary**:
When using FSDP (non-XLA), the Trainer first creates the optimizer using the original full model (without being wrapped into FSDP) [here](https://github.com/huggingface/transformers/blob/v4.31.0/src/transformers/trainer.py#L1647), and then wraps the model with FSDP [here](https://github.com/huggingface/transformers/blob/v4.31.0/src/transformers/trainer.py#L1656)... This results in a **huge waste of GPU memory, as there will be a ton of dangling, useless optimizer states associated with model params that don't belong to this shard**. It should instead first create the model, wrap it with FSDP, and initialize the optimizer from the FSDP-wrapped model.
I noticed this issue when finetuning llama2-7b model using alpaca dataset. Here is the colab that has the code example for reproduction (The colab can't be executed directly out of the box; Just a way to put all the code into a sharable link): https://colab.research.google.com/drive/1Rj9jDuOjmnZUiEUZwEmLHoL03pGLqv7V#scrollTo=eFfSsG756poB
Another indication is that you'll see a warning thrown by `accelerate`:
>FSDP Warning: When using FSDP, it is efficient and recommended to call prepare for the model before creating the optimizer
And `accelerate` also notes down this caveat and suggests the same thing https://huggingface.co/docs/accelerate/usage_guides/fsdp
### Expected behavior
It should instead first create the model, then wrap it with FSDP, and finally initialize the optimizer from the FSDP-wrapped model. I modified the source code of `transformers` a bit to make it work as expected (don't initialize the optimizer until the model is wrapped by `accelerate` with FSDP), and I noticed a reduction in GPU memory usage of **25Gi** even before any training step begins.
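A minimal sketch of the ordering described above (function and variable names are placeholders, not the Trainer's actual code):

```python
import torch
import torch.distributed as dist
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP


def build_model_and_optimizer(model: torch.nn.Module):
    # Wrap with FSDP *before* creating the optimizer so the optimizer only sees
    # this rank's shard of the parameters. Creating the optimizer first allocates
    # states for the full, unsharded model, which is the memory waste described here.
    assert dist.is_initialized(), "FSDP requires an initialized process group"
    model = FSDP(model)
    optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
    return model, optimizer
```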
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25580/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25580/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25579
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25579/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25579/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25579/events
|
https://github.com/huggingface/transformers/pull/25579
| 1,855,758,160 |
PR_kwDOCUB6oc5YMsp1
| 25,579 |
Ported Dinov2 to flax
|
{
"login": "ifeherva",
"id": 3716849,
"node_id": "MDQ6VXNlcjM3MTY4NDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3716849?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ifeherva",
"html_url": "https://github.com/ifeherva",
"followers_url": "https://api.github.com/users/ifeherva/followers",
"following_url": "https://api.github.com/users/ifeherva/following{/other_user}",
"gists_url": "https://api.github.com/users/ifeherva/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ifeherva/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ifeherva/subscriptions",
"organizations_url": "https://api.github.com/users/ifeherva/orgs",
"repos_url": "https://api.github.com/users/ifeherva/repos",
"events_url": "https://api.github.com/users/ifeherva/events{/privacy}",
"received_events_url": "https://api.github.com/users/ifeherva/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25579). All of your documentation changes will be reflected on that endpoint.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Hey @ifeherva! Given you made a great start on this PR, it would be super nice if you were able to see it to completion! On hand to help with any questions/queries 🤗 Of course if you are busy, there's no pressure to finish this. In this case, we can open it up to the community to see if anyone is able to finish the integration so that this work is merged into main"
] | 1,692 | 1,698 | 1,697 |
NONE
| null |
# Ported the Dinov2 model to jax/flax
This PR adds the dinov2 model in flax. It is based on the vit flax port but uses the existing pytorch dinov2 as base.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [X] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [X] Did you write any new necessary tests?
## Who can review?
@sanchit-gandhi @amyeroberts
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25579/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25579/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25579",
"html_url": "https://github.com/huggingface/transformers/pull/25579",
"diff_url": "https://github.com/huggingface/transformers/pull/25579.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25579.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/25578
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25578/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25578/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25578/events
|
https://github.com/huggingface/transformers/pull/25578
| 1,855,609,045 |
PR_kwDOCUB6oc5YMMkZ
| 25,578 |
Support specifying revision in push_to_hub
|
{
"login": "jmif",
"id": 1000442,
"node_id": "MDQ6VXNlcjEwMDA0NDI=",
"avatar_url": "https://avatars.githubusercontent.com/u/1000442?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmif",
"html_url": "https://github.com/jmif",
"followers_url": "https://api.github.com/users/jmif/followers",
"following_url": "https://api.github.com/users/jmif/following{/other_user}",
"gists_url": "https://api.github.com/users/jmif/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmif/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmif/subscriptions",
"organizations_url": "https://api.github.com/users/jmif/orgs",
"repos_url": "https://api.github.com/users/jmif/repos",
"events_url": "https://api.github.com/users/jmif/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmif/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @sgugger ",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25578). All of your documentation changes will be reflected on that endpoint.",
"Thanks again!",
"@jmif what is the syntax for using the revision?\r\n\r\nI tried:\r\n```\r\nmodel.push_to_hub(adapter_model_name, revision=\"GPTQ\")\r\n```\r\nbut that just pushed to main.",
"@RonanKMcGovern could you open a new issue with the output of `transformers-cli env` and a snippet for reproduction? "
] | 1,692 | 1,692 | 1,692 |
CONTRIBUTOR
| null |
Resolves https://github.com/huggingface/transformers/issues/22867 by adding revision to `create_commit` call and proactively creating the branch before committing.
Change hasn't been discussed or approved in an issue AFAIK. Haven't written tests but happy to once the general approach is approved. I've tested manually and the changes work.
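For reference, the intended usage after this change is roughly the following (repo and branch names are made up):

```python
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased")
# Push to a branch other than main; the branch is created first if it does not exist.
model.push_to_hub("my-username/my-finetuned-bert", revision="experiment-1")
```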
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25578/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25578/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25578",
"html_url": "https://github.com/huggingface/transformers/pull/25578",
"diff_url": "https://github.com/huggingface/transformers/pull/25578.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25578.patch",
"merged_at": 1692683736000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25577
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25577/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25577/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25577/events
|
https://github.com/huggingface/transformers/issues/25577
| 1,855,374,084 |
I_kwDOCUB6oc5ulsME
| 25,577 |
more and more difficult to use
|
{
"login": "Misoknisky",
"id": 12208899,
"node_id": "MDQ6VXNlcjEyMjA4ODk5",
"avatar_url": "https://avatars.githubusercontent.com/u/12208899?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Misoknisky",
"html_url": "https://github.com/Misoknisky",
"followers_url": "https://api.github.com/users/Misoknisky/followers",
"following_url": "https://api.github.com/users/Misoknisky/following{/other_user}",
"gists_url": "https://api.github.com/users/Misoknisky/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Misoknisky/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Misoknisky/subscriptions",
"organizations_url": "https://api.github.com/users/Misoknisky/orgs",
"repos_url": "https://api.github.com/users/Misoknisky/repos",
"events_url": "https://api.github.com/users/Misoknisky/events{/privacy}",
"received_events_url": "https://api.github.com/users/Misoknisky/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hey! Thanks for providing feedback, though this is the `transformers` repository, `datasets` has its own repo [here](https://github.com/huggingface/datasets) if you want to give them your thoughts! "
] | 1,692 | 1,692 | 1,692 |
NONE
| null |
### System Info
Hugging Face is going the same way as TensorFlow: version control is getting worse and worse. In particular, the versions of the different packages need to be strictly matched with each other, which is more troublesome than TensorFlow!
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Extremely bad experience! When I used the datasets library to load the GLUE dataset, I found that it triggered a chain reaction of version conflicts with hub, evaluate, pandas, etc. The data formats provided by the different hubs and other libraries are not compatible, and it is a real headache to adapt them.
### Expected behavior
Extremely bad experience! When I used the datasets library to load the GLUE dataset, I found that it triggered a chain reaction of version conflicts with hub, evaluate, pandas, etc. The data formats provided by the different hubs and other libraries are not compatible, and it is a real headache to adapt them.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25577/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25577/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25576
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25576/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25576/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25576/events
|
https://github.com/huggingface/transformers/issues/25576
| 1,855,339,172 |
I_kwDOCUB6oc5uljqk
| 25,576 |
How can I make a PR for AutoTokenizer to support RWKV World
|
{
"login": "xiaol",
"id": 1669515,
"node_id": "MDQ6VXNlcjE2Njk1MTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/1669515?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xiaol",
"html_url": "https://github.com/xiaol",
"followers_url": "https://api.github.com/users/xiaol/followers",
"following_url": "https://api.github.com/users/xiaol/following{/other_user}",
"gists_url": "https://api.github.com/users/xiaol/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xiaol/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xiaol/subscriptions",
"organizations_url": "https://api.github.com/users/xiaol/orgs",
"repos_url": "https://api.github.com/users/xiaol/repos",
"events_url": "https://api.github.com/users/xiaol/events{/privacy}",
"received_events_url": "https://api.github.com/users/xiaol/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hey! This kind of question should be ask on [the forum](https://discuss.huggingface.co/).\r\nHowever here are two different path: \r\n- Use the `GPTNeoXTokenizerFast`, see [here](https://huggingface.co/RWKV/rwkv-4-169m-pile/blob/main/tokenizer_config.json)\r\n- Host your tokenizer code on the hub using https://huggingface.co/docs/transformers/custom_models\r\n\r\nI don't really know if the tokenizer needed specific steps for conversion cc @sgugger ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,692 | 1,695 | 1,695 |
NONE
| null |
### Feature request
Usually we use our own tokenizer with the transformers pipeline,
like this https://github.com/xiaol/Huggingface-RWKV-World/blob/fca236afd5f2815b0dbe6c7ce3c92e51526e2e14/generate_hf_cfg.py#L79C1-L79C1
So far we have a lot of models using the new tokenizer, so using the pipeline with AutoTokenizer is critically needed.
How can I add the new tokenizer to AutoTokenizer to make this pipeline work smoothly?
Thank you.
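One way to do this without a transformers PR is to host the tokenizer code on the Hub as custom code; a rough sketch of that route (the class name and repo are hypothetical):

```python
from transformers import AutoTokenizer, PreTrainedTokenizer


class RwkvWorldTokenizer(PreTrainedTokenizer):
    """Hypothetical custom tokenizer; the real implementation would live in the Hub repo."""
    ...


# Register the class so AutoTokenizer can resolve it from the custom code pushed to the Hub.
RwkvWorldTokenizer.register_for_auto_class("AutoTokenizer")

# Users would then load it with trust_remote_code enabled:
tokenizer = AutoTokenizer.from_pretrained("your-username/rwkv-world", trust_remote_code=True)
```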
### Motivation
1. Make it possible for everyone to use RWKV World smoothly; RWKV v5 World is coming.
2. Support the Hugging Face community with these models and make open source more open.
3. I really don't like that LLaMA models are always on top of the open LLM leaderboards.
4. more...
### Your contribution
I have made a lot of models based on RWKV-4 World (https://huggingface.co/xiaol), especially 128k-context models.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25576/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25576/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25575
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25575/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25575/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25575/events
|
https://github.com/huggingface/transformers/pull/25575
| 1,855,267,870 |
PR_kwDOCUB6oc5YLBu0
| 25,575 |
add warning for 8bit optimizers
|
{
"login": "SunMarc",
"id": 57196510,
"node_id": "MDQ6VXNlcjU3MTk2NTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/57196510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SunMarc",
"html_url": "https://github.com/SunMarc",
"followers_url": "https://api.github.com/users/SunMarc/followers",
"following_url": "https://api.github.com/users/SunMarc/following{/other_user}",
"gists_url": "https://api.github.com/users/SunMarc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SunMarc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SunMarc/subscriptions",
"organizations_url": "https://api.github.com/users/SunMarc/orgs",
"repos_url": "https://api.github.com/users/SunMarc/repos",
"events_url": "https://api.github.com/users/SunMarc/events{/privacy}",
"received_events_url": "https://api.github.com/users/SunMarc/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,692 | 1,692 | 1,692 |
MEMBER
| null |
# What does this PR do ?
This PR adds a log when 8-bit optimizers are used with a version of bitsandbytes < 0.41.1. We do that because a major bug was fixed for 8-bit optimizers as reported by [Tim](https://twitter.com/Tim_Dettmers/status/1687458541643390976).
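A rough sketch of the kind of check involved (the exact message and its placement in the Trainer may differ):

```python
import importlib.metadata

from packaging import version


def warn_if_bnb_8bit_optimizer_is_outdated(optim_name: str):
    # The major 8-bit optimizer bug was fixed in bitsandbytes 0.41.1.
    if "8bit" not in optim_name and "bnb" not in optim_name:
        return
    installed = version.parse(importlib.metadata.version("bitsandbytes"))
    if installed < version.parse("0.41.1"):
        print(
            f"You are using 8-bit optimizers with bitsandbytes {installed}; "
            "consider upgrading to >= 0.41.1 to pick up an important bug fix."
        )
```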
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25575/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25575/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25575",
"html_url": "https://github.com/huggingface/transformers/pull/25575",
"diff_url": "https://github.com/huggingface/transformers/pull/25575.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25575.patch",
"merged_at": 1692298138000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25574
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25574/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25574/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25574/events
|
https://github.com/huggingface/transformers/pull/25574
| 1,855,254,056 |
PR_kwDOCUB6oc5YK-uX
| 25,574 |
Skip `test_contrastive_generate` for `TFXLNet`
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"For the record, it starts to fail on commit `a6c850e4` (#20994) but with a different error `TypeError: prepare_inputs_for_generation() got multiple values for keyword argument 'use_cache'`. Then in #21149, we get the current error."
] | 1,692 | 1,692 | 1,692 |
COLLABORATOR
| null |
# What does this PR do?
This model has a special cache mechanism, and `test_contrastive_generate` has been failing for more than 8 months. When I remove the argument `penalty_alpha` from the kwargs, it is able to generate something.
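For context, skipping the test is typically just an override with a skip decorator, roughly like this (class name and reason text are illustrative):

```python
import unittest


class TFXLNetGenerationTest(unittest.TestCase):
    @unittest.skip("XLNet's `mems` cache is not compatible with contrastive search")
    def test_contrastive_generate(self):
        pass
```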
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25574/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25574/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25574",
"html_url": "https://github.com/huggingface/transformers/pull/25574",
"diff_url": "https://github.com/huggingface/transformers/pull/25574.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25574.patch",
"merged_at": 1692291395000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25573
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25573/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25573/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25573/events
|
https://github.com/huggingface/transformers/pull/25573
| 1,855,225,480 |
PR_kwDOCUB6oc5YK4cw
| 25,573 |
Revert "change version (#25387)"
|
{
"login": "SunMarc",
"id": 57196510,
"node_id": "MDQ6VXNlcjU3MTk2NTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/57196510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SunMarc",
"html_url": "https://github.com/SunMarc",
"followers_url": "https://api.github.com/users/SunMarc/followers",
"following_url": "https://api.github.com/users/SunMarc/following{/other_user}",
"gists_url": "https://api.github.com/users/SunMarc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SunMarc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SunMarc/subscriptions",
"organizations_url": "https://api.github.com/users/SunMarc/orgs",
"repos_url": "https://api.github.com/users/SunMarc/repos",
"events_url": "https://api.github.com/users/SunMarc/events{/privacy}",
"received_events_url": "https://api.github.com/users/SunMarc/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,692 | 1,692 | 1,692 |
MEMBER
| null |
# What does this PR do?
This PR reverts #25387. As @younesbelkada pointed [out](https://github.com/huggingface/transformers/pull/25387#pullrequestreview-1582170053), the training still works as expected if we don't use 8-bit optimizers. I will propose a fix in a follow-up PR.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25573/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25573/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25573",
"html_url": "https://github.com/huggingface/transformers/pull/25573",
"diff_url": "https://github.com/huggingface/transformers/pull/25573.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25573.patch",
"merged_at": 1692287042000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25572
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25572/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25572/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25572/events
|
https://github.com/huggingface/transformers/issues/25572
| 1,855,222,332 |
I_kwDOCUB6oc5ulHI8
| 25,572 |
Does LoRA cause a memory leak in transformers?
|
{
"login": "Randolph-zeng",
"id": 11933185,
"node_id": "MDQ6VXNlcjExOTMzMTg1",
"avatar_url": "https://avatars.githubusercontent.com/u/11933185?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Randolph-zeng",
"html_url": "https://github.com/Randolph-zeng",
"followers_url": "https://api.github.com/users/Randolph-zeng/followers",
"following_url": "https://api.github.com/users/Randolph-zeng/following{/other_user}",
"gists_url": "https://api.github.com/users/Randolph-zeng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Randolph-zeng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Randolph-zeng/subscriptions",
"organizations_url": "https://api.github.com/users/Randolph-zeng/orgs",
"repos_url": "https://api.github.com/users/Randolph-zeng/repos",
"events_url": "https://api.github.com/users/Randolph-zeng/events{/privacy}",
"received_events_url": "https://api.github.com/users/Randolph-zeng/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @Randolph-zeng \r\nThanks for the issue, to clarify, do you get 13GB with load_in_8bit or without it?",
"hi @younesbelkada , the 13GB is with load_in_8bit on , and 51GB off. Is this an expected behavior or I miscalculated the math :(",
"@younesbelkada To add more to the picture, I actually took a long route to debug this. At first I thought it is the lora did not loaded correctly and the whole model was loaded and marked as trainable. However, the lora did take effect and the trainable params are indeed around 20 millions. When I use a debugger and torch.profiler, I found that there seems to be a memory leak and the memory consumption increases abnormally everytime it passed the attention layer (20millions of total trainable params across ALL layers should incurs additional 150+MBs of memory usage for EVERY layer). I thought this was a lora issue at the begininig but later I found out the load_in_8bit causes a big difference in the memory usage and thus posted the questions here instead. The lora side confirms to me that the lora module does not have any memory leak ...(https://github.com/huggingface/peft/issues/832#issuecomment-1682356876)",
"@younesbelkada I just found the same problem of memory leak using lora + llama2-7b",
"@younesbelkada @pacman100 @ArthurZucker The memory leak seems to be a serious issue. Can we get some insights on what is happening under the hood or maybe some hotfix we can apply by ourselves in the meantime ? Any help will be greatly appreciated ! ",
"@Randolph-zeng Thanks for your interest! I am building a multi-modal LLM with LLaMA-2 and LoRA; thus, my setting may differ slightly. Yesterday, I found that each time my code ran past the following forward function, the GPU memory increased by ~200MB, finally leading to CUDA OOM.\r\n```\r\noutputs = self.llama_model(\r\n inputs_embeds=inputs_embeds,\r\n attention_mask=attention_mask,\r\n return_dict=True,\r\n labels=target_ids,\r\n )\r\n```\r\nAfter debugging, I tried to detach and put all resulting tensors to the CPU and then clean the cuda empty cache, which solved this problem. Note that all these discussed above are during my model inference instead of training. (Yes, my model also need to call forward() instead of generate() during inference)\r\n\r\nHowever, the `load_in_8bit` seems still has errors. the forward() will return nan if `load_in_8bit` is `True`. So I just turn this option off temporarily.",
"Hi @zhuole1025 ! Thanks for your response and solution. And yes, I did observe similar increase pattern of memory usage when calling the forward, which is why I suspect there is some memory leak. \r\nLoad in 8 bit does solve the memory leak issue in my case which I think might due to the different settings of loading models and keeping reference of related variables. I think by manually detaching and clearing cache as you do should also resolve the issue. However, in my training case and my current lib versions, simply load in 8bit already does the work :)\r\nI think this is actually a serious issue that worth the attention of the HF team, hope they can have some spare time to look at this .",
"We have also experienced memory leaks in TRL that we have managed to solve by calling \r\n\r\n```python\r\nimport gc\r\n\r\ngc.collect()\r\ntorch.cuda.empty_cache()\r\ngc.collect()\r\n\r\n```\r\nAfter each training step: https://github.com/huggingface/trl/blob/main/trl/core.py#L266-L268\r\n\r\nBut not sure if it is related to PEFT + LoRA",
"Hello @Randolph-zeng,\r\n\r\nThere is no memory leakage. Let's take things step by step.\r\n\r\nWhen using PEFT with INT8/INT4 Quantization, by default gradient checkpointing is enabled. \r\n\r\nWhen using PEFT alone, let's put some numbers on the expected GPU memory usage:\r\n1. Model in FP32 would take 4bytes per params*7B params = 28GB.\r\n2. Let's add the space by LoRA params = 4bytes per params*0.022B params = 0.088GB\r\n3. Activations take up a huge GPU memory for larger batches and longer sequences. Refer paper [Reducing Activation Recomputation in Large Transformer Models](https://arxiv.org/pdf/2205.05198.pdf) for approximate derivation of memory occupied by activations. Below is the screenshot of the relevant snippet showing activation memory per layer:\r\n. \r\nThe above formula considers half-precision, so we need to approx double it for FP32. Applying it to your code leveraging micro_batch_size=4, hidden_dimension= 4096, sequence_length=256, num_layers=32, num_attention_heads=32 and adjusting the above formula to consider the activations of LoRA layers which takes `2(4sb(r+h)+sbh)` where lora_rank=8: it will be:\r\n```\r\n(2*(256*4*4096(34+((5*32*256)/4096)))+2*(256*4*4(8+4096)+256*4*4096)) * 32 layers = 12.25GB\r\n```\r\n4. Optimizer state and gradients: \r\nGradients in FP32 will take: 0.088GB same as the lora params\r\nOptimizer state: 0.176GB\r\n5. Total approx GPU memory usage: 28+0.088+12.25+0.088+0.176 = 40.602 GB\r\n6. Code: https://github.com/pacman100/DHS-LLM-Workshop/tree/main/personal_copilot/training. Running on single A100 80Gb GPU with the below command:\r\n```\r\npython train.py \\\r\n --model_path \"meta-llama/Llama-2-7b-chat-hf\" \\\r\n--dataset_name \"smangrul/hf-stack-v1\" \\\r\n--subset \"data\" \\\r\n--data_column \"content\" \\\r\n--split \"train\" \\\r\n--seq_length 256 \\\r\n--max_steps 2000 \\\r\n --batch_size 4 \\\r\n--gradient_accumulation_steps 1 \\\r\n--learning_rate 5e-5 \\\r\n--lr_scheduler_type \"cosine\" \\\r\n--weight_decay 0.01 \\\r\n--num_warmup_steps 30 \\\r\n--eval_freq 100 \\\r\n--save_freq 500 \\\r\n--log_freq 25 \\\r\n--num_workers 4 \\\r\n--no_fp16 \\\r\n--output_dir \"delete-me-llama7b-personal-copilot-A100-80GB\" \\\r\n--fim_rate 0.5 \\\r\n --fim_spm_rate 0.5 \\\r\n--use_peft_lora \\\r\n--lora_r 8 \\\r\n --lora_alpha 32 \\\r\n--lora_dropout 0.1 \\\r\n--lora_target_modules \"q_proj,v_proj\" \\\r\n--no_gradient_checkpointing\r\n```\r\n\r\noutput logs:\r\n```\r\nSize of the train set: 5875. 
Size of the validation set: 30\r\n100%|████████████████████████████████████████████████████████████████████████████████████████| 400/400 [00:10<00:00, 37.52it/s]\r\nThe character to token ratio of the dataset is: 2.88\r\nFIM is not supported by tokenizer, disabling FIM\r\nFIM is not supported by tokenizer, disabling FIM\r\nLoading the model\r\nDownloading shards: 100%|████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 9.91it/s]\r\nLoading checkpoint shards: 100%|█████████████████████████████████████████████████████████████████| 2/2 [00:03<00:00, 1.65s/it]\r\nDownloading (…)neration_config.json: 100%|████████████████████████████████████████████████████| 188/188 [00:00<00:00, 1.40MB/s]\r\nPeftModelForCausalLM(\r\n (base_model): LoraModel(\r\n (model): LlamaForCausalLM(\r\n (model): LlamaModel(\r\n (embed_tokens): Embedding(32000, 4096)\r\n (layers): ModuleList(\r\n (0-31): 32 x LlamaDecoderLayer(\r\n (self_attn): LlamaAttention(\r\n (q_proj): Linear(\r\n in_features=4096, out_features=4096, bias=False\r\n (lora_dropout): ModuleDict(\r\n (default): Dropout(p=0.1, inplace=False)\r\n )\r\n (lora_A): ModuleDict(\r\n (default): Linear(in_features=4096, out_features=8, bias=False)\r\n )\r\n (lora_B): ModuleDict(\r\n (default): Linear(in_features=8, out_features=4096, bias=False)\r\n )\r\n (lora_embedding_A): ParameterDict()\r\n (lora_embedding_B): ParameterDict()\r\n )\r\n (k_proj): Linear(in_features=4096, out_features=4096, bias=False)\r\n (v_proj): Linear(\r\n in_features=4096, out_features=4096, bias=False\r\n (lora_dropout): ModuleDict(\r\n (default): Dropout(p=0.1, inplace=False)\r\n )\r\n (lora_A): ModuleDict(\r\n (default): Linear(in_features=4096, out_features=8, bias=False)\r\n )\r\n (lora_B): ModuleDict(\r\n (default): Linear(in_features=8, out_features=4096, bias=False)\r\n )\r\n (lora_embedding_A): ParameterDict()\r\n (lora_embedding_B): ParameterDict()\r\n )\r\n (o_proj): Linear(in_features=4096, out_features=4096, bias=False)\r\n (rotary_emb): LlamaRotaryEmbedding()\r\n )\r\n (mlp): LlamaMLP(\r\n (gate_proj): Linear(in_features=4096, out_features=11008, bias=False)\r\n (up_proj): Linear(in_features=4096, out_features=11008, bias=False)\r\n (down_proj): Linear(in_features=11008, out_features=4096, bias=False)\r\n (act_fn): SiLUActivation()\r\n )\r\n (input_layernorm): LlamaRMSNorm()\r\n (post_attention_layernorm): LlamaRMSNorm()\r\n )\r\n )\r\n (norm): LlamaRMSNorm()\r\n )\r\n (lm_head): Linear(in_features=4096, out_features=32000, bias=False)\r\n )\r\n )\r\n)\r\nStarting main loop\r\nUsing the `WANDB_DISABLED` environment variable is deprecated and will be removed in v5. 
Use the --report_to flag to control the integrations used for logging result (for instance --report_to none).\r\nCloning https://huggingface.co/smangrul/delete-me-llama7b-personal-copilot-A100-80GB into local empty directory.\r\nTraining...\r\n{'loss': 1.5655, 'learning_rate': 4.166666666666667e-05, 'epoch': 0.01} \r\n{'loss': 1.4344, 'learning_rate': 4.998728546500082e-05, 'epoch': 0.03} \r\n{'loss': 1.3379, 'learning_rate': 4.993565483078743e-05, 'epoch': 0.04} \r\n{'loss': 1.2521, 'learning_rate': 4.984439542929117e-05, 'epoch': 0.05} \r\n{'eval_loss': 1.3657349348068237, 'eval_runtime': 101.306, 'eval_samples_per_second': 4.6, 'eval_steps_per_second': 1.155, 'epoch': 0.05}\r\n{'loss': 1.27, 'learning_rate': 4.971365229370284e-05, 'epoch': 0.06} \r\n{'loss': 1.2335, 'learning_rate': 4.9543633206385834e-05, 'epoch': 0.07} \r\n 9%|███████▌ | 174/2000 [06:47<53:44, 1.77s/it]\r\n```\r\n\r\nMemory usage:\r\n\r\n\r\nSo, it is around **40GB** as per our calculations above. \r\n\r\nNow, enable gradient checkpoining to decrease the activations VRAM with below command:\r\n```\r\npython train.py \\\r\n --model_path \"meta-llama/Llama-2-7b-chat-hf\" \\\r\n--dataset_name \"smangrul/hf-stack-v1\" \\\r\n--subset \"data\" \\\r\n--data_column \"content\" \\\r\n--split \"train\" \\\r\n--seq_length 256 \\\r\n--max_steps 2000 \\\r\n --batch_size 4 \\\r\n--gradient_accumulation_steps 1 \\\r\n--learning_rate 5e-5 \\\r\n--lr_scheduler_type \"cosine\" \\\r\n--weight_decay 0.01 \\\r\n--num_warmup_steps 30 \\\r\n--eval_freq 100 \\\r\n--save_freq 500 \\\r\n--log_freq 25 \\\r\n--num_workers 4 \\\r\n--no_fp16 \\\r\n--output_dir \"delete-me-llama7b-personal-copilot-A100-80GB\" \\\r\n--fim_rate 0.5 \\\r\n --fim_spm_rate 0.5 \\\r\n--use_peft_lora \\\r\n--lora_r 8 \\\r\n --lora_alpha 32 \\\r\n--lora_dropout 0.1 \\\r\n--lora_target_modules \"q_proj,v_proj\" \r\n```\r\n\r\nMemory usage:\r\n\r\n\r\nWe can see that GPU memory usage reduced by ~**12GB** (40-28)\r\n\r\nBut as per the output logs, the training time increased **1.5X** due to recomputation during backward passes:\r\n```\r\nSize of the train set: 5875. 
Size of the validation set: 30\r\n100%|████████████████████████████████████████████████████████████████████████████████████████| 400/400 [00:10<00:00, 37.54it/s]\r\nThe character to token ratio of the dataset is: 2.88\r\nFIM is not supported by tokenizer, disabling FIM\r\nFIM is not supported by tokenizer, disabling FIM\r\nLoading the model\r\nLoading checkpoint shards: 100%|█████████████████████████████████████████████████████████████████| 2/2 [00:01<00:00, 1.25it/s]\r\nPeftModelForCausalLM(\r\n (base_model): LoraModel(\r\n (model): LlamaForCausalLM(\r\n (model): LlamaModel(\r\n (embed_tokens): Embedding(32000, 4096)\r\n (layers): ModuleList(\r\n (0-31): 32 x LlamaDecoderLayer(\r\n (self_attn): LlamaAttention(\r\n (q_proj): Linear(\r\n in_features=4096, out_features=4096, bias=False\r\n (lora_dropout): ModuleDict(\r\n (default): Dropout(p=0.1, inplace=False)\r\n )\r\n (lora_A): ModuleDict(\r\n (default): Linear(in_features=4096, out_features=8, bias=False)\r\n )\r\n (lora_B): ModuleDict(\r\n (default): Linear(in_features=8, out_features=4096, bias=False)\r\n )\r\n (lora_embedding_A): ParameterDict()\r\n (lora_embedding_B): ParameterDict()\r\n )\r\n (k_proj): Linear(in_features=4096, out_features=4096, bias=False)\r\n (v_proj): Linear(\r\n in_features=4096, out_features=4096, bias=False\r\n (lora_dropout): ModuleDict(\r\n (default): Dropout(p=0.1, inplace=False)\r\n )\r\n (lora_A): ModuleDict(\r\n (default): Linear(in_features=4096, out_features=8, bias=False)\r\n )\r\n (lora_B): ModuleDict(\r\n (default): Linear(in_features=8, out_features=4096, bias=False)\r\n )\r\n (lora_embedding_A): ParameterDict()\r\n (lora_embedding_B): ParameterDict()\r\n )\r\n (o_proj): Linear(in_features=4096, out_features=4096, bias=False)\r\n (rotary_emb): LlamaRotaryEmbedding()\r\n )\r\n (mlp): LlamaMLP(\r\n (gate_proj): Linear(in_features=4096, out_features=11008, bias=False)\r\n (up_proj): Linear(in_features=4096, out_features=11008, bias=False)\r\n (down_proj): Linear(in_features=11008, out_features=4096, bias=False)\r\n (act_fn): SiLUActivation()\r\n )\r\n (input_layernorm): LlamaRMSNorm()\r\n (post_attention_layernorm): LlamaRMSNorm()\r\n )\r\n )\r\n (norm): LlamaRMSNorm()\r\n )\r\n (lm_head): Linear(in_features=4096, out_features=32000, bias=False)\r\n )\r\n )\r\n)\r\nStarting main loop\r\nUsing the `WANDB_DISABLED` environment variable is deprecated and will be removed in v5. Use the --report_to flag to control the integrations used for logging result (for instance --report_to none).\r\n/home/sourab/DHS-LLM-Workshop/personal_copilot/training/delete-me-llama7b-personal-copilot-A100-80GB is already a clone of https://huggingface.co/smangrul/delete-me-llama7b-personal-copilot-A100-80GB. Make sure you pull the latest changes with `repo.git_pull()`.\r\nTraining...\r\n 0%| | 0/2000 [00:00<?, ?it/s]/home/sourab/miniconda3/envs/hf/lib/python3.10/site-packages/torch/utils/checkpoint.py:426: UserWarning: torch.utils.checkpoint: please pass in use_reentrant=True or use_reentrant=False explicitly. The default value of use_reentrant will be updated to be False in the future. To maintain current behavior, pass use_reentrant=True. It is recommended that you use use_reentrant=False. 
Refer to docs for more details on the differences between the two variants.\r\n warnings.warn(\r\n{'loss': 1.5655, 'learning_rate': 4.166666666666667e-05, 'epoch': 0.01} \r\n{'loss': 1.4344, 'learning_rate': 4.998728546500082e-05, 'epoch': 0.03} \r\n 3%|██▏ | 51/2000 [02:14<1:25:06, 2.62s/it]\r\n```\r\n\r\n\r\nFor further decrease in VRAM and increasing speed, use Flash Attention V2. FOr further decrease in VRAM of model, use half-precision/8-bit quantization/4-bit quantization. \r\n\r\nHope this helps.\r\n",
"Hi @pacman100 , thanks a lot for taking the time to reproduce and provide such detailed explaination on memory consumption. While I fully agree on your examples and analysis, I am afraid there are some subtle differences on our setup that might make us miss the real bug here:\r\nMy footprint of 51GB of memory usage is not caused by a full precision loading of a 7B model, it is the result of switching load_in_8bits in the line https://github.com/tloen/alpaca-lora/blob/main/finetune.py#L114. This makes the transformers load in FP16 precision and takes around 14~15 GB as expected. However, when the training starts, the memory consumption jumps to 51 GB. All I have done is to switch the flag load_in_8bits and the memory usage changes so dramatically that can not be explained by the formula that works in normal cases.\r\n\r\nI think the memory leak did happen and can be partially supported by the gc and clear cache operations @younesbelkada and @zhuole1025 suggested above. Since you already have the llama model downloaded, if I am not asking too much from you, can you please checkout the lora-alpaca repo and run the scripst with load_in_8bit on and off ? I am worried that we missed a really important bug here. \r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"I have been having similar issues.",
"Please refer to https://github.com/dvlab-research/LongLoRA/blob/be3bb473b4ca2c291c1d26419f6e409d774dd422/supervised-fine-tune.py#L314-L316.\r\n\r\nI experienced exactly the same issue, and resolve it by just adding one line `model.gradient_checkpointing_enable()` on 8bit turned off.\r\n"
] | 1,692 | 1,701 | 1,695 |
NONE
| null |
### System Info
The issue persisted across several peft versions: 0.3, 0.4, 0.5 (dev).
The accelerate version I used is 0.21.0.
The PyTorch versions I tried are 1.13.0 and 2.0, and both experience the same memory explosion.
transformers == 4.31.0
### Who can help?
@ArthurZucker @younesbelkada @pacman100
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I am using the Llama-2 model with the alpaca-lora project, and I found that by simply switching the load_in_8bit flag on and off (I promise the other variables are kept untouched from the original script), the CUDA memory usage jumps from 13GB to 51GB. https://github.com/tloen/alpaca-lora/blob/main/finetune.py#L114
I think even loading the full model without any half precision should only take around 30GB, and the tunable params are ~20 million, so the increase in memory usage does not make sense to me.
To reproduce, please check out the repo, pip install the requirements file as I did today, and run the finetune script with the default dataset and Llama-2 model.
The memory usage between the 8bits flag on and off looks like below:


### Expected behavior
I think the expected behavior should use much less memory. Any insight will be appreciated!
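For reference, a stripped-down sketch of the toggle being compared (model id and LoRA settings are placeholders in the spirit of the alpaca-lora script, not an exact copy):

```python
import torch
from peft import LoraConfig, get_peft_model, prepare_model_for_int8_training
from transformers import AutoModelForCausalLM

load_in_8bit = True  # flipping this to False is what produces the ~51GB usage reported above

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",
    load_in_8bit=load_in_8bit,
    torch_dtype=torch.float16,
    device_map="auto",
)
if load_in_8bit:
    # Note: this helper also enables gradient checkpointing, which by itself changes activation memory.
    model = prepare_model_for_int8_training(model)

lora_config = LoraConfig(
    r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"], lora_dropout=0.05, task_type="CAUSAL_LM"
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```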
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25572/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25572/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25571
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25571/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25571/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25571/events
|
https://github.com/huggingface/transformers/pull/25571
| 1,855,190,288 |
PR_kwDOCUB6oc5YKwpi
| 25,571 |
Replaces calls to `.cuda` with `.to(torch_device)` in tests
|
{
"login": "vvvm23",
"id": 44398246,
"node_id": "MDQ6VXNlcjQ0Mzk4MjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/44398246?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vvvm23",
"html_url": "https://github.com/vvvm23",
"followers_url": "https://api.github.com/users/vvvm23/followers",
"following_url": "https://api.github.com/users/vvvm23/following{/other_user}",
"gists_url": "https://api.github.com/users/vvvm23/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vvvm23/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vvvm23/subscriptions",
"organizations_url": "https://api.github.com/users/vvvm23/orgs",
"repos_url": "https://api.github.com/users/vvvm23/repos",
"events_url": "https://api.github.com/users/vvvm23/events{/privacy}",
"received_events_url": "https://api.github.com/users/vvvm23/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Isn't this the case for a lot of other tests? They use the decorator `@require_torch_gpu` to skip the test if `torch_device != cuda`, so the tests will still only be run on GPU as they are meant to be.",
"I added `torch_device` into jukebox as I didn't see why it shouldn't be there if it is in basically every other test. If there is some extra behaviour or meaning I am missing, it would be helpful to know. In any case, the modifications to Jukebox are limited to two tests: one of which is skipped entirely anyway and the other is `test_fp16_slow_sampling` which I noticed does not have the `require_torch_gpu` decorator – only the `slow` decorator. I'll add one if you think it is worth it here 🙂 ",
"Nice suggestions, I misunderstood what you meant initially by splitting into two lines. Hope it is all good now~",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25571). All of your documentation changes will be reflected on that endpoint.",
"This should be good now 👍 "
] | 1,692 | 1,692 | 1,692 |
CONTRIBUTOR
| null |
`torch.Tensor.cuda()` is a pre-0.4 solution to changing a tensor's device. It is recommended to prefer `.to(...)` for greater flexibility and error handling. Furthermore, this makes it more consistent with other tests (that tend to use `.to(torch_device)`) and ensures the correct device backend is used (if `torch_device` is neither `cpu` or `cuda`).
This could be the case if `TRANSFORMERS_TEST_DEVICE` is not `cpu` or `cuda`. See #25506.
By default, I don't think this PR should change any test behaviour, but let me know if this is misguided.
# What does this PR do?
Replaces calls to `torch.Tensor.cuda()` with `.to(torch_device)` equivalents. This not only ensures consistency between different tests and their management of device, but also makes tests more flexible with regard to custom or less common PyTorch backends.
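The change is mechanical; roughly (a toy tensor stands in for the test inputs):

```python
import torch

torch_device = "cuda" if torch.cuda.is_available() else "cpu"

x = torch.ones(2, 3)
# Before: hard-codes the CUDA backend.
# x = x.cuda()
# After: follows whatever device the test suite selects (cpu, cuda, or a custom backend).
x = x.to(torch_device)
```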
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
This affects multiple tests and doesn't target any specific modality. However, they are all PyTorch models. @sgugger, hope you don't mind me tagging you again 🙂
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25571/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25571/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25571",
"html_url": "https://github.com/huggingface/transformers/pull/25571",
"diff_url": "https://github.com/huggingface/transformers/pull/25571.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25571.patch",
"merged_at": 1692355240000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25570
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25570/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25570/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25570/events
|
https://github.com/huggingface/transformers/pull/25570
| 1,855,125,268 |
PR_kwDOCUB6oc5YKia6
| 25,570 |
Suggestions on Pipeline_webserver
|
{
"login": "kihoon71",
"id": 75935546,
"node_id": "MDQ6VXNlcjc1OTM1NTQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/75935546?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kihoon71",
"html_url": "https://github.com/kihoon71",
"followers_url": "https://api.github.com/users/kihoon71/followers",
"following_url": "https://api.github.com/users/kihoon71/following{/other_user}",
"gists_url": "https://api.github.com/users/kihoon71/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kihoon71/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kihoon71/subscriptions",
"organizations_url": "https://api.github.com/users/kihoon71/orgs",
"repos_url": "https://api.github.com/users/kihoon71/repos",
"events_url": "https://api.github.com/users/kihoon71/events{/privacy}",
"received_events_url": "https://api.github.com/users/kihoon71/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25570). All of your documentation changes will be reflected on that endpoint."
] | 1,692 | 1,692 | 1,692 |
CONTRIBUTOR
| null |
docs: reorder the warning tip for pseudo-code
# What does this PR do?
Fixes #25569: we modified the docs a little to clarify that the suggested code is intentionally written as pseudo-code.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@Narsil
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25570/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25570/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25570",
"html_url": "https://github.com/huggingface/transformers/pull/25570",
"diff_url": "https://github.com/huggingface/transformers/pull/25570.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25570.patch",
"merged_at": 1692346664000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25569
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25569/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25569/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25569/events
|
https://github.com/huggingface/transformers/issues/25569
| 1,855,051,932 |
I_kwDOCUB6oc5ukdic
| 25,569 |
[docs] Improve dynamic batching example in pipeline_webserver.md
|
{
"login": "kihoon71",
"id": 75935546,
"node_id": "MDQ6VXNlcjc1OTM1NTQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/75935546?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kihoon71",
"html_url": "https://github.com/kihoon71",
"followers_url": "https://api.github.com/users/kihoon71/followers",
"following_url": "https://api.github.com/users/kihoon71/following{/other_user}",
"gists_url": "https://api.github.com/users/kihoon71/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kihoon71/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kihoon71/subscriptions",
"organizations_url": "https://api.github.com/users/kihoon71/orgs",
"repos_url": "https://api.github.com/users/kihoon71/repos",
"events_url": "https://api.github.com/users/kihoon71/events{/privacy}",
"received_events_url": "https://api.github.com/users/kihoon71/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @Narsil ",
"> Although it's made clear in the documentation that the code is optimized for readability and not efficiency, users might inadvertently use it as-is without considering the potential problems.\r\n\r\nThis is pretty much much why it is not functional code. It is intentional. You CANNOT copy mindlessly.\r\nFixing it will require a little bit of thinking, most likely to keep on reading and seeing the warnings.\r\n\r\nI'm ok with moving around the warning/making it more prominent. As for making the code copy/pastable I'm not too sure we should."
] | 1,692 | 1,692 | 1,692 |
CONTRIBUTOR
| null |
Hello @stevhliu,
I recently went through the dynamic batching section of [Using pipelines for a webserver](https://huggingface.co/docs/transformers/pipeline_webserver), and while I appreciate the clarity and details provided, I believe there are some potential pitfalls with the given code that might mislead less experienced users.
### Issue:
The example provided seems to be pseudo-code, so it does not work when tried as-is.
The example code is below:
```python
(string, rq) = await q.get()
strings = []
queues = []
while True:
try:
(string, rq) = await asyncio.wait_for(q.get(), timeout=0.001) # 1ms
except asyncio.exceptions.TimeoutError:
break
strings.append(string)
queues.append(rq)
strings
outs = pipe(strings, batch_size=len(strings))
for rq, out in zip(queues, outs):
await rq.put(out)
```
### Suggestion:
Although it's made clear in the documentation that the code is optimized for readability and not efficiency, users might inadvertently use it as-is without considering the potential problems.
I would recommend updating the documentation with two suggestions.
First, I propose placing the warning block above the code block and adding an explanation that the code is pseudo-code.
Second, make the code at least runnable; my suggestion for that is below:
Proposed Working Code:
```python
(string, rq) = await q.get()
strings = []
queues = []
while True:
try:
(string, rq) = await asyncio.wait_for(q.get(), timeout=0.001) # 1ms
strings.append(string)
queues.append(rq)
except asyncio.exceptions.TimeoutError:
strings.append(string)
queues.append(rq)
break
outs = pipe(strings, batch_size=len(strings))
if len(queues) == 1:
await queues[0].put(outs)
else:
for rq, out in zip(queues, outs):
await rq.put(out)
```
This way, we can at least see a result on a trial run. If you have a better suggestion, please let me know!
Thank you for considering this feedback.
Best regards,
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25569/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25569/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25568
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25568/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25568/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25568/events
|
https://github.com/huggingface/transformers/pull/25568
| 1,855,005,649 |
PR_kwDOCUB6oc5YKIJ1
| 25,568 |
fix Vivit for video classification example
|
{
"login": "Geometrein",
"id": 65066173,
"node_id": "MDQ6VXNlcjY1MDY2MTcz",
"avatar_url": "https://avatars.githubusercontent.com/u/65066173?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Geometrein",
"html_url": "https://github.com/Geometrein",
"followers_url": "https://api.github.com/users/Geometrein/followers",
"following_url": "https://api.github.com/users/Geometrein/following{/other_user}",
"gists_url": "https://api.github.com/users/Geometrein/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Geometrein/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Geometrein/subscriptions",
"organizations_url": "https://api.github.com/users/Geometrein/orgs",
"repos_url": "https://api.github.com/users/Geometrein/repos",
"events_url": "https://api.github.com/users/Geometrein/events{/privacy}",
"received_events_url": "https://api.github.com/users/Geometrein/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @amyeroberts ",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25568). All of your documentation changes will be reflected on that endpoint."
] | 1,692 | 1,692 | 1,692 |
CONTRIBUTOR
| null |
# What does this PR do?
Both example implementations of Vivit for video classification were not working.
Some of the issues included:
- Missing or unnecessary library imports.
- Multiple variables referenced before assignment.
- Missing docstring for the `sample_frame_indices()` method.
- Logical error in the `sample_frame_indices()` method that would cause a NumPy ValueError (see the sketch after this list).
- Missing or unnecessary lines of code.
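For context, here is a hedged sketch of what a guarded `sample_frame_indices()` helper could look like (the names mirror the docs example; the exact code in this PR may differ):
```python
import numpy as np


def sample_frame_indices(clip_len, frame_sample_rate, seg_len):
    """Sample `clip_len` frame indices from a video containing `seg_len` frames."""
    converted_len = int(clip_len * frame_sample_rate)
    # Guard: without it, videos shorter than the sampled window make
    # np.random.randint raise a ValueError because low >= high.
    converted_len = min(converted_len, seg_len - 1)
    end_idx = np.random.randint(converted_len, seg_len)
    start_idx = end_idx - converted_len
    indices = np.linspace(start_idx, end_idx, num=clip_len)
    return np.clip(indices, start_idx, end_idx - 1).astype(np.int64)
```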
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25568/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25568/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25568",
"html_url": "https://github.com/huggingface/transformers/pull/25568",
"diff_url": "https://github.com/huggingface/transformers/pull/25568.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25568.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/25566
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25566/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25566/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25566/events
|
https://github.com/huggingface/transformers/pull/25566
| 1,854,952,359 |
PR_kwDOCUB6oc5YJ8a5
| 25,566 |
Skip `test_beam_search_xla_generate_simple` for `T5`
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25566). All of your documentation changes will be reflected on that endpoint."
] | 1,692 | 1,692 | 1,692 |
COLLABORATOR
| null |
# What does this PR do?
TF 2.13 breaks this test on GPU 😢 😭
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25566/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25566/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25566",
"html_url": "https://github.com/huggingface/transformers/pull/25566",
"diff_url": "https://github.com/huggingface/transformers/pull/25566.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25566.patch",
"merged_at": 1692279046000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25565
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25565/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25565/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25565/events
|
https://github.com/huggingface/transformers/pull/25565
| 1,854,950,374 |
PR_kwDOCUB6oc5YJ7-S
| 25,565 |
[`Llama`] remove prompt and fix prefix finetuning
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"cc @sourabh112 as I am not an expert in prefix tuning, there was something reported in the llama repo: https://github.com/facebookresearch/llama-recipes/issues/47#issuecomment-1674492457"
] | 1,692 | 1,692 | 1,692 |
COLLABORATOR
| null |
# What does this PR do?
We are waiting for #25323 to be merged, but following the llama update we are going to remove the default system prompt.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25565/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25565/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25565",
"html_url": "https://github.com/huggingface/transformers/pull/25565",
"diff_url": "https://github.com/huggingface/transformers/pull/25565.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25565.patch",
"merged_at": 1692358763000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25564
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25564/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25564/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25564/events
|
https://github.com/huggingface/transformers/pull/25564
| 1,854,787,277 |
PR_kwDOCUB6oc5YJX_a
| 25,564 |
[`Tests`] Fix failing 8bit test
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Hi @younesbelkada ! Could you tell a bit more why MPT needs `einops`?",
"Also you probably need refresh your CircleCI token?",
"for MPT as you can see from the remote model, einops is used in files that are imported in `modeling_mpt.py` such as `attention.py` here: https://huggingface.co/mosaicml/mpt-7b/blob/main/attention.py#L7 it seems to be a strong dependency (hence the error on the slow test attached)\r\nIt is also listed as a dependency in the requirements file together with their custom fork of triton: https://huggingface.co/mosaicml/mpt-7b/blob/main/requirements.txt#L2 but the triton dependency is optional",
"OK, I think I got confused by the fact we have a modeling file and we also use the same checkpoint name in our model tests. Now I get it, but a question: why we want to use the remote code rather than the code in `transformers` for this quantization test ..? Is it really necessary to use the remote code the quantization?",
"Thanks! \r\nRegarding your question, yes I think so, #25105 added the support for correct quantization for most remote code models, therefore it is crucial to test the remote code model + non remote model in the test "
] | 1,692 | 1,692 | 1,692 |
CONTRIBUTOR
| null |
# What does this PR do?
Fixes two failing tests in https://github.com/huggingface/transformers/actions/runs/5873964880/job/15928072870
- `tests/quantization/bnb/test_mixed_int8.py::MixedInt8Test::test_get_keys_to_not_convert` & `tests/quantization/bnb/test_mixed_int8.py::MixedInt8GPT2Test::test_get_keys_to_not_convert`
Context: https://github.com/huggingface/transformers/pull/25105 added stronger checks to enable the correct quantization of models on the Hub. [Therefore it added a test that checks if `mpt-7b` is correctly quantized](https://github.com/huggingface/transformers/blob/main/tests/quantization/bnb/test_mixed_int8.py#L141). Since that model requires `einops` as a dependency, I propose to simply add `einops` to the docker image.
cc @ydshieh
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25564/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25564/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25564",
"html_url": "https://github.com/huggingface/transformers/pull/25564",
"diff_url": "https://github.com/huggingface/transformers/pull/25564.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25564.patch",
"merged_at": 1692286465000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25563
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25563/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25563/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25563/events
|
https://github.com/huggingface/transformers/pull/25563
| 1,854,728,037 |
PR_kwDOCUB6oc5YJK3T
| 25,563 |
[`TokenizerFast`] Fix setting prefix space in __init__
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,692 | 1,692 | 1,692 |
COLLABORATOR
| null |
# What does this PR do?
Fixes #24846. For the `Sequence` tokenizer case, the previous solution never did anything.
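As a hedged illustration (a generic example, not taken from the linked issue) of the kind of init-time flag this touches:
```python
from transformers import AutoTokenizer

# GPT-2's fast tokenizer accepts `add_prefix_space` at init time; the point of the
# fix is that such init-time settings should actually reach the underlying
# pre-tokenizer (per this PR, the old code was a no-op for Sequence pre-tokenizers).
tokenizer = AutoTokenizer.from_pretrained("gpt2", add_prefix_space=True)
print(tokenizer("hello world")["input_ids"])
```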
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25563/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25563/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25563",
"html_url": "https://github.com/huggingface/transformers/pull/25563",
"diff_url": "https://github.com/huggingface/transformers/pull/25563.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25563.patch",
"merged_at": 1692374991000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25562
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25562/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25562/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25562/events
|
https://github.com/huggingface/transformers/pull/25562
| 1,854,678,301 |
PR_kwDOCUB6oc5YI_5D
| 25,562 |
Update `test_beam_search_xla_generate_simple` for T5
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,692 | 1,692 | 1,692 |
COLLABORATOR
| null |
# What does this PR do?
The output of this test changed after switching to TF 2.13 for the TF-T5 model.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25562/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25562/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25562",
"html_url": "https://github.com/huggingface/transformers/pull/25562",
"diff_url": "https://github.com/huggingface/transformers/pull/25562.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25562.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/25561
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25561/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25561/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25561/events
|
https://github.com/huggingface/transformers/pull/25561
| 1,854,653,722 |
PR_kwDOCUB6oc5YI6bi
| 25,561 |
[`Docs`] Fix un-rendered images
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,692 | 1,692 | 1,692 |
CONTRIBUTOR
| null |
# What does this PR do?
Fixes https://github.com/huggingface/transformers/issues/25428 and other issues with images in the documentation and in this repo that are not rendered properly.
The fix is to simply replace `https://s3.amazonaws.com/moonup` with `https://cdn-uploads.huggingface.co`.
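A minimal sketch of that substitution (illustrative only; the file glob and paths are assumptions, not the actual script used):
```python
from pathlib import Path

OLD = "https://s3.amazonaws.com/moonup"
NEW = "https://cdn-uploads.huggingface.co"

# Walk the repo's markdown files and rewrite the stale image host in place.
for path in Path(".").rglob("*.md"):
    text = path.read_text(encoding="utf-8")
    if OLD in text:
        path.write_text(text.replace(OLD, NEW), encoding="utf-8")
```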
cc @MKhalusova @stevhliu
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25561/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25561/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25561",
"html_url": "https://github.com/huggingface/transformers/pull/25561",
"diff_url": "https://github.com/huggingface/transformers/pull/25561.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25561.patch",
"merged_at": 1692266892000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25560
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25560/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25560/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25560/events
|
https://github.com/huggingface/transformers/pull/25560
| 1,854,540,536 |
PR_kwDOCUB6oc5YIhga
| 25,560 |
Skip `test_onnx_runtime_optimize` for now
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,692 | 1,692 | 1,692 |
COLLABORATOR
| null |
# What does this PR do?
`tf2onnx` has issues with TF 2.13, so let's skip these tests for now.
Note we have delegated the (PyTorch) ONNX tests to Optimum, see [here](https://github.com/huggingface/transformers/pull/24800#issuecomment-1634822781), and it's likely we will do this for the TF ONNX tests too (once @michaelbenayoun confirms this).
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25560/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25560/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25560",
"html_url": "https://github.com/huggingface/transformers/pull/25560",
"diff_url": "https://github.com/huggingface/transformers/pull/25560.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25560.patch",
"merged_at": 1692264196000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25559
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25559/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25559/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25559/events
|
https://github.com/huggingface/transformers/pull/25559
| 1,854,519,784 |
PR_kwDOCUB6oc5YIdEO
| 25,559 |
YOLOS - reset default return_pixel_mask value
|
{
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@ydshieh Because #25464 was actually an older branch, and it's such large PR it's hard to spot the change. I broke up some of the work in #25464 to make it less unwieldy e.g. adding copied froms etc, and the changes these introduced where resolved later in separate. I think I then must have messed up a rebase along the line 😅. "
] | 1,692 | 1,692 | 1,692 |
COLLABORATOR
| null |
# What does this PR do?
Removes the copied-from statement and the change to the default value of `return_pixel_mask` that got reintroduced when #25464 was merged.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25559/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25559/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25559",
"html_url": "https://github.com/huggingface/transformers/pull/25559",
"diff_url": "https://github.com/huggingface/transformers/pull/25559.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25559.patch",
"merged_at": 1692262119000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25558
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25558/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25558/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25558/events
|
https://github.com/huggingface/transformers/pull/25558
| 1,854,508,177 |
PR_kwDOCUB6oc5YIakN
| 25,558 |
Add TensorFlow implementation of ConvNeXTv2
|
{
"login": "neggles",
"id": 4232981,
"node_id": "MDQ6VXNlcjQyMzI5ODE=",
"avatar_url": "https://avatars.githubusercontent.com/u/4232981?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/neggles",
"html_url": "https://github.com/neggles",
"followers_url": "https://api.github.com/users/neggles/followers",
"following_url": "https://api.github.com/users/neggles/following{/other_user}",
"gists_url": "https://api.github.com/users/neggles/gists{/gist_id}",
"starred_url": "https://api.github.com/users/neggles/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/neggles/subscriptions",
"organizations_url": "https://api.github.com/users/neggles/orgs",
"repos_url": "https://api.github.com/users/neggles/repos",
"events_url": "https://api.github.com/users/neggles/events{/privacy}",
"received_events_url": "https://api.github.com/users/neggles/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @amyeroberts ",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25558). All of your documentation changes will be reflected on that endpoint.",
"OK, pushed some updates! I *think* I've addressed everything.\r\n\r\nTo get the nice docstring fixes and type annotation changes I had to modify the ConvNeXT v1 model files (since `make fix-copies` overwrites the comment-annotated classes otherwise), so I broke those changes out into a separate commit just for clarity (happy to squash them back together if you prefer).\r\n\r\nOf note, I'm getting some _weird_ results when I run the `transformers-cli pt-to-tf` command. initially it was just caused by the PyTorch model running on CPU and TensorFlow \"helpfully\" automatically running on GPU, but even forcibly hiding my CUDA devices the outputs seem to be a little out of wack:\r\n```\r\n$ CUDA_VISIBLE_DEVICES=\"\" NVIDIA_TF32_OVERRIDE=0 transformers-cli pt-to-tf --no-pr \\\r\n --model-name facebook/convnextv2-tiny-1k-224 --local-dir temp/convnextv2-tiny-1k-224\r\n# [...]\r\nValueError: The cross-loaded TensorFlow model has different outputs, something went wrong!\r\n\r\nList of maximum output differences above the threshold (5e-05):\r\n\r\n\r\nList of maximum hidden layer differences above the threshold (5e-05):\r\nhidden_states[2]: 6.256e-04\r\nhidden_states[3]: 2.838e-03\r\nhidden_states[4]: 1.793e-04\r\n```\r\n\r\nSo only hidden states are out of spec, not actual logits.\r\n\r\nThe strange thing is, if I run it against ConvNeXT v1, I get a similar result:\r\n\r\n```sh\r\n$ CUDA_VISIBLE_DEVICES=\"\" NVIDIA_TF32_OVERRIDE=0 transformers-cli pt-to-tf --no-pr \\\r\n --model-name facebook/convnext-tiny-224 --local-dir temp/convnext-tiny-224\r\n# [...]\r\nValueError: The cross-loaded TensorFlow model has different outputs, something went wrong!\r\n\r\nList of maximum output differences above the threshold (5e-05):\r\n\r\n\r\nList of maximum hidden layer differences above the threshold (5e-05):\r\nhidden_states[1]: 1.583e-04\r\nhidden_states[2]: 1.480e-03\r\nhidden_states[3]: 2.380e-03\r\nhidden_states[4]: 1.595e-04\r\n```\r\n\r\nThe hidden states are actually *more* out of range for the v1 model. Not sure what's going on there; I tried loading the model with fewer stages (as suggested in the earlier PR) but it runs into layer input/output shape issues and fails to load, and attempting to inspect layers via `breakpoint()` makes tensorflow throw a hissyfit and crash the entire debugger server 😅 pain.\r\n\r\nThe `atol` for logits in the PyTorch test script is only 1e-4, so maybe this is just Expected:tm: for these models?",
"@neggles For uploading the checkpoints, as long as the output logits have a small difference, you can override the checks. Let us know when you've pushed the TF checkpoints to the hub, or if you need help/permission to do so and then we can merge. ",
"@amyeroberts thanks for the help/feedback! TensorFlow may be a little old hat these days but, well, Cloud TPUs 🤷\r\n\r\nOK, cool, I figured that was probably the case. I'm seeing about 1.75e-5 difference in output logits which seems reasonable enough to me; ~~I suspect most of the difference comes down to LayerNorm epsilon settings, there's a bit of variation there depending on who created the model (e.g. [the WD Tagger ConvNeXTv2 model](https://huggingface.co/SmilingWolf/wd-v1-4-convnextv2-tagger-v2) uses the default TF layernorm eps of 1e-3 for everything), but Transformers sets all of them to 1e-6 except for the final one (set by config). That's a *tiny* bit out of scope for this PR, though 😆~~ [edit: see below 😅 ]\r\n\r\nAnyway! Have opened PRs for conversion on the smaller models:\r\n\r\n[facebook/convnextv2-atto-1k-224](https://huggingface.co/facebook/convnextv2-atto-1k-224/discussions/3)\r\n[facebook/convnextv2-femto-1k-224](https://huggingface.co/facebook/convnextv2-femto-1k-224/discussions/1)\r\n[facebook/convnextv2-pico-1k-224](https://huggingface.co/facebook/convnextv2-pico-1k-224/discussions/2)\r\n[facebook/convnextv2-nano-1k-224](https://huggingface.co/facebook/convnextv2-nano-1k-224/discussions/3)\r\n[facebook/convnextv2-nano-22k-224](https://huggingface.co/facebook/convnextv2-nano-22k-224/discussions/2)\r\n[facebook/convnextv2-nano-22k-384](https://huggingface.co/facebook/convnextv2-nano-22k-384/discussions/4)\r\n[facebook/convnextv2-tiny-1k-224](https://huggingface.co/facebook/convnextv2-tiny-1k-224/discussions/2)\r\n[facebook/convnextv2-tiny-22k-224](https://huggingface.co/facebook/convnextv2-tiny-22k-224/discussions/2)\r\n[facebook/convnextv2-tiny-22k-384](https://huggingface.co/facebook/convnextv2-tiny-22k-384/discussions/3)\r\n\r\n~~Something looks a bit screwy with the smaller models? The output differences are pretty big, e.g. 3.719e-05 for atto-1k-224, and nano-1k-224 is at 2.9e-5; on that one we have a comparison point from [the last conversion attempt](https://huggingface.co/facebook/convnextv2-nano-1k-224/discussions/1), where output difference is 1.371e-06 which is pretty major. Hmm.~~ [edit: Fixed, everything's within ~1.5e5 (usually less) now]",
"Found it 🤦 missed a pair of `()` in the GRN calc. With that fixed, atto is down to `1.001e-05` 😅 will go add PR comments with corrected values.",
"I see most of the weight PRs have been merged, yay! I'm not sure what happened with the CI pipelines, though - looks like an unrelated error, but I don't have permissions to hit the rerun button 😢\r\n\r\n@amyeroberts should I rebase this and push to make CI re-run?",
"Sure rebasing is always a good thing’ Amy is out for a while, ping me whenever ",
"Ok, the tf weights are missing on the HUB I asked for a merge of your PRs! 😉 ",
"TF weights are in!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Oops, forgot about this! Yes, TF weights are all in now - thanks much! - lemme resolve that conflict...",
"OK! Rebased one more time and did one other thing; I've dropped the `add_pooling_layer` argument from `TFConvNeXTV2Model` and `TFConvNeXTV2MainStage` since `utils/check_docstrings.py` didn't like that it's present-but-undocumented, and it's not present in the PyTorch version of the code either.\r\n\r\nShould be good to merge now, I think? Sorry for the delay this end! Been a busy few weeks.\r\n\r\n(ping @ArthurZucker 😄)",
"Sure! I'll let @Rocketknight1 handle this! ",
"Hey @neggles, this looks good, but needs `make fix-copies` to get the tests passing. Try running that in the root of the repo and then committing/pushing!",
"@Rocketknight1 So this is... weird. When i run `make repo-consistency` on this end, I get it complaining about `TFRegNetModel` / `TFRegNetForImageClassification`:\r\n```\r\nTraceback (most recent call last):\r\n File \"/tank/ml/huggingface/transformers/utils/check_docstrings.py\", line 1232, in <module>\r\n check_docstrings(overwrite=args.fix_and_overwrite)\r\n File \"/tank/ml/huggingface/transformers/utils/check_docstrings.py\", line 1224, in check_docstrings\r\n raise ValueError(error_message)\r\nValueError: There was at least one problem when checking docstrings of public objects.\r\nThe following objects docstrings do not match their signature. Run `make fix-copies` to fix this.\r\n- TFRegNetForImageClassification\r\n- TFRegNetModel\r\nmake: *** [Makefile:46: repo-consistency] Error 1\r\n```\r\n\r\nWhile CircleCI shows it [complaining about these new models](https://app.circleci.com/pipelines/github/huggingface/transformers/76000/workflows/353067b2-e8ae-42ff-bb0b-718ec453aff2/jobs/964741) despite the fact that - as far as I can tell - the docstrings *do* match for both `TFConvNextV2Model` and `TFRegNetModel` 🤔. Running `make fix-copies` results in no action:\r\n\r\n```\r\naholmes@hyperion:/tank/ml/huggingface/transformers ❯ make fix-copies\r\npython utils/check_copies.py --fix_and_overwrite\r\npython utils/check_table.py --fix_and_overwrite\r\npython utils/check_dummies.py --fix_and_overwrite\r\npython utils/check_doctest_list.py --fix_and_overwrite\r\npython utils/check_task_guides.py --fix_and_overwrite\r\npython utils/check_docstrings.py --fix_and_overwrite\r\nUsing /home/aholmes/.cache/torch_extensions/py310_cu118 as PyTorch extensions root...\r\nDetected CUDA files, patching ldflags\r\nEmitting ninja build file /home/aholmes/.cache/torch_extensions/py310_cu118/cuda_kernel/build.ninja...\r\nBuilding extension module cuda_kernel...\r\nAllowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)\r\nninja: no work to do.\r\nLoading extension module cuda_kernel...\r\naholmes@hyperion:/tank/ml/huggingface/transformers ❯ \r\n```\r\n\r\nMaybe something screwy with TensorFlow model init signature parsing? If i edit `utils/check_docstrings.py` like so, to make it print what it thinks the docstrings should be:\r\n\r\n```diff\r\n--- a/utils/check_docstrings.py\r\n+++ b/utils/check_docstrings.py\r\n@@ -1195,7 +1195,7 @@ def check_docstrings(overwrite: bool = False):\r\n if overwrite:\r\n fix_docstring(obj, old_doc, new_doc)\r\n else:\r\n- failures.append(name)\r\n+ failures.append(name + \"\\n Corrected docstring:\\n \" + new_doc)\r\n elif not overwrite and new_doc is not None and (\"<fill_type>\" in new_doc or \"<fill_docstring>\" in new_doc):\r\n to_clean.append(name)\r\n```\r\n\r\nI get this output:\r\n```\r\nValueError: There was at least one problem when checking docstrings of public objects.\r\nThe following objects docstrings do not match their signature. Run `make fix-copies` to fix this.\r\n- TFRegNetForImageClassification\r\n Corrected docstring:\r\n config (`RegNetConfig`): <fill_docstring>\r\n- TFRegNetModel\r\n Corrected docstring:\r\n config (`RegNetConfig`): <fill_docstring>\r\n```\r\n\r\nWhich is the same as what's currently in there, but without the `[]` to turn it into a link. Is that not necessary anymore? 
From recent commit history I suspect not, but I'm not sure.\r\n\r\nI also note that both `TFConvNextModel` and `TFConvNextForImageClassification` are listed as exceptions in `check_docstrings.py` (as is ConvNextV2Model)... Not entirely sure what to do here. Nevertheless, I've rebased and pushed again so who knows, maybe it'll pass this time!",
"Ah, ugh, this might indeed indicate an issue with the docstring parsing for that file, I didn't realize `ConvNext` was one of the exceptions! If this current run fails a repo-consistency check, then I would just add the `TFConvNextV2` model classes to the exceptions list in `check_docstrings.py`, and maybe leave a comment that they're equivalent to the PT docstrings, so whenever we get around to properly fixing those we should be able to fix the TF ones too.",
"@Rocketknight1 OK, no problem - have added exclusions and comment in `check_docstrings.py` (let me know if the comment should be in the modeling_tf_convnextv2.py file instead), rebased again, etc.\r\n\r\nC'mon, tests! please? 🤞",
"Yay, tests passed!",
"Done and done! I've elected *not* to rebase again just to reduce the likelihood of tests getting angry 😅",
"This looks good now and I'm happy to merge! cc @amyeroberts - you don't need to do another review, but let me know if there's anything you think is unresolved before we merge this.",
"Did a quick scan over that changes. All looks good to me. Thanks again @neggles for adding this model and for such a clean PR! "
] | 1,692 | 1,698 | 1,698 |
CONTRIBUTOR
| null |
# What does this PR do?
This adds TensorFlow support for ConvNeXTV2, following the pattern of the existing PyTorch ConvNeXTV2 implementation and the existing ConvNeXT(V1) TensorFlow implementation.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
Tests are in, `make fixup` and `make quality` are happy, and `NVIDIA_TF32_OVERRIDE=0 RUN_SLOW=1 RUN_PT_TF_CROSS_TESTS=1 py.test -vv tests/models/convnextv2/test_modeling_tf_convnextv2.py` passes everything except for `from_pretrained` (unsurprisingly, `"facebook/convnextv2-tiny-1k-224"` lacks TensorFlow weights, but `from_pt=True` works swimmingly)
Getting this one to pass tests was a little tricky; the outputs from the model were quite variable run-to-run. Still not entirely sure exactly what I did to fix it, but it *looks* like TensorFlow doesn't like it when you do this:
```py
x = input + self.drop_path(x, training=training)
```
and wants you to do *this* instead:
```py
x = self.drop_path(x, training=training)
x = input + x
```
🤷 who knows what cursed machinations the XLA compiler gets up to while nobody's looking.
There was a prior (seemingly abandoned) port attempt in #23155 which I referenced a little while building this; just to address one of the review comments on that PR, `config.layer_norm_eps` *only* applies to the `TFConvNextV2MainLayer.layernorm` layer, *not* the other norm layers or the GRN layer, which are fixed at `1e-6` epsilon (see the existing PyTorch implementation & original code). Using the (typically `1e-12`) value from `config.layer_norm_eps` in those layers will produce aggressively incorrect outputs 😭
Based on the PR template, it looks like I should tag @amyeroberts and maybe @alaradirik (majority holder of `git blame` for the PyTorch implementation)?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25558/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25558/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25558",
"html_url": "https://github.com/huggingface/transformers/pull/25558",
"diff_url": "https://github.com/huggingface/transformers/pull/25558.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25558.patch",
"merged_at": 1698851395000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25557
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25557/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25557/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25557/events
|
https://github.com/huggingface/transformers/pull/25557
| 1,854,313,967 |
PR_kwDOCUB6oc5YHwTf
| 25,557 |
Add type hints for several pytorch models (batch-2)
|
{
"login": "nablabits",
"id": 33068707,
"node_id": "MDQ6VXNlcjMzMDY4NzA3",
"avatar_url": "https://avatars.githubusercontent.com/u/33068707?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nablabits",
"html_url": "https://github.com/nablabits",
"followers_url": "https://api.github.com/users/nablabits/followers",
"following_url": "https://api.github.com/users/nablabits/following{/other_user}",
"gists_url": "https://api.github.com/users/nablabits/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nablabits/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nablabits/subscriptions",
"organizations_url": "https://api.github.com/users/nablabits/orgs",
"repos_url": "https://api.github.com/users/nablabits/repos",
"events_url": "https://api.github.com/users/nablabits/events{/privacy}",
"received_events_url": "https://api.github.com/users/nablabits/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25557). All of your documentation changes will be reflected on that endpoint.",
"This looks good to me now! Ping me whenever you're finished and you want me to merge it.",
"> This looks good to me now! Ping me whenever you're finished and you want me to merge it.\r\n\r\nMatt, this is fine by me so I'm happy for you to merge it\r\n\r\n@Rocketknight1 "
] | 1,692 | 1,694 | 1,693 |
CONTRIBUTOR
| null |
# What does this PR do?
Addresses some of the models in https://github.com/huggingface/transformers/issues/16059
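For readers who don't know the parent issue, the general pattern is annotating `forward` signatures and return types. A hedged, generic sketch (not taken from this PR's diff):
```python
from typing import Optional, Tuple, Union

import torch
from transformers.modeling_outputs import BaseModelOutput


class MyModel(torch.nn.Module):
    def forward(
        self,
        input_ids: Optional[torch.LongTensor] = None,
        attention_mask: Optional[torch.FloatTensor] = None,
        return_dict: Optional[bool] = None,
    ) -> Union[Tuple, BaseModelOutput]:
        # Model logic omitted; only the annotation pattern matters here.
        ...
```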
## Who can review?
@Rocketknight1, please
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25557/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25557/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25557",
"html_url": "https://github.com/huggingface/transformers/pull/25557",
"diff_url": "https://github.com/huggingface/transformers/pull/25557.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25557.patch",
"merged_at": 1693227503000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25556
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25556/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25556/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25556/events
|
https://github.com/huggingface/transformers/issues/25556
| 1,854,100,984 |
I_kwDOCUB6oc5ug1X4
| 25,556 |
ModuleNotFoundError: No module named 'transformers'
|
{
"login": "JojoTacoTheCat77",
"id": 97198215,
"node_id": "U_kgDOBcsghw",
"avatar_url": "https://avatars.githubusercontent.com/u/97198215?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JojoTacoTheCat77",
"html_url": "https://github.com/JojoTacoTheCat77",
"followers_url": "https://api.github.com/users/JojoTacoTheCat77/followers",
"following_url": "https://api.github.com/users/JojoTacoTheCat77/following{/other_user}",
"gists_url": "https://api.github.com/users/JojoTacoTheCat77/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JojoTacoTheCat77/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JojoTacoTheCat77/subscriptions",
"organizations_url": "https://api.github.com/users/JojoTacoTheCat77/orgs",
"repos_url": "https://api.github.com/users/JojoTacoTheCat77/repos",
"events_url": "https://api.github.com/users/JojoTacoTheCat77/events{/privacy}",
"received_events_url": "https://api.github.com/users/JojoTacoTheCat77/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"It is a GPT-4 answerable question and it is related to the issue of using Jupyter Notebook instead of transformers library. (Please check your Python interpreter)",
"You most probably have a discrepency between the python kernel used by the notebook. Nothing we can do on our end"
] | 1,692 | 1,692 | 1,692 |
NONE
| null |
Hi all,
I have installed the transformers package into my virtual env and launched Jupyter Notebook from that same virtual env.
Also, I can see the package installed when running `pip freeze`. Somehow, I keep getting the ModuleNotFoundError.

Here are all the packages in my virtual env.
```
certifi==2023.7.22
charset-normalizer==3.2.0
filelock==3.12.2
fsspec==2023.6.0
huggingface-hub==0.16.4
idna==3.4
Jinja2==3.1.2
MarkupSafe==2.1.3
mpmath==1.3.0
networkx==3.1
numpy==1.25.2
packaging==23.1
PyYAML==6.0.1
regex==2023.8.8
requests==2.31.0
safetensors==0.3.2
sympy==1.12
tokenizers==0.13.3
torch==2.0.1
tqdm==4.66.1
transformers==4.31.0
typing_extensions==4.7.1
urllib3==2.0.4
```
Help please!
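For reference, a quick way to check whether the notebook kernel is actually using the virtual env where `transformers` was installed (the example path in the comment is just a placeholder):
```python
import sys

print(sys.executable)  # should point inside the virtual env, e.g. .../my-venv/bin/python

import transformers  # raises ModuleNotFoundError if the kernel is not the venv's Python
print(transformers.__version__)
```
If the interpreter shown is the system Python rather than the venv, registering the venv as a kernel (for example with `python -m ipykernel install --user --name my-venv`) usually resolves this kind of error.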
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25556/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25556/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25555
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25555/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25555/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25555/events
|
https://github.com/huggingface/transformers/issues/25555
| 1,854,026,591 |
I_kwDOCUB6oc5ugjNf
| 25,555 |
PreTrainedModel._load_pretrained_model doesn't move non-persistent buffers to cpu with `low_cpu_mem_usage`
|
{
"login": "shingjan",
"id": 11846349,
"node_id": "MDQ6VXNlcjExODQ2MzQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/11846349?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shingjan",
"html_url": "https://github.com/shingjan",
"followers_url": "https://api.github.com/users/shingjan/followers",
"following_url": "https://api.github.com/users/shingjan/following{/other_user}",
"gists_url": "https://api.github.com/users/shingjan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shingjan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shingjan/subscriptions",
"organizations_url": "https://api.github.com/users/shingjan/orgs",
"repos_url": "https://api.github.com/users/shingjan/repos",
"events_url": "https://api.github.com/users/shingjan/events{/privacy}",
"received_events_url": "https://api.github.com/users/shingjan/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"As I said before, this is not something the Transformers library supports, so I'm not surprised this doesn't work. This tensor `inv_freq` is created at init in the model and doesn't have a key in the state dict since it is a non-persistent buffer so there is nothing we can do for this bug.",
"I was thinking that maybe we can put a fix to work around it. We can either:\r\n1. Ignore non-persistent buffer in `accelerate.utils.modeling.named_module_tensors`. However non-persistent buffers could be used for computation at inference. or\r\n2. Init meta tensor to empty in `accelerate.utils.modeling.set_module_tensor_to_device` if moving meta tensor to non-meta tensors. This carries the assumption that meta tensor initialization to empty tensor is correct. or\r\n3. Init non-persistent buffers in `PreTrainedModel._load_pretrained_model`. \r\n\r\nBy \r\n> This tensor inv_freq is created at init in the model and doesn't have a key in the state dict since it is a non-persistent buffer so there is nothing we can do for this bug.\r\n\r\nI take that you think no.3 is not a viable solution. If so how do the first two options sound to you? Thanks!",
"1. won't work as we need them as you stated\r\n2. won't work either since the tensor needs to have the value set at the init\r\n3. won't work either since we don't know to which value initialize that tensor\r\n\r\nIt's just not possible to fully initialize on the meta device models that have non-persistent buffers that are initialized at init since we lose that value and have no way to recover it. I'm afraid you will need to manually set back those tensors to the proper value.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,692 | 1,695 | 1,695 |
NONE
| null |
### System Info
With the env variable `INIT_INCLUDE_BUFFERS=True` in [accelerate](https://github.com/huggingface/accelerate/pull/1852), big LLM models are first loaded as meta tensors and only use the actual weights during inference. However, `PreTrainedModel._load_pretrained_model` doesn't initialize non-persistent buffer tensors with `low_cpu_mem_usage` and accelerate, leading to a `ValueError: inv_freq is on the meta device, we need a `value` to put in on 0.` when `accelerate` tries to attach hooks to modules via `named_module_tensors`.
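For context, a minimal sketch of why such a value is hard to recover (the class below is illustrative only, not the actual LLaMA rotary-embedding code): a non-persistent buffer is excluded from the state dict, so once the module is instantiated on the meta device its initial value is gone.
```python
import torch
from torch import nn


class ToyRotaryEmbedding(nn.Module):
    """Illustrative only -- not the real implementation."""

    def __init__(self, dim: int):
        super().__init__()
        inv_freq = 1.0 / (10000 ** (torch.arange(0, dim, 2, dtype=torch.float32) / dim))
        # persistent=False deliberately keeps the buffer out of the state dict
        self.register_buffer("inv_freq", inv_freq, persistent=False)


module = ToyRotaryEmbedding(8)
print("inv_freq" in module.state_dict())  # False -> nothing to reload from a checkpoint

# Built on the meta device (as with low_cpu_mem_usage / device_map), the buffer is a
# meta tensor and the value computed in __init__ is lost.
with torch.device("meta"):
    meta_module = ToyRotaryEmbedding(8)
print(meta_module.inv_freq.is_meta)  # True
```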
### Who can help?
cc: @sgugger
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
My repro, after this [PR](https://github.com/huggingface/accelerate/pull/1852) from accelerate:
```python
import torch
from transformers import (
AutoModelForCausalLM,
)
import os
os.environ["ACCELERATE_INIT_INCLUDE_BUFFERS"] = "1"
with torch.no_grad():
model_real = AutoModelForCausalLM.from_pretrained(
"lmsys/vicuna-7b-v1.3", device_map='auto', torch_dtype=torch.float16, trust_remote_code=True
)
```
stacktrace:
```
Traceback (most recent call last):
File "repro/vicuna.py", line 34, in <module>
model_real = AutoModelForCausalLM.from_pretrained(
File "/lib/transformers/src/transformers/models/auto/auto_factory.py", line 511, in from_pretrained
return model_class.from_pretrained(
File "/lib/transformers/src/transformers/modeling_utils.py", line 2996, in from_pretrained
dispatch_model(model, **kwargs)
File "/lib/accelerate/src/accelerate/big_modeling.py", line 385, in dispatch_model
attach_align_device_hook_on_blocks(
File "/lib/accelerate/src/accelerate/hooks.py", line 536, in attach_align_device_hook_on_blocks
attach_align_device_hook_on_blocks(
File "/lib/accelerate/src/accelerate/hooks.py", line 536, in attach_align_device_hook_on_blocks
attach_align_device_hook_on_blocks(
File "/lib/accelerate/src/accelerate/hooks.py", line 536, in attach_align_device_hook_on_blocks
attach_align_device_hook_on_blocks(
File "/lib/accelerate/src/accelerate/hooks.py", line 506, in attach_align_device_hook_on_blocks
add_hook_to_module(module, hook)
File "/lib/accelerate/src/accelerate/hooks.py", line 155, in add_hook_to_module
module = hook.init_hook(module)
File "/lib/accelerate/src/accelerate/hooks.py", line 253, in init_hook
set_module_tensor_to_device(module, name, self.execution_device)
File "/lib/accelerate/src/accelerate/utils/modeling.py", line 277, in set_module_tensor_to_device
raise ValueError(f"{tensor_name} is on the meta device, we need a `value` to put in on {device}.")
ValueError: inv_freq is on the meta device, we need a `value` to put in on 0.
```
### Expected behavior
The above loading should work
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25555/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25555/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25554
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25554/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25554/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25554/events
|
https://github.com/huggingface/transformers/issues/25554
| 1,853,943,033 |
I_kwDOCUB6oc5ugOz5
| 25,554 |
LLaMA-2 runtime error after `resize_token_embeddings`
|
{
"login": "gargutsav",
"id": 110483261,
"node_id": "U_kgDOBpXXPQ",
"avatar_url": "https://avatars.githubusercontent.com/u/110483261?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gargutsav",
"html_url": "https://github.com/gargutsav",
"followers_url": "https://api.github.com/users/gargutsav/followers",
"following_url": "https://api.github.com/users/gargutsav/following{/other_user}",
"gists_url": "https://api.github.com/users/gargutsav/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gargutsav/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gargutsav/subscriptions",
"organizations_url": "https://api.github.com/users/gargutsav/orgs",
"repos_url": "https://api.github.com/users/gargutsav/repos",
"events_url": "https://api.github.com/users/gargutsav/events{/privacy}",
"received_events_url": "https://api.github.com/users/gargutsav/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false |
{
"login": "SunMarc",
"id": 57196510,
"node_id": "MDQ6VXNlcjU3MTk2NTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/57196510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SunMarc",
"html_url": "https://github.com/SunMarc",
"followers_url": "https://api.github.com/users/SunMarc/followers",
"following_url": "https://api.github.com/users/SunMarc/following{/other_user}",
"gists_url": "https://api.github.com/users/SunMarc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SunMarc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SunMarc/subscriptions",
"organizations_url": "https://api.github.com/users/SunMarc/orgs",
"repos_url": "https://api.github.com/users/SunMarc/repos",
"events_url": "https://api.github.com/users/SunMarc/events{/privacy}",
"received_events_url": "https://api.github.com/users/SunMarc/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "SunMarc",
"id": 57196510,
"node_id": "MDQ6VXNlcjU3MTk2NTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/57196510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SunMarc",
"html_url": "https://github.com/SunMarc",
"followers_url": "https://api.github.com/users/SunMarc/followers",
"following_url": "https://api.github.com/users/SunMarc/following{/other_user}",
"gists_url": "https://api.github.com/users/SunMarc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SunMarc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SunMarc/subscriptions",
"organizations_url": "https://api.github.com/users/SunMarc/orgs",
"repos_url": "https://api.github.com/users/SunMarc/repos",
"events_url": "https://api.github.com/users/SunMarc/events{/privacy}",
"received_events_url": "https://api.github.com/users/SunMarc/received_events",
"type": "User",
"site_admin": false
}
] |
[
"Hi @gargutsav\r\n\r\nCould you provide a self-contained code snippet please? Thanks!",
"@ydshieh I was able to recreate it with this snippet\r\n```python\r\nimport torch\r\nfrom datasets import load_dataset\r\nfrom transformers import (AutoModelForCausalLM, AutoTokenizer,\r\n DataCollatorForLanguageModeling, Trainer,\r\n TrainingArguments)\r\n\r\nif __name__ == \"__main__\":\r\n model = AutoModelForCausalLM.from_pretrained(\r\n \"meta-llama/Llama-2-13b-hf\",\r\n trust_remote_code=True,\r\n torch_dtype=torch.bfloat16,\r\n device_map=\"auto\",\r\n )\r\n\r\n tokenizer = AutoTokenizer.from_pretrained(\r\n \"meta-llama/Llama-2-13b-hf\", use_fast=False\r\n )\r\n tokenizer.pad_token = tokenizer.eos_token\r\n tokenizer.add_special_tokens(\r\n {\"additional_special_tokens\": [\"<<SYS>>\", \"<</SYS>>\", \"[INST]\", \"[/INST]\"]}\r\n )\r\n model.resize_token_embeddings(len(tokenizer))\r\n\r\n # Prepare dataset\r\n eli5 = load_dataset(\"eli5\", split=\"train_asks[:5000]\")\r\n eli5 = eli5.train_test_split(test_size=0.2)\r\n eli5 = eli5.flatten()\r\n\r\n def preprocess_function(examples):\r\n return tokenizer(\r\n [\" \".join(x) for x in examples[\"answers.text\"]],\r\n truncation=True,\r\n max_length=1024,\r\n )\r\n\r\n tokenized_eli5 = eli5.map(\r\n preprocess_function,\r\n batched=True,\r\n num_proc=4,\r\n remove_columns=eli5[\"train\"].column_names,\r\n )\r\n block_size = 128\r\n\r\n def group_texts(examples):\r\n concatenated_examples = {k: sum(examples[k], []) for k in examples.keys()}\r\n total_length = len(concatenated_examples[list(examples.keys())[0]])\r\n result = {\r\n k: [t[i: i + block_size] for i in range(0, total_length, block_size)]\r\n for k, t in concatenated_examples.items()\r\n }\r\n result[\"labels\"] = result[\"input_ids\"].copy()\r\n return result\r\n\r\n lm_dataset = tokenized_eli5.map(group_texts, batched=True, num_proc=4)\r\n\r\n training_arguments = TrainingArguments(\r\n output_dir=\"test\",\r\n group_by_length=True,\r\n lr_scheduler_type=\"cosine\",\r\n bf16=True,\r\n report_to=\"none\",\r\n )\r\n data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)\r\n\r\n trainer = Trainer(\r\n model=model,\r\n args=training_arguments,\r\n train_dataset=lm_dataset[\"train\"],\r\n eval_dataset=lm_dataset[\"test\"],\r\n data_collator=data_collator,\r\n )\r\n trainer.train()\r\n```\r\nIt works okay, if we get rid of the `additional_tokens` part\r\n```python\r\ntokenizer.add_special_tokens({\"additional_special_tokens\": [\"<<SYS>>\", \"<</SYS>>\", \"[INST]\", \"[/INST]\"]})\r\nmodel.resize_token_embeddings(len(tokenizer))\r\n```",
"Hello, cc @SunMarc who knows more about `device_map`",
"Hi @gargutsav, I think the problem is that by resizing the token_embedding, it will modify both the tokens_embedding and the lm_head. In this process, the hooks are were dispatched previously (using device_map = 'auto') on these modules no longer exist. Thus, we have this issue. I will try to fix this behavior in a future PR !",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"@SunMarc is this fixed in https://github.com/huggingface/transformers/pull/25596? What version of `transformers` has this update?",
"> @SunMarc is this fixed in #25596? What version of `transformers` has this update?\r\n\r\nnvm see it's fixed in 4.32.0, thank you!"
] | 1,692 | 1,695 | 1,695 |
NONE
| null |
### System Info
Python: `3.8.17`
Transformers: `4.31.0`
Accelerate: `0.21.0`
GPUs: `8 x A100 (80GB)`
### Who can help?
@ArthurZucker @pacman100
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
1. Load `LLaMA-13B` with the following arguments using `AutoModelForCausalLM.from_pretrained`
> device_map="auto"
> torch_dtype=torch.bfloat16
2. Add special tokens to the tokenizer, and do `model.resize_token_embeddings(len(tokenizer))`
3. Error shows in the forward pass
```
File "/opt/conda/envs/llm/lib/python3.8/site-packages/transformers/models/llama/modeling_llama.py", line 824, in forward
logits = self.lm_head(hidden_states)
File "/home/ubuntu/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/home/ubuntu/.local/lib/python3.8/site-packages/torch/nn/modules/linear.py", line 114, in forward
return F.linear(input, self.weight, self.bias)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:6 and cuda:7! (when checking argument for argument mat2 in method wrapper_CUDA_mm)
```
### Expected behavior
When no additional tokens are added to the tokenizer, there is no error. So the error probably originates from the `device_map` becoming incorrect after `resize_token_embeddings`.
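As a stopgap until this is handled in the library (a proper fix later landed upstream, per the comments), one possible workaround sketch, continuing from the snippet above, is to re-dispatch the model after resizing so that fresh hooks get attached; this is untested here and the `no_split_module_classes` value is an assumption:
```python
from accelerate import dispatch_model, infer_auto_device_map
from accelerate.hooks import remove_hook_from_submodules

model.resize_token_embeddings(len(tokenizer))

# drop the now-stale hooks, recompute a device map, and dispatch again
remove_hook_from_submodules(model)
device_map = infer_auto_device_map(model, no_split_module_classes=["LlamaDecoderLayer"])
model = dispatch_model(model, device_map=device_map)
```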
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25554/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25554/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25553
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25553/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25553/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25553/events
|
https://github.com/huggingface/transformers/pull/25553
| 1,853,884,946 |
PR_kwDOCUB6oc5YGVmE
| 25,553 |
Update trainer.py
|
{
"login": "yundai424",
"id": 43726198,
"node_id": "MDQ6VXNlcjQzNzI2MTk4",
"avatar_url": "https://avatars.githubusercontent.com/u/43726198?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yundai424",
"html_url": "https://github.com/yundai424",
"followers_url": "https://api.github.com/users/yundai424/followers",
"following_url": "https://api.github.com/users/yundai424/following{/other_user}",
"gists_url": "https://api.github.com/users/yundai424/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yundai424/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yundai424/subscriptions",
"organizations_url": "https://api.github.com/users/yundai424/orgs",
"repos_url": "https://api.github.com/users/yundai424/repos",
"events_url": "https://api.github.com/users/yundai424/events{/privacy}",
"received_events_url": "https://api.github.com/users/yundai424/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25553). All of your documentation changes will be reflected on that endpoint."
] | 1,692 | 1,692 | 1,692 |
CONTRIBUTOR
| null |
# What does this PR do?
The corresponding config field should be `"forward_prefetch"` (see https://github.com/pytorch/pytorch/blob/v2.0.1/torch/distributed/fsdp/fully_sharded_data_parallel.py#L343), but a typo in `transformers.Trainer` makes it look up "forward_perfect" instead.
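For illustration, a minimal sketch of how the correctly spelled key is meant to be passed (the other argument values are placeholders, not recommendations):
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="out",
    fsdp="full_shard auto_wrap",
    fsdp_config={"forward_prefetch": True},  # the key the Trainer should be reading
)
```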
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25553/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25553/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25553",
"html_url": "https://github.com/huggingface/transformers/pull/25553",
"diff_url": "https://github.com/huggingface/transformers/pull/25553.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25553.patch",
"merged_at": 1692252633000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25552
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25552/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25552/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25552/events
|
https://github.com/huggingface/transformers/issues/25552
| 1,853,776,884 |
I_kwDOCUB6oc5ufmP0
| 25,552 |
SSL fails and we can't pull from HuggingFaceHub
|
{
"login": "Sleemanmunk",
"id": 202122,
"node_id": "MDQ6VXNlcjIwMjEyMg==",
"avatar_url": "https://avatars.githubusercontent.com/u/202122?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Sleemanmunk",
"html_url": "https://github.com/Sleemanmunk",
"followers_url": "https://api.github.com/users/Sleemanmunk/followers",
"following_url": "https://api.github.com/users/Sleemanmunk/following{/other_user}",
"gists_url": "https://api.github.com/users/Sleemanmunk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Sleemanmunk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Sleemanmunk/subscriptions",
"organizations_url": "https://api.github.com/users/Sleemanmunk/orgs",
"repos_url": "https://api.github.com/users/Sleemanmunk/repos",
"events_url": "https://api.github.com/users/Sleemanmunk/events{/privacy}",
"received_events_url": "https://api.github.com/users/Sleemanmunk/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"This is a duplicate of #17611, our hub was down for a bit yesterday, everything should be back up now. ",
"We had this problem all day. The solutions in #17611 do not work, and the problem persists today.",
"On my minimal build I was able to reproduce the SSL workaround by setting CURL_CA_BUNDLE and downgrading requests, \r\n\r\nbut that only leads to this error:\r\n\r\n File \"/mnt/c/Users/saleem/Desktop/test.py\", line 2, in <module>\r\n tokenizer = AutoTokenizer.from_pretrained('gpt2')\r\n File \"/home/saleem/.local/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py\", line 652, in from_pretrained\r\n tokenizer_config = get_tokenizer_config(pretrained_model_name_or_path, **kwargs)\r\n File \"/home/saleem/.local/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py\", line 496, in get_tokenizer_config\r\n resolved_config_file = cached_file(\r\n File \"/home/saleem/.local/lib/python3.10/site-packages/transformers/utils/hub.py\", line 417, in cached_file\r\n resolved_file = hf_hub_download(\r\n File \"/home/saleem/.local/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py\", line 118, in _inner_fn\r\n return fn(*args, **kwargs)\r\n File \"/home/saleem/.local/lib/python3.10/site-packages/huggingface_hub/file_download.py\", line 1214, in hf_hub_download\r\n raise OSError(\"Distant resource does not seem to be on huggingface.co (missing commit header).\")\r\nOSError: Distant resource does not seem to be on huggingface.co (missing commit header).",
"Then I invite you to read [this tutorial](https://saturncloud.io/blog/facing-ssl-error-with-huggingface-pretrained-models-a-comprehensive-guide/#:~:text=SSL%20errors%20with%20Huggingface%20pretrained%20models%20can%20be%20frustrating%2C%20but,back%20to%20your%20NLP%20projects.). Hub is working alright on our ends",
"> This is a duplicate of #17611, our hub was down for a bit yesterday, everything should be back up now.\r\n\r\nYes, I also faced a similar issue, It is back to normal now. ",
"Closing this issue as the root problem [seemed to come from the firewall](https://github.com/huggingface/huggingface_hub/issues/1600#issuecomment-1684391061)."
] | 1,692 | 1,694 | 1,694 |
NONE
| null |
### System Info
- `transformers` version: 4.31.0
- Platform: Linux-5.4.72-microsoft-standard-WSL2-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.2
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): not installed (NA)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
SSL is failing with the Hugging Face Hub and we can't pull even a simple tokenizer. Yesterday this worked with our full implementation. This example is minimal, with a fresh build, to avoid confounding factors.
None of PyTorch, TensorFlow >= 2.0, or Flax have been found. Models won't be available and only tokenizers, configuration and file/data utilities can be used.
Traceback (most recent call last):
File "/home/saleem/.local/lib/python3.10/site-packages/urllib3/connectionpool.py", line 467, in _make_request
self._validate_conn(conn)
File "/home/saleem/.local/lib/python3.10/site-packages/urllib3/connectionpool.py", line 1092, in _validate_conn
conn.connect()
File "/home/saleem/.local/lib/python3.10/site-packages/urllib3/connection.py", line 642, in connect
sock_and_verified = _ssl_wrap_socket_and_match_hostname(
File "/home/saleem/.local/lib/python3.10/site-packages/urllib3/connection.py", line 783, in _ssl_wrap_socket_and_match_hostname
ssl_sock = ssl_wrap_socket(
File "/home/saleem/.local/lib/python3.10/site-packages/urllib3/util/ssl_.py", line 469, in ssl_wrap_socket
ssl_sock = _ssl_wrap_socket_impl(sock, context, tls_in_tls, server_hostname)
File "/home/saleem/.local/lib/python3.10/site-packages/urllib3/util/ssl_.py", line 513, in _ssl_wrap_socket_impl
return ssl_context.wrap_socket(sock, server_hostname=server_hostname)
File "/usr/lib/python3.10/ssl.py", line 513, in wrap_socket
return self.sslsocket_class._create(
File "/usr/lib/python3.10/ssl.py", line 1071, in _create
self.do_handshake()
File "/usr/lib/python3.10/ssl.py", line 1342, in do_handshake
self._sslobj.do_handshake()
ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1007)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/saleem/.local/lib/python3.10/site-packages/urllib3/connectionpool.py", line 790, in urlopen
response = self._make_request(
File "/home/saleem/.local/lib/python3.10/site-packages/urllib3/connectionpool.py", line 491, in _make_request
raise new_e
urllib3.exceptions.SSLError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1007)
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/saleem/.local/lib/python3.10/site-packages/requests/adapters.py", line 486, in send
resp = conn.urlopen(
File "/home/saleem/.local/lib/python3.10/site-packages/urllib3/connectionpool.py", line 844, in urlopen
retries = retries.increment(
File "/home/saleem/.local/lib/python3.10/site-packages/urllib3/util/retry.py", line 515, in increment
raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type]
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /gpt2/resolve/main/tokenizer_config.json (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1007)')))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/mnt/c/Users/saleem/Desktop/test.py", line 2, in <module>
tokenizer = AutoTokenizer.from_pretrained('gpt2')
File "/home/saleem/.local/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py", line 652, in from_pretrained
tokenizer_config = get_tokenizer_config(pretrained_model_name_or_path, **kwargs)
File "/home/saleem/.local/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py", line 496, in get_tokenizer_config
resolved_config_file = cached_file(
File "/home/saleem/.local/lib/python3.10/site-packages/transformers/utils/hub.py", line 417, in cached_file
resolved_file = hf_hub_download(
File "/home/saleem/.local/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 118, in _inner_fn
return fn(*args, **kwargs)
File "/home/saleem/.local/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 1195, in hf_hub_download
metadata = get_hf_file_metadata(
File "/home/saleem/.local/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 118, in _inner_fn
return fn(*args, **kwargs)
File "/home/saleem/.local/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 1532, in get_hf_file_metadata
r = _request_wrapper(
File "/home/saleem/.local/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 407, in _request_wrapper
response = _request_wrapper(
File "/home/saleem/.local/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 442, in _request_wrapper
return http_backoff(
File "/home/saleem/.local/lib/python3.10/site-packages/huggingface_hub/utils/_http.py", line 258, in http_backoff
response = session.request(method=method, url=url, **kwargs)
File "/home/saleem/.local/lib/python3.10/site-packages/requests/sessions.py", line 589, in request
resp = self.send(prep, **send_kwargs)
File "/home/saleem/.local/lib/python3.10/site-packages/requests/sessions.py", line 703, in send
r = adapter.send(request, **kwargs)
File "/home/saleem/.local/lib/python3.10/site-packages/huggingface_hub/utils/_http.py", line 63, in send
return super().send(request, *args, **kwargs)
File "/home/saleem/.local/lib/python3.10/site-packages/requests/adapters.py", line 517, in send
raise SSLError(e, request=request)
requests.exceptions.SSLError: (MaxRetryError("HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /gpt2/resolve/main/tokenizer_config.json (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1007)')))"), '(Request ID: a7ce5e76-2190-4579-b0e7-655993202fd7)')
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('gpt2')
```
### Expected behavior
Tokenizer downloads successfully
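Side note: when the root cause turns out to be a TLS-intercepting proxy or firewall (as it did here), pointing Python at the proxy's CA bundle is usually enough; the path below is a placeholder, not a real file:
```python
import os

# placeholder path -- use the CA bundle exported by your proxy/firewall
os.environ["REQUESTS_CA_BUNDLE"] = "/etc/ssl/certs/corporate-ca.pem"
os.environ["CURL_CA_BUNDLE"] = "/etc/ssl/certs/corporate-ca.pem"

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
```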
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25552/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25552/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25551
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25551/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25551/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25551/events
|
https://github.com/huggingface/transformers/pull/25551
| 1,853,692,616 |
PR_kwDOCUB6oc5YFryA
| 25,551 |
Fix how we get a quantization method from config
|
{
"login": "Rexhaif",
"id": 5154447,
"node_id": "MDQ6VXNlcjUxNTQ0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/5154447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rexhaif",
"html_url": "https://github.com/Rexhaif",
"followers_url": "https://api.github.com/users/Rexhaif/followers",
"following_url": "https://api.github.com/users/Rexhaif/following{/other_user}",
"gists_url": "https://api.github.com/users/Rexhaif/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rexhaif/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rexhaif/subscriptions",
"organizations_url": "https://api.github.com/users/Rexhaif/orgs",
"repos_url": "https://api.github.com/users/Rexhaif/repos",
"events_url": "https://api.github.com/users/Rexhaif/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rexhaif/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @younesbelkada and @SunMarc ",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25551). All of your documentation changes will be reflected on that endpoint.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,692 | 1,695 | 1,695 |
CONTRIBUTOR
| null |
# What does this PR do?
Fixes #25550
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25551/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25551/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25551",
"html_url": "https://github.com/huggingface/transformers/pull/25551",
"diff_url": "https://github.com/huggingface/transformers/pull/25551.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25551.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/25550
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25550/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25550/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25550/events
|
https://github.com/huggingface/transformers/issues/25550
| 1,853,670,367 |
I_kwDOCUB6oc5ufMPf
| 25,550 |
Quantization model initialization routine tries to use quantization_config as if it is a dict
|
{
"login": "Rexhaif",
"id": 5154447,
"node_id": "MDQ6VXNlcjUxNTQ0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/5154447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rexhaif",
"html_url": "https://github.com/Rexhaif",
"followers_url": "https://api.github.com/users/Rexhaif/followers",
"following_url": "https://api.github.com/users/Rexhaif/following{/other_user}",
"gists_url": "https://api.github.com/users/Rexhaif/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rexhaif/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rexhaif/subscriptions",
"organizations_url": "https://api.github.com/users/Rexhaif/orgs",
"repos_url": "https://api.github.com/users/Rexhaif/repos",
"events_url": "https://api.github.com/users/Rexhaif/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rexhaif/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Thanks for the reproducer. Pinging @younesbelkada ",
"Hi @Rexhaif , thanks for reporting. I'm unable to reproduce this. Can you give me a minimal reproducible exmaple ? Make sure that you are on the main branch. It is strange that it is calling `quantization_method_from_config = config.quantization_config.get(` as the config should not have the `quantization_config` field if we are not loading quantized weights. ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,692 | 1,695 | 1,695 |
CONTRIBUTOR
| null |
The whole problem is here https://github.com/huggingface/transformers/blob/36f183ebab1261b388739d628aaa0b4150068df0/src/transformers/modeling_utils.py#L2389C34-L2389C34
One part of the quantized-model initialization tries to use the `.get(...)` method of `quantization_config`. Presumably this was written on the assumption that `quantization_config` is always a dict loaded from some JSON file. However, this is not always the case.
For some reason it is not caught by tests, but I hit it during training with a modified [run_classification.py](https://github.com/huggingface/transformers/blob/36f183ebab1261b388739d628aaa0b4150068df0/examples/pytorch/text-classification/run_classification.py).
Here is customized model_init function:
```python
def model_init():
if not model_args.use_lora and model_args.n_bits in {4, 8}:
raise ValueError("4 and 8bit modes can only be used with LoRA")
if model_args.n_bits not in {4, 8, 16, 32}:
raise ValueError("Only 4, 8, 16, and 32bit modes are supported")
model_kwargs = {}
if model_args.n_bits == 4:
model_kwargs['quantization_config'] = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_compute_dtype=torch.bfloat16,
bnb_4bit_quant_type='nf4',
bnb_4bit_use_double_quant=True
)
model_kwargs['torch_dtype'] = torch.bfloat16
elif model_args.n_bits == 8:
model_kwargs['quantization_config'] = BitsAndBytesConfig(
load_in_8bit=True
)
model = AutoModelForSequenceClassification.from_pretrained(
model_args.model_name_or_path,
from_tf=bool(".ckpt" in model_args.model_name_or_path),
config=config,
cache_dir=model_args.cache_dir,
revision=model_args.model_revision,
token=model_args.token,
trust_remote_code=model_args.trust_remote_code,
ignore_mismatched_sizes=model_args.ignore_mismatched_sizes,
**model_kwargs
)
if model_args.use_lora:
if model_args.n_bits in {4, 8}:
model = peft.prepare_model_for_kbit_training(model)
lora_config = peft.LoraConfig(
task_type=peft.TaskType.SEQ_CLS,
r=128, lora_alpha=64,
lora_dropout=0.05,
inference_mode=False
)
model = peft.get_peft_model(model, lora_config)
model.print_trainable_parameters()
return model
```
And here is the stack trace:
```python
Traceback (most recent call last):
File "/workspace/notebooks/wmt23/run_regression.py", line 786, in <module>
main()
File "/workspace/notebooks/wmt23/run_regression.py", line 724, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/usr/local/lib/python3.10/dist-packages/transformers/trainer.py", line 1509, in train
self.model = self.call_model_init(trial)
File "/usr/local/lib/python3.10/dist-packages/transformers/trainer.py", line 1239, in call_model_init
model = self.model_init()
File "/workspace/notebooks/wmt23/run_regression.py", line 554, in model_init
model = AutoModelForSequenceClassification.from_pretrained(
File "/usr/local/lib/python3.10/dist-packages/transformers/models/auto/auto_factory.py", line 516, in from_pretrained
return model_class.from_pretrained(
File "/usr/local/lib/python3.10/dist-packages/transformers/modeling_utils.py", line 2389, in from_pretrained
quantization_method_from_config = config.quantization_config.get(
AttributeError: 'BitsAndBytesConfig' object has no attribute 'get'
```
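A defensive lookup along these lines would tolerate both a plain dict and a `BitsAndBytesConfig`-style object; this is only a sketch of the idea, not the actual change proposed in the linked PR:
```python
def get_quant_method(quantization_config, default="bitsandbytes"):
    """Works whether quantization_config is a raw dict or a config object."""
    if isinstance(quantization_config, dict):
        return quantization_config.get("quant_method", default)
    return getattr(quantization_config, "quant_method", default)
```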
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25550/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25550/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25549
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25549/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25549/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25549/events
|
https://github.com/huggingface/transformers/pull/25549
| 1,853,639,725 |
PR_kwDOCUB6oc5YFgRe
| 25,549 |
Fix `torch.fx` tests on nightly CI
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"> Good for me! Do you know for which models we had this triggered?\r\n\r\nAll of them, you can see it on\r\n\r\n[internal slack channel](https://huggingface.slack.com/archives/C02E34229JA/p1691944652716439)"
] | 1,692 | 1,692 | 1,692 |
COLLABORATOR
| null |
# What does this PR do?
A new `_CodeOnlyModule` type was introduced in nightly torch (I have no idea about this). Our code needs an update so that `torch.fx` keeps working with nightly (and future) torch.
Current error:
```
tests/models/layoutlm/test_modeling_layoutlm.py::LayoutLMModelTest::test_torch_fx
(line 753) AssertionError: Couldn't serialize / deserialize the traced model: Could not generate input named bbox for because root is not a transformers.PreTrainedModel.
```
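For reference, the kind of tracing these tests exercise looks roughly like this (checkpoint and input names are illustrative):
```python
from transformers import LayoutLMModel
from transformers.utils.fx import symbolic_trace

model = LayoutLMModel.from_pretrained("microsoft/layoutlm-base-uncased")
traced = symbolic_trace(model, input_names=["input_ids", "bbox", "attention_mask"])
```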
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25549/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25549/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25549",
"html_url": "https://github.com/huggingface/transformers/pull/25549",
"diff_url": "https://github.com/huggingface/transformers/pull/25549.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25549.patch",
"merged_at": 1692259375000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25548
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25548/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25548/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25548/events
|
https://github.com/huggingface/transformers/pull/25548
| 1,853,600,479 |
PR_kwDOCUB6oc5YFXtq
| 25,548 |
Fix MPT CI
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Merge now. Leave comment if any @younesbelkada when you are back 🙏 ",
"Looks great @ydshieh , thanks! ",
"Hello"
] | 1,692 | 1,692 | 1,692 |
COLLABORATOR
| null |
# What does this PR do?
Hello, let's just fix this ....
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25548/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25548/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25548",
"html_url": "https://github.com/huggingface/transformers/pull/25548",
"diff_url": "https://github.com/huggingface/transformers/pull/25548.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25548.patch",
"merged_at": 1692255986000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25547
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25547/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25547/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25547/events
|
https://github.com/huggingface/transformers/pull/25547
| 1,853,581,487 |
PR_kwDOCUB6oc5YFTlD
| 25,547 |
🚨🚨🚨 Vivit update default rescale_factor value
|
{
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,692 | 1,692 | 1,692 |
COLLABORATOR
| null |
# What does this PR do?
Updates the default value for `rescale_factor` in line with the updates that happened in #25174 and in the model's checkpoints on the hub: [1](https://huggingface.co/google/vivit-b-16x2-kinetics400/commit/8a7171a57f79b9aaa58bc8d977c002a0ea0f0d42), [2](https://huggingface.co/google/vivit-b-16x2/commit/fc341053d36b42d446b3ffccdbd52452712a23f3).
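Since this changes a default, a pipeline that depended on the previous behaviour can pin the factor explicitly when loading the image processor; a sketch, where the value shown is only an example, not the old or new default:
```python
from transformers import VivitImageProcessor

processor = VivitImageProcessor.from_pretrained(
    "google/vivit-b-16x2-kinetics400",
    rescale_factor=1 / 127.5,  # example value; substitute the factor your pipeline expects
)
```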
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25547/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25547/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25547",
"html_url": "https://github.com/huggingface/transformers/pull/25547",
"diff_url": "https://github.com/huggingface/transformers/pull/25547.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25547.patch",
"merged_at": 1692261357000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25546
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25546/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25546/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25546/events
|
https://github.com/huggingface/transformers/issues/25546
| 1,853,507,992 |
I_kwDOCUB6oc5uekmY
| 25,546 |
Andromeda
|
{
"login": "kyegomez",
"id": 98760976,
"node_id": "U_kgDOBeL5EA",
"avatar_url": "https://avatars.githubusercontent.com/u/98760976?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kyegomez",
"html_url": "https://github.com/kyegomez",
"followers_url": "https://api.github.com/users/kyegomez/followers",
"following_url": "https://api.github.com/users/kyegomez/following{/other_user}",
"gists_url": "https://api.github.com/users/kyegomez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kyegomez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kyegomez/subscriptions",
"organizations_url": "https://api.github.com/users/kyegomez/orgs",
"repos_url": "https://api.github.com/users/kyegomez/repos",
"events_url": "https://api.github.com/users/kyegomez/events{/privacy}",
"received_events_url": "https://api.github.com/users/kyegomez/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] |
open
| false | null |
[] |
[] | 1,692 | 1,692 | null |
NONE
| null |
### Model description
## **Andromeda Specs**: Unveiling Mastery
**Overview**
Elegantly marrying craftsmanship and technology, Andromeda is not just another step in AI evolution. It's a giant leap. Driven by precision, powered by innovation, and defined by excellence, Andromeda is the epitome of intelligence realized. Here, we detail the marvel that is Andromeda, in numbers, facts, and logic.
---
### **Specifications**
| **Feature** | **Specification** |
|----------------------------------------------|-----------------------------------------------|
| **Sequence Handling** | Ultra Long (32,000 - 200,000+ context lengths)|
| **Processing Speed** | Ultra Fast (32,000+ tokens in < 100ms) |
| **Reasoning Abilities** | Creativity, Quantitative |
| **Attention Mechanism** | Flash Attention 2.0 Triton |
| **Memory Consumption** (compared to GPT-3) | 100x Less |
| **Memory Consumption** (compared to LLAMA) | 30x Less |
| **Max Sequence Processing Speed** | 100,000+ sequences in < 300ms |
| **Dataset Strategy** | Books, Falcon, Redpajama, Math, Code |
| **Functionality** | FSDP, HF Accelerate, Poetry Composition, API Calls, and more |
---
### **Benchmarks**
**Speed**: At the heart of Andromeda's unparalleled capabilities is its raw speed. Leveraging the prowess of Flash Attention 2.0 Triton, it doesn't merely process data; it blazes through it. This power allows it to consume 50x less memory than its predecessor, GPT-3, and 10x less than LLAMA.
---
### **Why Andromeda?**
- **Performance**: Andromeda isn't about doing things faster; it's about doing them the best. Reliable processing of sequences, even as extensive as 100,000+ lengths, is realized in the blink of an eye, under 300ms.
- **Precision and Creativity**: The dataset strategy is no mere algorithm. It's a symphony, meticulously crafted to offer both creativity and quantitative reasoning.
- **Versatility**: Andromeda doesn't just compute; it contemplates. Whether you need the flair of a poet or the precision of an API call, Andromeda delivers, seamlessly.
---
### **Andromeda Principles**
- **Efficiency**: It's not just about doing more; it's about doing better. Techniques like attention flashing, rotary position encodings, and deep normalization ensure every cycle, every operation, every byte is optimized for performance.
- **Flexibility**: In the ever-evolving world of technology, adaptability is king. Andromeda is designed to mold, adapt, and excel, irrespective of the task or domain.
- **Scalability**: Grow with you, for you. Andromeda isn't static. It's dynamic, designed to scale, accommodating growing resources and expanding data sizes.
- **Community-Driven**: Behind Andromeda's machine brain is the human heart of the community. It doesn't just utilize open source; it thrives on it, constantly evolving, learning, and improving with contributions from around the world.
For enthusiasts, developers, and thinkers looking to dive deeper, the Model Architecture documentation offers an exhaustive, detailed view into the intricate marvel that is Andromeda. Dive in, and witness engineering and artistry in harmony.
---
### **Andromeda: A Detailed Technical Overview**
At the intersection of technological ingenuity and groundbreaking design principles, Andromeda emerges. Representing the zenith of years of research and development, it promises a transformative leap in AI performance, efficiency, and versatility. In this technical specifications document, we deconstruct the intricacies of Andromeda, presenting a meticulous overview of its structure, performance metrics, and underlying methodologies.
## **Feature Insights**
### **Alibi Positional Bias**
Empowering Andromeda to discern relative positions between tokens, this feature accentuates its ability to grasp intricate relationships within a sequence.
### **Rotary Position Encodings (xpos)**
This is a revolutionary means of encoding positions, shrinking the model's memory demands and propelling training speeds.
### **Flash Attention**
This is the linchpin of Andromeda's speed prowess, minimizing attention computations, thus boosting training and inference phases.
### **Deep Normalization (deepnorm)**
By normalizing activations, deep normalization shores up training stability, allowing Andromeda to identify intricate patterns with finesse.
## **Feature Insights (Contd.)**
### **Attn One KV Head (Multiquery Attention)**
A breakthrough in attention mechanism design, this feature allows for simultaneous computation of multiple queries against the same set of key-values, fostering speed and efficiency.
### **QK Norm & Attention QK Norm**
These two features introduce a normalization step in the query and key matrices. This step facilitates stabilization in the attention mechanism, rendering it more robust and enabling it to scale with larger input sizes.
### **Attention QK Norm Dimension Scale**
A sophisticated adjustment to the attention mechanism, it modulates the normalization scale in accordance with the dimensions of the model. The result is a more adaptive and responsive attention framework.
### **Embedding Provider**
At the foundation of Andromeda, this module facilitates the embedding process, converting token sequences into dense vectors. Tailored for Andromeda, it ensures rapid and efficient embedding processes.
---
## **Deeper Dive: Model Parameters**
Unpacking Andromeda means diving deep into the parameters that shape its capabilities. Here's a granular view:
| **Parameter** | **Description** | **Default Value** |
|-----------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------------|
| **num_tokens** | Total number of tokens in the vocabulary. | 50432 |
| **max_seq_len** | Maximum sequence length the model can process. | 8192 |
| **dim** | Dimension size of the model. It represents the size of embeddings and general depth in neural layers. | 2560 |
| **depth** | Represents the number of transformer layers in the architecture. | 32 |
| **dim_head** | Dimension size of each head in multi-head attention mechanism. | 128 |
| **heads** | Total number of heads in multi-head attention. | 24 |
| **use_abs_pos_emb** | Boolean flag to determine if absolute positional embeddings are used. | False |
| **alibi_pos_bias** | Enables the alibi positional bias in attention mechanisms. | True |
| **alibi_num_heads** | Specifies the number of heads for the alibi positional bias. | 12 |
| **rotary_xpos** | Determines if rotary positional encodings are utilized. | True |
| **attn_flash** | Flag to activate the Flash Attention mechanism, minimizing computations in the attention phase. | True |
| **shift_tokens** | The number of tokens by which input sequences are shifted. Essential for certain sequence-to-sequence tasks. | 1 |
| **attn_one_kv_head** | Activates multiquery attention by computing multiple queries against a singular key-value pair. | True |
| **qk_norm** | Enables the query-key normalization mechanism in the attention phase. | True |
| **attn_qk_norm** | A more advanced version of query-key normalization that scales according to the model's dimensions. | True |
| **attn_qk_norm_dim_scale** | Modulates the scale of the aforementioned attention normalization based on the model's dimensionality. | True |
| **embedding_provider** | The module responsible for providing embeddings. Custom providers can be passed for tailored embedding processes. | AndromedaEmbedding|
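As a rough illustration of how these defaults fit together, here is a hypothetical instantiation sketch; the import path, class names, and keyword arguments are assumptions inferred from the table above and the linked repository, not a verified API:
```python
# Hypothetical sketch only: module path and constructor signature are assumed from
# the parameter table above and may not match the actual Andromeda repository.
from andromeda import Andromeda, AndromedaEmbedding  # assumed import path

model = Andromeda(
    num_tokens=50432,
    max_seq_len=8192,
    dim=2560,
    depth=32,
    dim_head=128,
    heads=24,
    use_abs_pos_emb=False,
    alibi_pos_bias=True,
    alibi_num_heads=12,
    rotary_xpos=True,
    attn_flash=True,
    shift_tokens=1,
    attn_one_kv_head=True,
    qk_norm=True,
    attn_qk_norm=True,
    attn_qk_norm_dim_scale=True,
    embedding_provider=AndromedaEmbedding(),
)
```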
---
## **Insights and Techniques**
#### **1. Floating-Point Operations (FLOPs)**
Considering the number of FLOPs is paramount. It provides a metric to gauge the computational intensity and, by extension, the potential speed of the model.
#### **2. Flash Attention 2.0 Triton**
Enhanced with CUDA, this method offers a significant surge in the number of FLOPs the model can handle, amplifying its overall efficiency.
#### **3. Mixed Precision Training**
By embracing mixed precision, Andromeda realizes a noteworthy uptick in training speed while achieving commendable memory efficiency.
#### **4. Deepspeed 3 with NVMe Integration**
This powerful combination paves the way for superlative optimization during the training phase.
#### **5. 8-bit Optimizer**
Further pushing the boundaries of speed, the 8-bit optimizer boosts processing times without compromising the integrity of results.
#### **6. Gradient Clipping**
This technique has been integrated into the training regimen, achieving a massive speedup and preventing undesirable spikes during the process.
#### **7. Advanced Techniques: XPOS, ALIBI, QK Layernorm**
These sophisticated techniques are harnessed for superior extrapolation, interpolation, and stabilization during training.
#### **8. Multi Query Attention**
This approach has been adopted to supercharge decoding speeds.
#### **9. Parallelized Transformer Blocks**
Ensuring that the model's performance is consistently high, these blocks run in tandem to provide a smooth and efficient operational experience.
#### **10. Shifted Tokens**
In a strategic move, Andromeda sidesteps traditional positional embeddings, relying instead on shifted tokens for sequence length progression.
#### **11. Positional Interpolation**
This innovative technique augments the model's ability to manage sequences more effectively.
#### **12. Optimized CUDA Embedding Function**
This function is tailored for peak performance, ensuring rapid and accurate computations.
#### **13. Nebula Loss Function**
Integrated into Andromeda, this polymorphic loss function is adept at handling multi-task training scenarios.
## **A Word on Optimization and Future Iterations**
As with any state-of-the-art model, Andromeda's design is an ever-evolving tapestry. This means iterative refinement. As feedback streams in and technology progresses, expect advancements in:
- **Model Pruning**: Trimming redundancies, bolstering efficiency.
- **Knowledge Distillation**: Harnessing the wisdom of larger models in smaller, more agile architectures.
- **Zero-Shot and Few-Shot Learning**: Broadening adaptability horizons.
- **Enhanced Data Augmentation**: Fortifying the model's grasp on varied, nuanced contexts.
- **Decentralized Training**: Tapping into the global hive-mind, harnessing the collaborative power of the community.
## **Potential Other Future Trajectories**
#### **1. Clearer Metrics**
There's always room to elevate the benchmarking rigor, especially concerning reasoning abilities.
#### **2. Robust Validation and Testing Environment**
Further fine-tuning of the testing environment can offer even more reliable validations of Andromeda's capabilities.
#### **3. Comprehensive Documentation**
To bolster transparency and replicability, detailed documentation covering every facet of Andromeda is on the horizon.
#### **4. Benchmarking Against Peers**
By juxtaposing Andromeda against its counterparts, its distinctive advantages can be spotlighted more effectively.
#### **5. Spotlight on Real-World Applications**
By highlighting tangible use-cases, the versatility and prowess of Andromeda can be showcased in palpable contexts.
#### **6. Model Interpretability**
Future iterations might delve deeper into model interpretability, especially for critical applications.
#### **7. Niche Customizations**
By tailoring Andromeda to meet specific niche needs, its adaptability and value proposition can be further enhanced.
#### **8. Collaborative Endeavors**
Engaging more intimately with the global research community could spawn collaborative projects, bringing diverse insights to the fore.
As we voyage further into the AI frontier, Andromeda stands as a beacon, illuminating the path forward, promising marvels yet to come. It's not just about machine intelligence; it's about the dance between human curiosity and machine capability.
---
Join us on this journey. Dive deeper, ask questions, innovate, and let's redefine what's possible, together.
### Open source status
- [X] The model implementation is available
- [ ] The model weights are available
### Provide useful links for the implementation
https://github.com/kyegomez/Andromeda
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25546/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25546/timeline
| null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/25545
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25545/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25545/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25545/events
|
https://github.com/huggingface/transformers/pull/25545
| 1,853,491,740 |
PR_kwDOCUB6oc5YFABf
| 25,545 |
Add LayoutLM Head for Relation Extraction
|
{
"login": "yang0369",
"id": 41265511,
"node_id": "MDQ6VXNlcjQxMjY1NTEx",
"avatar_url": "https://avatars.githubusercontent.com/u/41265511?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yang0369",
"html_url": "https://github.com/yang0369",
"followers_url": "https://api.github.com/users/yang0369/followers",
"following_url": "https://api.github.com/users/yang0369/following{/other_user}",
"gists_url": "https://api.github.com/users/yang0369/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yang0369/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yang0369/subscriptions",
"organizations_url": "https://api.github.com/users/yang0369/orgs",
"repos_url": "https://api.github.com/users/yang0369/repos",
"events_url": "https://api.github.com/users/yang0369/events{/privacy}",
"received_events_url": "https://api.github.com/users/yang0369/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"It looks like your PR touches a lot of files and has a lot of conflicts. You probably need to start from a clean branch.",
"sry, accidentally merge it. pls ignore this."
] | 1,692 | 1,693 | 1,693 |
NONE
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25545/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25545/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25545",
"html_url": "https://github.com/huggingface/transformers/pull/25545",
"diff_url": "https://github.com/huggingface/transformers/pull/25545.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25545.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/25544
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25544/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25544/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25544/events
|
https://github.com/huggingface/transformers/pull/25544
| 1,853,490,003 |
PR_kwDOCUB6oc5YE_pI
| 25,544 |
Fix `MaskFormerModelIntegrationTest` OOM
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,692 | 1,692 | 1,692 |
COLLABORATOR
| null |
# What does this PR do?
The change in #25297 somewhat increases memory usage, and `tests/models/maskformer/test_modeling_maskformer.py::MaskFormerModelIntegrationTest::test_with_segmentation_maps_and_loss` starts to fail when `MaskFormerModelIntegrationTest` is run as a whole.
This PR changes the image size in `MaskFormerModelIntegrationTest` to avoid OOM.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25544/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25544/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25544",
"html_url": "https://github.com/huggingface/transformers/pull/25544",
"diff_url": "https://github.com/huggingface/transformers/pull/25544.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25544.patch",
"merged_at": 1692202284000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25543
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25543/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25543/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25543/events
|
https://github.com/huggingface/transformers/pull/25543
| 1,853,385,769 |
PR_kwDOCUB6oc5YEo94
| 25,543 |
fix vit hybrid test
|
{
"login": "SunMarc",
"id": 57196510,
"node_id": "MDQ6VXNlcjU3MTk2NTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/57196510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SunMarc",
"html_url": "https://github.com/SunMarc",
"followers_url": "https://api.github.com/users/SunMarc/followers",
"following_url": "https://api.github.com/users/SunMarc/following{/other_user}",
"gists_url": "https://api.github.com/users/SunMarc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SunMarc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SunMarc/subscriptions",
"organizations_url": "https://api.github.com/users/SunMarc/orgs",
"repos_url": "https://api.github.com/users/SunMarc/repos",
"events_url": "https://api.github.com/users/SunMarc/events{/privacy}",
"received_events_url": "https://api.github.com/users/SunMarc/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,692 | 1,692 | 1,692 |
MEMBER
| null |
# What does this PR do?
This PR fixes a [test](https://github.com/huggingface/transformers/actions/runs/5503465492/job/14897632810). The bug was introduced by this [PR](https://github.com/huggingface/accelerate/pull/1648) in the accelerate library. Basically, for a single-GPU setup, when we use `device_map='auto'`, we no longer add hooks to the model. Hence, the test has been failing because we need to move the input to the right device.
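For illustration, a minimal sketch of the pattern the fix relies on (the actual test change may differ; the checkpoint name and example image are only placeholders):
```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, ViTHybridForImageClassification

# With device_map="auto" on a single GPU, accelerate no longer attaches hooks that
# move inputs automatically, so the inputs must be placed on the model's device.
model = ViTHybridForImageClassification.from_pretrained(
    "google/vit-hybrid-base-bit-384", device_map="auto"
)
processor = AutoImageProcessor.from_pretrained("google/vit-hybrid-base-bit-384")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, return_tensors="pt").to(model.device)

with torch.no_grad():
    logits = model(**inputs).logits
```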
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25543/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25543/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25543",
"html_url": "https://github.com/huggingface/transformers/pull/25543",
"diff_url": "https://github.com/huggingface/transformers/pull/25543.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25543.patch",
"merged_at": 1692198178000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25542
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25542/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25542/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25542/events
|
https://github.com/huggingface/transformers/issues/25542
| 1,853,385,523 |
I_kwDOCUB6oc5ueGsz
| 25,542 |
Could `compute_loss` in Trainer double calculate the loss of the model?
|
{
"login": "minhtriet",
"id": 2603847,
"node_id": "MDQ6VXNlcjI2MDM4NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/2603847?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/minhtriet",
"html_url": "https://github.com/minhtriet",
"followers_url": "https://api.github.com/users/minhtriet/followers",
"following_url": "https://api.github.com/users/minhtriet/following{/other_user}",
"gists_url": "https://api.github.com/users/minhtriet/gists{/gist_id}",
"starred_url": "https://api.github.com/users/minhtriet/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/minhtriet/subscriptions",
"organizations_url": "https://api.github.com/users/minhtriet/orgs",
"repos_url": "https://api.github.com/users/minhtriet/repos",
"events_url": "https://api.github.com/users/minhtriet/events{/privacy}",
"received_events_url": "https://api.github.com/users/minhtriet/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Indeed, do you want to open a PR with a fix?",
"How do you think we should solve it. A quick and dirty way I am using is to pop() the labels instead of get().\r\n\r\n> On 16. Aug 2023, at 22:40, Sylvain Gugger ***@***.***> wrote:\r\n> \r\n> \r\n> Indeed, do you want to open a PR with a fix?\r\n> \r\n> —\r\n> Reply to this email directly, view it on GitHub, or unsubscribe.\r\n> You are receiving this because you authored the thread.\r\n",
"The doc example should use pop instead of get, yes."
] | 1,692 | 1,692 | 1,692 |
CONTRIBUTOR
| null |
### System Info
```
- `adapter-transformers` version: 3.2.1
- Platform: Windows-10-10.0.19045-SP0
- Python version: 3.10.9
- Huggingface_hub version: 0.14.1
- PyTorch version (GPU?): 2.0.1+cpu (False)
- Tensorflow version (GPU?): 2.12.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes
```
Code is run in a remote server
### Who can help?
@sgugger
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
This is the code to subclass `Trainer`
```python
import torch
from torch import nn
from transformers import Trainer

class CustomTrainer(Trainer):
def compute_loss(self, model, inputs, return_outputs=False):
labels = inputs.get("labels")
# forward pass
outputs = model(**inputs)
logits = outputs.get("logits")
# compute custom loss (suppose one has 3 labels with different weights)
loss_fct = nn.CrossEntropyLoss(weight=torch.tensor([1.0, 2.0, 3.0], device=model.device))
loss = loss_fct(logits.view(-1, self.model.config.num_labels), labels.view(-1))
return (loss, outputs) if return_outputs else loss
```
### Expected behavior
However, `labels = inputs.get("labels")` should become `labels = inputs.pop("labels")`, because in Hugging Face models, as long as the `labels` key is in `inputs`, the model will calculate the loss separately. See this example from `RobertaForSequenceClassification`.
```python
if labels is not None:
if self.config.problem_type is None:
if self.num_labels == 1:
self.config.problem_type = "regression"
elif self.num_labels > 1 and (labels.dtype == torch.long or labels.dtype == torch.int):
self.config.problem_type = "single_label_classification"
else:
self.config.problem_type = "multi_label_classification"
if self.config.problem_type == "regression":
loss_fct = MSELoss()
if self.num_labels == 1:
loss = loss_fct(logits.squeeze(), labels.squeeze())
else:
loss = loss_fct(logits, labels)
elif self.config.problem_type == "single_label_classification":
loss_fct = CrossEntropyLoss()
loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1))
elif self.config.problem_type == "multi_label_classification":
loss_fct = BCEWithLogitsLoss()
loss = loss_fct(logits, labels)
```
Which part of the `Trainer` overrides the code above?
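For reference, a minimal sketch of the suggested variant (popping the labels so the model's own forward pass does not also compute a loss); this is illustrative, not the official documentation example:
```python
import torch
from torch import nn
from transformers import Trainer

class CustomTrainer(Trainer):
    def compute_loss(self, model, inputs, return_outputs=False):
        # pop() removes "labels" from the inputs, so the model's forward pass
        # does not also compute its built-in loss.
        labels = inputs.pop("labels")
        outputs = model(**inputs)
        logits = outputs.get("logits")
        loss_fct = nn.CrossEntropyLoss(weight=torch.tensor([1.0, 2.0, 3.0], device=model.device))
        loss = loss_fct(logits.view(-1, self.model.config.num_labels), labels.view(-1))
        return (loss, outputs) if return_outputs else loss
```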
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25542/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25542/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25541
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25541/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25541/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25541/events
|
https://github.com/huggingface/transformers/issues/25541
| 1,853,384,197 |
I_kwDOCUB6oc5ueGYF
| 25,541 |
can't generate text of given length
|
{
"login": "KeremZaman",
"id": 8274752,
"node_id": "MDQ6VXNlcjgyNzQ3NTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8274752?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/KeremZaman",
"html_url": "https://github.com/KeremZaman",
"followers_url": "https://api.github.com/users/KeremZaman/followers",
"following_url": "https://api.github.com/users/KeremZaman/following{/other_user}",
"gists_url": "https://api.github.com/users/KeremZaman/gists{/gist_id}",
"starred_url": "https://api.github.com/users/KeremZaman/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KeremZaman/subscriptions",
"organizations_url": "https://api.github.com/users/KeremZaman/orgs",
"repos_url": "https://api.github.com/users/KeremZaman/repos",
"events_url": "https://api.github.com/users/KeremZaman/events{/privacy}",
"received_events_url": "https://api.github.com/users/KeremZaman/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Pinging @gante! ",
"Hey @KeremZaman 👋 \r\n\r\nWhile we can't touch the line that sets `eos_token_id` to the default `self.generation_config.eos_token_id`, for backwards compatibility reasons, you can set `eos_token_id` to an impossible value to achieve the same goal :) \r\n\r\ne.g.:\r\n\r\n```python\r\nfrom transformers import AutoTokenizer, GenerationConfig, AutoModelForSeq2SeqLM\r\n\r\ntokenizer = AutoTokenizer.from_pretrained('t5-small')\r\nmodel = AutoModelForSeq2SeqLM.from_pretrained('t5-small')\r\n\r\ngeneration_config = GenerationConfig(\r\n bos_token_id=tokenizer.eos_token_id,\r\n eos_token_id=-1,\r\n pad_token=model.config.pad_token_id,\r\n num_beams=1,\r\n do_sample=False\r\n )\r\n\r\ninputs = tokenizer(\"It's what it is.\", return_tensors=\"pt\")\r\nlength = inputs['input_ids'].shape[0] + 20\r\nprint(length)\r\noutputs = model.generate(**inputs, generation_config=generation_config, max_length=length)\r\n\r\nprint(outputs.shape)\r\n# torch.Size([1, 21])\r\n```",
"@gante That's a very clever workaround! Thanks for the quick response!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,692 | 1,695 | 1,695 |
NONE
| null |
### System Info
Python 3.7.13
transformers 4.30.2
Ubuntu 18.04.6 LTS
### Who can help?
@ArthurZucker @younesbelkada @gante
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
In the generation config, `eos_token_id` is optional, so when it is unset we expect the model to generate text of length `max_length`.
Example code:
```
from transformers import AutoTokenizer, GenerationConfig, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained('t5-small')
model = AutoModelForSeq2SeqLM.from_pretrained('t5-small')
generation_config = GenerationConfig(
bos_token_id=tokenizer.eos_token_id,
eos_token_id=None,
pad_token=model.config.pad_token_id,
num_beams=1,
do_sample=False
)
inputs = tokenizer("It's what it is.", return_tensors="pt")
length = inputs['input_ids'].shape[0] + 20
print(length)
outputs = model.generate(**inputs, generation_config=generation_config, max_length=length)
print(outputs.shape)
```
Sample output:
```
21
torch.Size([1, 14])
```
This happens because the model produces the EOS token well before it reaches the 21st token.
### Expected behavior
The expected output is as follows:
```
21
torch.Size([1, 21])
```
This probably occurs because of the following line in the `greedy_search` function:
https://github.com/huggingface/transformers/blob/66fd3a8d626a32989f4569260db32785c6cbf42a/src/transformers/generation/utils.py#L2290
It falls back to `self.generation_config.eos_token_id` when the given one is `None`, and the generation configuration passed to `generate` is never assigned to `self.generation_config`. Since the default value of `self.generation_config.eos_token_id` is not `None`, the model stops at the EOS token instead of honoring the requested length.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25541/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25541/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25540
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25540/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25540/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25540/events
|
https://github.com/huggingface/transformers/pull/25540
| 1,853,368,957 |
PR_kwDOCUB6oc5YElVC
| 25,540 |
More frozen args
|
{
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,692 | 1,692 | 1,692 |
CONTRIBUTOR
| null |
# What does this PR do?
Noticed some failures in the Accelerate nightly related to DeepSpeed (see [here](https://github.com/huggingface/accelerate/actions/runs/5877296024/job/15937618182)): the hyperparameter search needs to be able to modify the `training_arguments` (which makes sense, since it's doing HPS). As a result, this PR manually sets/unsets `_frozen` during HPS. I'm not too worried about the pattern since the function is already "private".
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger @pacman100
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25540/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25540/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25540",
"html_url": "https://github.com/huggingface/transformers/pull/25540",
"diff_url": "https://github.com/huggingface/transformers/pull/25540.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25540.patch",
"merged_at": 1692202791000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25539
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25539/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25539/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25539/events
|
https://github.com/huggingface/transformers/pull/25539
| 1,853,310,099 |
PR_kwDOCUB6oc5YEYmD
| 25,539 |
Generate: fix default max length warning
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"@ArthurZucker Good catch! Added there as well 👍 ",
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,692 | 1,692 | 1,692 |
MEMBER
| null |
# What does this PR do?
Fixes the generate max length warning -- it should be emitted when `max_length == 20`, corresponding to the model-agnostic default value.
A test is also added to ensure we don't regress.
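As a quick illustration of the expected behaviour (a sketch; the exact warning text can vary between versions):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("Hello", return_tensors="pt")
# No max_length / max_new_tokens given, so the model-agnostic default of 20 applies
# and the default-max-length warning should be emitted here.
model.generate(**inputs)
```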
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25539/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25539/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25539",
"html_url": "https://github.com/huggingface/transformers/pull/25539",
"diff_url": "https://github.com/huggingface/transformers/pull/25539.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25539.patch",
"merged_at": 1692196254000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25538
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25538/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25538/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25538/events
|
https://github.com/huggingface/transformers/issues/25538
| 1,853,242,935 |
I_kwDOCUB6oc5udj43
| 25,538 |
MusicGen small model is not using GPU
|
{
"login": "mepc36",
"id": 16109633,
"node_id": "MDQ6VXNlcjE2MTA5NjMz",
"avatar_url": "https://avatars.githubusercontent.com/u/16109633?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mepc36",
"html_url": "https://github.com/mepc36",
"followers_url": "https://api.github.com/users/mepc36/followers",
"following_url": "https://api.github.com/users/mepc36/following{/other_user}",
"gists_url": "https://api.github.com/users/mepc36/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mepc36/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mepc36/subscriptions",
"organizations_url": "https://api.github.com/users/mepc36/orgs",
"repos_url": "https://api.github.com/users/mepc36/repos",
"events_url": "https://api.github.com/users/mepc36/events{/privacy}",
"received_events_url": "https://api.github.com/users/mepc36/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"A reproducer on which code you are using is required according to the contribution guidelines. Otherwise yes, you are the onde that should specify the device. ",
"Sorry about that @ArthurZucker , I just added the code I'm using to generate audio.\r\n\r\nSo I just re-wrote the reproduction code by trying to pass a `device='cuda'` argument like this...\r\n\r\n```\r\nmodel = MusicgenForConditionalGeneration.from_pretrained(\"facebook/musicgen-small\", device='cuda')\r\n```\r\n\r\n...but got the following `unexpected keyword argument` error when doing so:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/opt/homebrew/Caskroom/miniforge/base/envs/riffusion/lib/python3.9/site-packages/flask/app.py\", line 2528, in wsgi_app\r\n response = self.full_dispatch_request()\r\n File \"/opt/homebrew/Caskroom/miniforge/base/envs/riffusion/lib/python3.9/site-packages/flask/app.py\", line 1825, in full_dispatch_request\r\n rv = self.handle_user_exception(e)\r\n File \"/opt/homebrew/Caskroom/miniforge/base/envs/riffusion/lib/python3.9/site-packages/flask_cors/extension.py\", line 165, in wrapped_function\r\n return cors_after_request(app.make_response(f(*args, **kwargs)))\r\n File \"/opt/homebrew/Caskroom/miniforge/base/envs/riffusion/lib/python3.9/site-packages/flask/app.py\", line 1823, in full_dispatch_request\r\n rv = self.dispatch_request()\r\n File \"/opt/homebrew/Caskroom/miniforge/base/envs/riffusion/lib/python3.9/site-packages/flask/app.py\", line 1799, in dispatch_request\r\n return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)\r\n File \"/Users/martinconnor/Desktop/rapbot-riffusion/riffusion/server.py\", line 269, in infer_musicgen\r\n model = MusicgenForConditionalGeneration.from_pretrained(\"facebook/musicgen-small\", device='cuda')\r\n File \"/opt/homebrew/Caskroom/miniforge/base/envs/riffusion/lib/python3.9/site-packages/transformers/models/musicgen/modeling_musicgen.py\", line 1599, in from_pretrained\r\n return super().from_pretrained(pretrained_model_name_or_path, *model_args, **kwargs)\r\n File \"/opt/homebrew/Caskroom/miniforge/base/envs/riffusion/lib/python3.9/site-packages/transformers/modeling_utils.py\", line 2700, in from_pretrained\r\n model = cls(config, *model_args, **model_kwargs)\r\nTypeError: __init__() got an unexpected keyword argument 'device'\r\n```\r\n\r\nSo then I tried passing the `device` argument to the processor, like so...\r\n\r\n```\r\n processor = AutoProcessor.from_pretrained(\"facebook/musicgen-small\", device='cuda')\r\n```\r\n\r\n...and that did not error, but looking through the specs for the `from_pretrained` function definition, I think it just ignored the `device` argument.\r\n\r\nAnyone help you can provide would be greatly appreciated, thanks again! And sorry for the crappy initial bug report, that was just plain laziness on my part. My apologies",
"Inviting you to ask this on the forum, as there are some similar [questions](https://discuss.huggingface.co/t/is-transformers-using-gpu-by-default/8500).\r\nTLDR; the device cannot be specify this way you need to call `module.to('cuda')`. The device argument is for the `pipelines`. ",
"Thanks @ArthurZucker ! I got it working. Here is my reproduction code with a fix so that it uses GPU. You're right, call `.to('cuda')` on both the model AND the inputs got it working:\r\n\r\n```\r\nfrom transformers import AutoProcessor, MusicgenForConditionalGeneration\r\n\r\nprompt = '80s EDM music'\r\nprocessor = AutoProcessor.from_pretrained(\"facebook/musicgen-small\")\r\nmodel = MusicgenForConditionalGeneration.from_pretrained(\"facebook/musicgen-small\")\r\nmodel = model.to('cuda:0')\r\n\r\ninputs = processor(\r\n text=[prompt],\r\n padding=True,\r\n return_tensors=\"pt\"\r\n)\r\ninputs = inputs.to('cuda:0')\r\n\r\naudio_values = model.generate(**inputs, max_new_tokens=256)\r\nprint('audio_values:', audio_values)\r\n```\r\n\r\nClosing! Thanks again."
] | 1,692 | 1,692 | 1,692 |
NONE
| null |
### System Info
```
root@redacted:/# transformers-cli env
Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points.
- `transformers` version: 4.31.0
- Platform: Linux-5.4.0-156-generic-x86_64-with-glibc2.17
- Python version: 3.8.12
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.2
- Accelerate version: 0.21.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: trying to, but it's not working
- Using distributed or parallel set-up in script?: no
```
I'm trying to use the MusicGen model on a GPU. Here is the output of `nvidia-smi` during the run; as you can see, GPU utilization is not increasing:
```
root@85795c5b68bd:/# nvidia-smi -l 1
Wed Aug 16 13:22:55 2023
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.86.05 Driver Version: 535.86.05 CUDA Version: 12.2 |
|-----------------------------------------+----------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+======================+======================|
| 0 NVIDIA RTX A4500 On | 00000000:C1:00.0 Off | Off |
| 30% 29C P8 13W / 200W | 5533MiB / 20470MiB | 0% Default |
| | | N/A |
+-----------------------------------------+----------------------+----------------------+
+---------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=======================================================================================|
+---------------------------------------------------------------------------------------+
Wed Aug 16 13:22:56 2023
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.86.05 Driver Version: 535.86.05 CUDA Version: 12.2 |
|-----------------------------------------+----------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+======================+======================|
| 0 NVIDIA RTX A4500 On | 00000000:C1:00.0 Off | Off |
| 30% 29C P8 13W / 200W | 5533MiB / 20470MiB | 0% Default |
| | | N/A |
+-----------------------------------------+----------------------+----------------------+
+---------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=======================================================================================|
+---------------------------------------------------------------------------------------+
```
I'm wondering how I can get the MusicGen small model to utilize the GPU. Do I have to pass `device='cuda'` as an argument somewhere?
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
The machine I'm testing on has the following specs:
```
1 x RTX A4500
12 vCPU 62 GB RAM
```
Here is the code I'm using to generate audio:
```
from transformers import AutoProcessor, MusicgenForConditionalGeneration

prompt = "80s EDM music"  # example value; not defined in the original snippet
max_new_tokens = 256  # example value; not defined in the original snippet
processor = AutoProcessor.from_pretrained("facebook/musicgen-small")
model = MusicgenForConditionalGeneration.from_pretrained("facebook/musicgen-small")
inputs = processor(
text=[prompt],
padding=True,
return_tensors="pt",
)
audio_values = model.generate(**inputs, max_new_tokens=max_new_tokens)
```
### Expected behavior
I expect the GPU to be utilized.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25538/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25538/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25537
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25537/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25537/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25537/events
|
https://github.com/huggingface/transformers/issues/25537
| 1,853,227,332 |
I_kwDOCUB6oc5udgFE
| 25,537 |
ValueError: Unrecognized configuration class <class>
|
{
"login": "innat",
"id": 17668390,
"node_id": "MDQ6VXNlcjE3NjY4Mzkw",
"avatar_url": "https://avatars.githubusercontent.com/u/17668390?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/innat",
"html_url": "https://github.com/innat",
"followers_url": "https://api.github.com/users/innat/followers",
"following_url": "https://api.github.com/users/innat/following{/other_user}",
"gists_url": "https://api.github.com/users/innat/gists{/gist_id}",
"starred_url": "https://api.github.com/users/innat/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/innat/subscriptions",
"organizations_url": "https://api.github.com/users/innat/orgs",
"repos_url": "https://api.github.com/users/innat/repos",
"events_url": "https://api.github.com/users/innat/events{/privacy}",
"received_events_url": "https://api.github.com/users/innat/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hey! This is expected, the `DebertaV2ForMultipleChoice` class is not implemented for `tensorflow`. You should check the documentation for [Debertav2](https://huggingface.co/docs/transformers/model_doc/deberta-v2), which documents classes that are available.",
"Is there any plan to add it?\r\n\r\n",
"Not really on our side no. But taking inspiration on what is done for the models listed : \r\n```python\r\nAlbertConfig, BertConfig, CamembertConfig, ConvBertConfig, DistilBertConfig, \r\nElectraConfig, FlaubertConfig, FunnelConfig, LongformerConfig, MobileBertConfig, MPNetConfig, RemBertConfig, \r\nRobertaConfig, RobertaPreLayerNormConfig, RoFormerConfig, XLMConfig, XLMRobertaConfig, XLNetConfig.\r\n```\r\nshould be pretty straightforward to implement! 🤗 feel free to open a PR ! "
] | 1,692 | 1,693 | 1,693 |
NONE
| null |
### System Info
```
transformers v4.31.0
```
### Who can help?
@sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I tried to load `'microsoft/deberta-v3-large'` using the AutoModelForMultipleChoice and TFAutoModelForMultipleChoice classes, but I got an error with the TF* class.
```python
from transformers import AutoModelForMultipleChoice
from transformers import TFAutoModelForMultipleChoice
deberta_v3_large = 'microsoft/deberta-v3-large'
# OK
torch_model = AutoModelForMultipleChoice.from_pretrained(deberta_v3_large)
# NOT OK
tf_model = TFAutoModelForMultipleChoice.from_pretrained(deberta_v3_large)
```
Error logs
```
╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ in <module>:1 │
│ │
│ ❱ 1 tf_model = TFAutoModelForMultipleChoice.from_pretrained(deberta_v3_large) │
│ 2 │
│ │
│ /opt/conda/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py:496 in │
│ from_pretrained │
│ │
│ 493 │ │ │ return model_class.from_pretrained( │
│ 494 │ │ │ │ pretrained_model_name_or_path, *model_args, config=config, **hub_kwargs, │
│ 495 │ │ │ ) │
│ ❱ 496 │ │ raise ValueError( │
│ 497 │ │ │ f"Unrecognized configuration class {config.__class__} for this kind of AutoM │
│ 498 │ │ │ f"Model type should be one of {', '.join(c.__name__ for c in cls._model_mapp │
│ 499 │ │ ) │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
ValueError: Unrecognized configuration class <class
'transformers.models.deberta_v2.configuration_deberta_v2.DebertaV2Config'> for this kind of AutoModel:
TFAutoModelForMultipleChoice.
Model type should be one of AlbertConfig, BertConfig, CamembertConfig, ConvBertConfig, DistilBertConfig,
ElectraConfig, FlaubertConfig, FunnelConfig, LongformerConfig, MobileBertConfig, MPNetConfig, RemBertConfig,
RobertaConfig, RobertaPreLayerNormConfig, RoFormerConfig, XLMConfig, XLMRobertaConfig, XLNetConfig.
```
### Expected behavior
They should work in the same manner!
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25537/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25537/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25536
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25536/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25536/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25536/events
|
https://github.com/huggingface/transformers/issues/25536
| 1,853,198,488 |
I_kwDOCUB6oc5udZCY
| 25,536 |
Not able to import T5WithLMHeadModel library from transformers
|
{
"login": "SameepPanigrahi",
"id": 59465094,
"node_id": "MDQ6VXNlcjU5NDY1MDk0",
"avatar_url": "https://avatars.githubusercontent.com/u/59465094?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SameepPanigrahi",
"html_url": "https://github.com/SameepPanigrahi",
"followers_url": "https://api.github.com/users/SameepPanigrahi/followers",
"following_url": "https://api.github.com/users/SameepPanigrahi/following{/other_user}",
"gists_url": "https://api.github.com/users/SameepPanigrahi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SameepPanigrahi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SameepPanigrahi/subscriptions",
"organizations_url": "https://api.github.com/users/SameepPanigrahi/orgs",
"repos_url": "https://api.github.com/users/SameepPanigrahi/repos",
"events_url": "https://api.github.com/users/SameepPanigrahi/events{/privacy}",
"received_events_url": "https://api.github.com/users/SameepPanigrahi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hey! This model is very very old, and the `T5WithLMHeadModel` was renamed to `T5ForConditionalGeneration`, but for backward compatibility it's not the case here. Now if you want to load a model you should be using `AutoModel` and not the shared script. ",
"Hi I have tried with the AutoModel to load the \"t5-3b\" library. Its able to load the model successfully but in our case we are using pipeline for the model inference. So while doing the inference I am getting the below error. Can you please suggest an approach to make the code up and running \r\nAttaching the code for your reference\r\n\r\nfrom transformers import AutoModel, AutoTokenizer, pipeline\r\nimport transformers\r\nmodel = AutoModel.from_pretrained(\"t5-3b\")\r\ntokenizer = AutoTokenizer.from_pretrained(\"t5-3b\")\r\npipeline = pipeline(task=\"translation\", model=model, tokenizer=tokenizer)\r\npipeline(\"Make an offer\")\r\n\r\nGetting the below error while using this \r\nTypeError: The current model class (T5Model) is not compatible with `.generate()`\r\n",
"Hey @SameepPanigrahi 👋 \r\n\r\nWhen you come across basic usage issues, try to check the [model card](https://huggingface.co/t5-3b) and, if it is a `transformers` model, the [model doc page](https://huggingface.co/docs/transformers/model_doc/t5#t5). Check the examples -- there you will find the information you need to use the model.\r\n\r\n(I'm intentionally not giving the answer here, since GitHub issues should only be used after some exploration on the user end :) )",
"Hi @gante \r\nI am trying to run 40 different model for different task from a single code script which I have created. So for that I am trying to load the library from which I need to load the model with this\r\n\r\nmodel_detail = AutoConfig.from_pretrained(\"t5-3b\")\r\nmodel_library_name = model_detail.to_dict()[\"architectures\"][0]\r\nmodel_library = getattr(transformers, model_library_name)\r\n\r\nthen I am loading the model with the help of the model_library and doing a local inference using the pipeline . Maximum number of models are working with this approach but for t5-3b its failing . If its a single model then I can use the example mentioned the documentation page but in my case I have to run 40 model with the same code . So in such cases can you please suggest an approach to make this t5-3b model success ",
"That field in the config holds the class that was used at model save time, so it might get stale. Ideally, that same class would load the model, as you're trying to do. However, we don't have a solution that programmatically solves all cases at load time.\r\n\r\nAn alternative could be some form of custom if/else, based on the Hub tags, if using the original class fails. E.g. `t5-3b` has the `text2text-generation` tag, which means the `AutoModelForSeq2Seq` class should be able to load it correctly."
] | 1,692 | 1,692 | 1,692 |
NONE
| null |
### System Info
transformers version :- 4.31.0
Python version :- 3.10
### Who can help?
@gante @ArthurZucker @sgugger
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
import transformers
from transformers import AutoConfig

model_detail = AutoConfig.from_pretrained("t5-3b")
model_library_name = model_detail.to_dict()["architectures"][0]
model_library = getattr(transformers, model_library_name)
```
### Expected behavior
It should be able to load the `T5WithLMHeadModel` class from `transformers`. Currently it gives me:
`AttributeError: module transformers has no attribute T5WithLMHeadModel`
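As a rough illustration of the kind of fallback discussed in the comments above (not an official API), one can try the class recorded in `config.architectures` and fall back to a task-specific Auto class when it is missing:
```python
import transformers
from transformers import AutoConfig, AutoModelForSeq2SeqLM

model_name = "t5-3b"
config = AutoConfig.from_pretrained(model_name)
architecture = (config.architectures or [None])[0]

model_cls = getattr(transformers, architecture, None) if architecture else None
if model_cls is None:
    # "T5WithLMHeadModel" is a legacy name that no longer exists in transformers;
    # the checkpoint is tagged text2text-generation, so AutoModelForSeq2SeqLM can load it.
    model_cls = AutoModelForSeq2SeqLM

model = model_cls.from_pretrained(model_name)
```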
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25536/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25536/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25535
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25535/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25535/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25535/events
|
https://github.com/huggingface/transformers/pull/25535
| 1,853,001,910 |
PR_kwDOCUB6oc5YDVME
| 25,535 |
Layout lmre
|
{
"login": "yang0369",
"id": 41265511,
"node_id": "MDQ6VXNlcjQxMjY1NTEx",
"avatar_url": "https://avatars.githubusercontent.com/u/41265511?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yang0369",
"html_url": "https://github.com/yang0369",
"followers_url": "https://api.github.com/users/yang0369/followers",
"following_url": "https://api.github.com/users/yang0369/following{/other_user}",
"gists_url": "https://api.github.com/users/yang0369/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yang0369/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yang0369/subscriptions",
"organizations_url": "https://api.github.com/users/yang0369/orgs",
"repos_url": "https://api.github.com/users/yang0369/repos",
"events_url": "https://api.github.com/users/yang0369/events{/privacy}",
"received_events_url": "https://api.github.com/users/yang0369/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[] | 1,692 | 1,692 | 1,692 |
NONE
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25535/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25535/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25535",
"html_url": "https://github.com/huggingface/transformers/pull/25535",
"diff_url": "https://github.com/huggingface/transformers/pull/25535.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25535.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/25534
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25534/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25534/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25534/events
|
https://github.com/huggingface/transformers/pull/25534
| 1,852,897,210 |
PR_kwDOCUB6oc5YC-dY
| 25,534 |
Add documentation to dynamic module utils
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,692 | 1,692 | 1,692 |
COLLABORATOR
| null |
# What does this PR do?
This PR adds some more documentation to `dynamic_module_utils` as type annotations and docstrings.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25534/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25534/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25534",
"html_url": "https://github.com/huggingface/transformers/pull/25534",
"diff_url": "https://github.com/huggingface/transformers/pull/25534.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25534.patch",
"merged_at": 1692253686000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25533
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25533/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25533/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25533/events
|
https://github.com/huggingface/transformers/pull/25533
| 1,852,856,180 |
PR_kwDOCUB6oc5YC1m7
| 25,533 |
Fix nested configs of Jukebox
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,692 | 1,692 | 1,692 |
COLLABORATOR
| null |
# What does this PR do?
Fixes the serialization of the Jukebox config. It has a list of configs as an attribute, which is why the default method fails here.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25533/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25533/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25533",
"html_url": "https://github.com/huggingface/transformers/pull/25533",
"diff_url": "https://github.com/huggingface/transformers/pull/25533.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25533.patch",
"merged_at": 1692179304000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25532
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25532/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25532/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25532/events
|
https://github.com/huggingface/transformers/pull/25532
| 1,852,715,727 |
PR_kwDOCUB6oc5YCXFc
| 25,532 |
Add Blip2 model in VQA pipeline
|
{
"login": "jpizarrom",
"id": 111236,
"node_id": "MDQ6VXNlcjExMTIzNg==",
"avatar_url": "https://avatars.githubusercontent.com/u/111236?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jpizarrom",
"html_url": "https://github.com/jpizarrom",
"followers_url": "https://api.github.com/users/jpizarrom/followers",
"following_url": "https://api.github.com/users/jpizarrom/following{/other_user}",
"gists_url": "https://api.github.com/users/jpizarrom/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jpizarrom/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jpizarrom/subscriptions",
"organizations_url": "https://api.github.com/users/jpizarrom/orgs",
"repos_url": "https://api.github.com/users/jpizarrom/repos",
"events_url": "https://api.github.com/users/jpizarrom/events{/privacy}",
"received_events_url": "https://api.github.com/users/jpizarrom/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @amyeroberts and @younesbelkada ",
"Hi @amyeroberts and @younesbelkada, this PR is ready for review. Could you please take a look? Thanks :) ",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25532). All of your documentation changes will be reflected on that endpoint.",
"> LGTM.\r\n> \r\n> I'm not a big fan of using the names of the configs directly to detect if generative or not, I feel like using the `ForXXX` should be a better hint.\r\n> \r\n> We also used `model.can_generate()` as a hint in other pipelines.\r\n> \r\n> Pinging @ylacombe who used that flag. (Just FYI no need to do anything).\r\n\r\n@Narsil, Thanks a lot for your feedback :)\r\n\r\nAt the moment model.can_generate() return False for Blip2ForConditionalGeneration, that is the reason why I was following the proposal of this non merged PR https://github.com/huggingface/transformers/pull/23348/files#diff-620bada7977c3d0040ed961581379598e53a9ef02fdbb26c570cac738c279c0eR64\r\n\r\nMaybe could it be expected that can_generate method returns True for Blip2ForConditionalGeneration? if this is the case, we could use it. (i will take a look on it)\r\n\r\ncan_generate returns True for another model, does it make sense to do this in Blip2ForConditionalGeneration, or it could affect something else?\r\nhttps://github.com/huggingface/transformers/blob/50573c648ae953dcc1b94d663651f07fb02268f4/src/transformers/models/speecht5/modeling_speecht5.py#L2782-L2787\r\n\r\n\r\n\r\n\r\n",
"I'm not the best person to comment on how `can_generate` works and what should or shouldn't be done.\r\n\r\nThe main thing about pipeline:\r\n\r\n- They should try to be model agnostic as much as possible so when newer models come in they work out of the box.\r\n\r\nBut the current code is acceptable.",
"Generalizable solution here ☝️ ",
"Feel free to tweet/linkedin about it @jpizarrom and we'll amplify :)",
"I use the newest library of transformers,but it still reports \"The model 'Blip2ForConditionalGeneration' is not supported for vqa. Supported models are ['ViltForQuestionAnswering'].\",so how can I use blip2 in pipeline to deal with the vqa task?Are there any test codes of BLIP2 in pipeline?"
] | 1,692 | 1,705 | 1,693 |
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Add Blip2ForConditionalGeneration model in VisualQuestionAnsweringPipeline.
Fixes part of #21110 and is based on #23348 and #21227.
## Who can review?
Hi @NielsRogge what do you think of this??
Thanks!
## TODOs
- [x] add Blip2 model in VQA pipeline
- [x] use require_torch_gpu in test
- [x] use can_generate in vqa pipeline for Blip2ForConditionalGeneration
- [x] use float16 in the test_large_model_pt_blip2
- [ ] check if it is necessary to cast the input in torch.float16 inside _forward
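
A rough usage sketch of what this PR enables (the checkpoint, image URL, and dtype/device settings below are illustrative, mirroring the float16 GPU setup used in the test):

```python
import torch
from transformers import pipeline

# Load the VQA pipeline with a BLIP-2 checkpoint; assumes a CUDA GPU is available.
vqa = pipeline(
    "visual-question-answering",
    model="Salesforce/blip2-opt-2.7b",
    torch_dtype=torch.float16,
    device=0,
)

result = vqa(
    image="http://images.cocodataset.org/val2017/000000039769.jpg",
    question="How many cats are there?",
)
print(result)
```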
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25532/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25532/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25532",
"html_url": "https://github.com/huggingface/transformers/pull/25532",
"diff_url": "https://github.com/huggingface/transformers/pull/25532.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25532.patch",
"merged_at": 1693401377000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25531
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25531/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25531/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25531/events
|
https://github.com/huggingface/transformers/pull/25531
| 1,852,660,307 |
PR_kwDOCUB6oc5YCLH0
| 25,531 |
Add Llama2 resources
|
{
"login": "wonhyeongseo",
"id": 29195190,
"node_id": "MDQ6VXNlcjI5MTk1MTkw",
"avatar_url": "https://avatars.githubusercontent.com/u/29195190?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wonhyeongseo",
"html_url": "https://github.com/wonhyeongseo",
"followers_url": "https://api.github.com/users/wonhyeongseo/followers",
"following_url": "https://api.github.com/users/wonhyeongseo/following{/other_user}",
"gists_url": "https://api.github.com/users/wonhyeongseo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wonhyeongseo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wonhyeongseo/subscriptions",
"organizations_url": "https://api.github.com/users/wonhyeongseo/orgs",
"repos_url": "https://api.github.com/users/wonhyeongseo/repos",
"events_url": "https://api.github.com/users/wonhyeongseo/events{/privacy}",
"received_events_url": "https://api.github.com/users/wonhyeongseo/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25531). All of your documentation changes will be reflected on that endpoint.",
"Hello everyone,\r\n\r\nWe've been working on organizing resources related to LLaMA2, and we've noticed that many of the resources intersect category-wise. We're trying to ensure that the documentation is intuitive and helpful for users. Could anyone provide suggestions or best practices on how to best arrange these resources?\r\n\r\nAdditionally, after conducting a full-text search on the repository, we couldn't locate any notebooks specifically related to LLaMA2. If anyone is aware of such notebooks or has worked on one, could you kindly point us in the right direction?\r\n\r\nThank you in advance for your assistance!\r\n\r\nBest regards,\r\n@wonhyeongseo and @jungnerd",
"There are quite a few ressources that are indeed not properly linked. \r\nHere is one notebook: https://colab.research.google.com/drive/1PEQyJO1-f6j0S_XJ8DV50NkpzasXkrzd?usp=sharing%E2%80%A6 that has been shared + updated by a contributor! \r\n",
"Thank you for the pointer @ArthurZucker! With your guidance, we found googling `site:https://colab.research.google.com transformers llama2` to be quite effective and useful.\r\n\r\nWe have a couple of questions:\r\n- Would it be okay to add Llama2 to the [text-generation script](https://github.com/huggingface/transformers/blob/36f183ebab1261b388739d628aaa0b4150068df0/examples/pytorch/text-generation/run_generation.py#L40) and update the [llm tutorial example](https://github.com/huggingface/transformers/blob/36f183ebab1261b388739d628aaa0b4150068df0/docs/source/en/llm_tutorial.md?plain=1#L77) to Llama2? (I feel that since llama2 is still gated, we cannot at this time)\r\n- How would you like us to treat [spaces like this one](https://huggingface.co/TheBloke/llama-2-7B-Guanaco-QLoRA-GPTQ) for the resource section?\r\n- Would it be ok to add LLaMA resources on top of Llama2 as they are the [same family like GPT](https://github.com/huggingface/transformers/pull/20084#pullrequestreview-1179625719)? (cc. @stevhliu )\r\n > Nice, thank you for adding those. Now you just have a few more to go! 😁\r\n > \r\n > Take a look at the [OpenAI GPT2 resources page](https://huggingface.co/docs/transformers/main/en/model_doc/gpt2#resources) and feel free to add over whatever is missing here since the usage for OpenAI GPT is practically the same.\r\n\r\nWe will soon cover more models that are \"mainstream\" in Korea, translate the docs and contribute localized blogs and notebooks. Sorry for being so late on schedule.\r\n \r\nThank you so much for your friendly and honest guidance, hope we can have a call sometime.\r\n\r\nBest regards,\r\nWon",
"May you please review this PR, @stevhliu ? Thanks a ton for your help!",
"> Awesome, once we fix the tiny typo we're ready to merge!\r\n\r\nDone and dusted. Thank you so much for your help @stevhliu ! Looking forward to our Q&A session!!"
] | 1,692 | 1,692 | 1,692 |
CONTRIBUTOR
| null |
Co-authored-by: @jungnerd @kihoon71
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Adds resources of Llama2 according to [this issue](https://github.com/huggingface/transformers/issues/20055).
This PR serves as an example to our OSSCA mentees who will contribute more models.
Part of #20055
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@stevhliu, may you please review this PR?
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25531/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25531/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25531",
"html_url": "https://github.com/huggingface/transformers/pull/25531",
"diff_url": "https://github.com/huggingface/transformers/pull/25531.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25531.patch",
"merged_at": 1692749694000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25530
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25530/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25530/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25530/events
|
https://github.com/huggingface/transformers/pull/25530
| 1,852,493,849 |
PR_kwDOCUB6oc5YBnUr
| 25,530 |
TransformerM for Graph Classification
|
{
"login": "rudongyu",
"id": 16982108,
"node_id": "MDQ6VXNlcjE2OTgyMTA4",
"avatar_url": "https://avatars.githubusercontent.com/u/16982108?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rudongyu",
"html_url": "https://github.com/rudongyu",
"followers_url": "https://api.github.com/users/rudongyu/followers",
"following_url": "https://api.github.com/users/rudongyu/following{/other_user}",
"gists_url": "https://api.github.com/users/rudongyu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rudongyu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rudongyu/subscriptions",
"organizations_url": "https://api.github.com/users/rudongyu/orgs",
"repos_url": "https://api.github.com/users/rudongyu/repos",
"events_url": "https://api.github.com/users/rudongyu/events{/privacy}",
"received_events_url": "https://api.github.com/users/rudongyu/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 5724035499,
"node_id": "LA_kwDOCUB6oc8AAAABVS3Zqw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Model%20on%20the%20Hub",
"name": "Model on the Hub",
"color": "9CA0E9",
"default": false,
"description": ""
}
] |
closed
| false | null |
[] |
[
"You should consider sharing your model via the [code on the Hub API](https://huggingface.co/docs/transformers/custom_models) which doesn't require a PR to Transformers. There are many things missing in the current draft (look at the [model addition guide](https://huggingface.co/docs/transformers/add_new_model) for pointers) and it would require less work to just put it on the Hub.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,692 | 1,695 | 1,695 |
NONE
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
This PR adds the Transformer-M model with DGL utilities for the graph classification task.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25530/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25530/timeline
| null | true |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25530",
"html_url": "https://github.com/huggingface/transformers/pull/25530",
"diff_url": "https://github.com/huggingface/transformers/pull/25530.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25530.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/25529
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25529/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25529/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25529/events
|
https://github.com/huggingface/transformers/issues/25529
| 1,852,457,848 |
I_kwDOCUB6oc5uakN4
| 25,529 |
RuntimeError: CUDA error: an illegal instruction was encountered
|
{
"login": "dongxu",
"id": 289812,
"node_id": "MDQ6VXNlcjI4OTgxMg==",
"avatar_url": "https://avatars.githubusercontent.com/u/289812?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dongxu",
"html_url": "https://github.com/dongxu",
"followers_url": "https://api.github.com/users/dongxu/followers",
"following_url": "https://api.github.com/users/dongxu/following{/other_user}",
"gists_url": "https://api.github.com/users/dongxu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dongxu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dongxu/subscriptions",
"organizations_url": "https://api.github.com/users/dongxu/orgs",
"repos_url": "https://api.github.com/users/dongxu/repos",
"events_url": "https://api.github.com/users/dongxu/events{/privacy}",
"received_events_url": "https://api.github.com/users/dongxu/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"That's an issue for this particular repo since they use code on the Hub, the model itself is not implemented in the Transformers library :-)",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,692 | 1,695 | 1,695 |
NONE
| null |
### System Info
When I run “AutoModelForCausalLM.from_pretrained” or the chat function, I sometimes get "RuntimeError: CUDA error: an illegal instruction was encountered", but sometimes it works. Does anyone have any ideas?
The error message is below:
> RuntimeError Traceback (most recent call last)
> Cell In[14], line 1
> ----> 1 model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen-7B-Chat", device_map="auto", trust_remote_code=True).eval()
>
> File ~\AppData\Roaming\Python\Python310\site-packages\transformers\models\auto\auto_factory.py:488, in _BaseAutoModelClass.from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
> 486 else:
> 487 cls.register(config.__class__, model_class, exist_ok=True)
> --> 488 return model_class.from_pretrained(
> 489 pretrained_model_name_or_path, *model_args, config=config, **hub_kwargs, **kwargs
> 490 )
> 491 elif type(config) in cls._model_mapping.keys():
> 492 model_class = _get_model_class(config, cls._model_mapping)
>
> File ~\AppData\Roaming\Python\Python310\site-packages\transformers\modeling_utils.py:2824, in PreTrainedModel.from_pretrained(cls, pretrained_model_name_or_path, config, cache_dir, ignore_mismatched_sizes, force_download, local_files_only, token, revision, use_safetensors, *model_args, **kwargs)
> 2819 logger.warn(
> 2820 "This model has some weights that should be kept in higher precision, you need to upgrade "
> 2821 "`accelerate` to properly deal with them (`pip install --upgrade accelerate`)."
> 2822 )
> 2823 if device_map != "sequential":
> -> 2824 max_memory = get_balanced_memory(
> 2825 model,
> 2826 dtype=target_dtype,
> 2827 low_zero=(device_map == "balanced_low_0"),
> 2828 max_memory=max_memory,
> 2829 **kwargs,
> 2830 )
> 2831 kwargs["max_memory"] = max_memory
> 2832 # Make sure tied weights are tied before creating the device map.
>
> File ~\AppData\Roaming\Python\Python310\site-packages\accelerate\utils\modeling.py:731, in get_balanced_memory(model, max_memory, no_split_module_classes, dtype, special_dtypes, low_zero)
> 703 """
> 704 Compute a `max_memory` dictionary for [`infer_auto_device_map`] that will balance the use of each available GPU.
> 705
> (...)
> 728 Transformers generate function).
> 729 """
> 730 # Get default / clean up max_memory
> --> 731 max_memory = get_max_memory(max_memory)
> 733 if not (torch.cuda.is_available() or is_xpu_available()) or is_mps_available():
> 734 return max_memory
>
> File ~\AppData\Roaming\Python\Python310\site-packages\accelerate\utils\modeling.py:624, in get_max_memory(max_memory)
> 622 if not is_xpu_available():
> 623 for i in range(torch.cuda.device_count()):
> --> 624 _ = torch.tensor([0], device=i)
> 625 max_memory = {i: torch.cuda.mem_get_info(i)[0] for i in range(torch.cuda.device_count())}
> 626 else:
>
> RuntimeError: CUDA error: an illegal instruction was encountered
> CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
> For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
> Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
just run the code below:
from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers.generation import GenerationConfig
# Note: The default behavior now has injection attack prevention off.
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen-7B-Chat", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen-7B-Chat", device_map="auto", trust_remote_code=True).eval()
# Specify hyperparameters for generation
model.generation_config = GenerationConfig.from_pretrained("Qwen/Qwen-7B-Chat", trust_remote_code=True)  # Different generation lengths, top_p, and other related hyperparameters can be specified
# 1st dialogue turn
response, history = model.chat(tokenizer, "你好", history=None)
print(response)
but it does not happen 100% of the time. Sometimes it works fine, sometimes not.
### Expected behavior
I just need the code to run smoothly and get a response. I would like to know the solution.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25529/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25529/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25528
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25528/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25528/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25528/events
|
https://github.com/huggingface/transformers/issues/25528
| 1,852,448,487 |
I_kwDOCUB6oc5uah7n
| 25,528 |
General Question - Is it possible to do MLM on models like MPT? If yes, could you share any resources or relevant codebase related to it?
|
{
"login": "Tarun3679",
"id": 34520609,
"node_id": "MDQ6VXNlcjM0NTIwNjA5",
"avatar_url": "https://avatars.githubusercontent.com/u/34520609?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Tarun3679",
"html_url": "https://github.com/Tarun3679",
"followers_url": "https://api.github.com/users/Tarun3679/followers",
"following_url": "https://api.github.com/users/Tarun3679/following{/other_user}",
"gists_url": "https://api.github.com/users/Tarun3679/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Tarun3679/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Tarun3679/subscriptions",
"organizations_url": "https://api.github.com/users/Tarun3679/orgs",
"repos_url": "https://api.github.com/users/Tarun3679/repos",
"events_url": "https://api.github.com/users/Tarun3679/events{/privacy}",
"received_events_url": "https://api.github.com/users/Tarun3679/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Please use the [forums](https://discuss.huggingface.co/) for such questions as we keep issues for bugs and feature requests only.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,692 | 1,695 | 1,695 |
NONE
| null |
### System Info
the current version should be fine.
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Is there a way to use AutoModelForCausalLM with MLM?
### Expected behavior
Being able to do MLM pretraining and fine-tuning on LLMs like MPT.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25528/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25528/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25527
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25527/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25527/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25527/events
|
https://github.com/huggingface/transformers/pull/25527
| 1,852,430,617 |
PR_kwDOCUB6oc5YBZ3o
| 25,527 |
Draft
|
{
"login": "lishukan",
"id": 23066239,
"node_id": "MDQ6VXNlcjIzMDY2MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/23066239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lishukan",
"html_url": "https://github.com/lishukan",
"followers_url": "https://api.github.com/users/lishukan/followers",
"following_url": "https://api.github.com/users/lishukan/following{/other_user}",
"gists_url": "https://api.github.com/users/lishukan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lishukan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lishukan/subscriptions",
"organizations_url": "https://api.github.com/users/lishukan/orgs",
"repos_url": "https://api.github.com/users/lishukan/repos",
"events_url": "https://api.github.com/users/lishukan/events{/privacy}",
"received_events_url": "https://api.github.com/users/lishukan/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[] | 1,692 | 1,692 | 1,692 |
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25527/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25527/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25527",
"html_url": "https://github.com/huggingface/transformers/pull/25527",
"diff_url": "https://github.com/huggingface/transformers/pull/25527.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25527.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/25526
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25526/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25526/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25526/events
|
https://github.com/huggingface/transformers/issues/25526
| 1,852,317,048 |
I_kwDOCUB6oc5uaB14
| 25,526 |
Add FastViT model
|
{
"login": "atturaioe",
"id": 76523524,
"node_id": "MDQ6VXNlcjc2NTIzNTI0",
"avatar_url": "https://avatars.githubusercontent.com/u/76523524?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/atturaioe",
"html_url": "https://github.com/atturaioe",
"followers_url": "https://api.github.com/users/atturaioe/followers",
"following_url": "https://api.github.com/users/atturaioe/following{/other_user}",
"gists_url": "https://api.github.com/users/atturaioe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/atturaioe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/atturaioe/subscriptions",
"organizations_url": "https://api.github.com/users/atturaioe/orgs",
"repos_url": "https://api.github.com/users/atturaioe/repos",
"events_url": "https://api.github.com/users/atturaioe/events{/privacy}",
"received_events_url": "https://api.github.com/users/atturaioe/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] |
open
| false | null |
[] |
[
"I can go and try to add it in case",
"It's from Apple, and the repo. has 1.2K. How do you think about this one? @amyeroberts @NielsRogge ?",
"I'm pro! @atturaioe, if you want to add, feel free to open a PR and tag one of us or @rafaelpadilla when the PR is ready for review or you have questions about adding it to the library. ",
"Got it, I'm already on it."
] | 1,692 | 1,693 | null |
CONTRIBUTOR
| null |
### Model description
FastViT is a hybrid vision transformer that uses structural reparameterization to obtain lower memory access cost and increased capacity, achieving a state-of-the-art accuracy-latency trade-off. It is highly efficient on multiple compute fabrics: mobile devices and desktop-grade GPUs.
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
Paper: https://arxiv.org/pdf/2303.14189.pdf
Github(code and weights): https://github.com/apple/ml-fastvit
Authors: @anuragranj, @pavank-apple
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25526/reactions",
"total_count": 7,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 2,
"confused": 0,
"heart": 0,
"rocket": 2,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25526/timeline
| null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/25525
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25525/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25525/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25525/events
|
https://github.com/huggingface/transformers/issues/25525
| 1,852,310,030 |
I_kwDOCUB6oc5uaAIO
| 25,525 |
End2end pytorch lightning errors
|
{
"login": "albertsun1",
"id": 114700193,
"node_id": "U_kgDOBtYvoQ",
"avatar_url": "https://avatars.githubusercontent.com/u/114700193?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertsun1",
"html_url": "https://github.com/albertsun1",
"followers_url": "https://api.github.com/users/albertsun1/followers",
"following_url": "https://api.github.com/users/albertsun1/following{/other_user}",
"gists_url": "https://api.github.com/users/albertsun1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertsun1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertsun1/subscriptions",
"organizations_url": "https://api.github.com/users/albertsun1/orgs",
"repos_url": "https://api.github.com/users/albertsun1/repos",
"events_url": "https://api.github.com/users/albertsun1/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertsun1/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"I guess you are also using a newer Transformers version. My advice is to use the latest Transformers and Lightning. I can help with debugging the lightning errors. \r\n",
"Hey, thanks so much for responding so quickly. When I upgraded to the latest stable version of Lightning (2.0.7) and Transformers (4.31), I ran into an issue where the most recent update of PyTorch Lightning removed support for `pl.Trainer.add_argparse_args` (https://github.com/hpcaitech/ColossalAI/issues/2938). As such, I got the following error: \r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/home/albertsun/transformers/examples/research_projects/rag-end2end-retriever/finetune_rag.py\", line 810, in <module>\r\n parser = pl.Trainer.add_argparse_args(parser)\r\nAttributeError: type object 'Trainer' has no attribute 'add_argparse_args'\r\n```\r\n\r\nI'm not too familiar with PyTorch Lightning; do you know if there's a work-around for this parser code in `finetune_rag.py`? Thanks!",
"yes, the latest version of Trainer doesn't have such a thing. \r\n\r\nYou can manually enter.\r\n\r\nhttps://lightning.ai/docs/pytorch/stable/common/trainer.html\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,692 | 1,695 | 1,695 |
NONE
| null |
### System Info
- `transformers` version: 4.31.0
- Platform: Linux-4.19.0-25-cloud-amd64-x86_64-with-glibc2.28
- Python version: 3.9.17
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.2
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Y
- Using distributed or parallel set-up in script?: Y
### Who can help?
@shamanez
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Hey @shamanez, I'm attempting to run `sh ./test_run/test_finetune.sh` using one GPU. Unfortunately, I've been running into errors with PyTorch lightning. I've tried using PyTorch Lightning version 1.6.4 as recommended in the requirements.txt, but I've gotten errors. This other thread seemed to get the same type of bugs: #22210
- **PyTorch Lightning Versions 1.6/1.6.4/1.6.5**: I get the following error:
> pytorch_lightning.utilities.exceptions.MisconfigurationException: The provided lr scheduler `LambdaLR` doesn't follow PyTorch's LRScheduler API. You should override the `LightningModule.lr_scheduler_step` hook with your own logic if you are using a custom LR scheduler.
I've also experimented with other versions to see if I could get it fixed, but it still doesn't work:
- **PyTorch Lightning Version 1.5**:
> pytorch_lightning.utilities.exceptions.MisconfigurationException: You passed `devices=auto` but haven't specified `accelerator=('auto'|'tpu'|'gpu'|'ipu'|'cpu')` for the devices mapping, got `accelerator=None`.
I tried adding `accelerator='gpu'` or `accelerator='auto'` as parameters to the Trainer code, but doing either simply gave me the same error.
- **PyTorch Lightning Versions 1.8/1.9**: I get the following error:
> module 'pytorch_lightning' has no attribute 'profiler'
### Expected behavior
I'd expect the code to train a RAG end-to-end model, but it has this bug before we can start training the model.
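
A minimal sketch of the workaround mentioned in the thread for Lightning 2.x, where `pl.Trainer.add_argparse_args` no longer exists; the specific arguments declared here are illustrative (the real `finetune_rag.py` needs many more):

```python
import argparse

import pytorch_lightning as pl

parser = argparse.ArgumentParser()
# pl.Trainer.add_argparse_args was removed in Lightning 2.x, so declare the
# Trainer options the script actually uses by hand.
parser.add_argument("--accelerator", type=str, default="gpu")
parser.add_argument("--devices", type=int, default=1)
parser.add_argument("--max_epochs", type=int, default=10)
args = parser.parse_args()

trainer = pl.Trainer(
    accelerator=args.accelerator,
    devices=args.devices,
    max_epochs=args.max_epochs,
)
```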
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25525/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25525/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25524
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25524/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25524/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25524/events
|
https://github.com/huggingface/transformers/pull/25524
| 1,852,110,838 |
PR_kwDOCUB6oc5YAT7L
| 25,524 |
Add ViTDet
|
{
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@amyeroberts thanks for your review, I've addressed all the comments"
] | 1,692 | 1,693 | 1,693 |
CONTRIBUTOR
| null |
# What does this PR do?
This PR adds part 1 of #25051, namely the ViTDet backbone, introduced in [Exploring Plain Vision Transformer Backbones for Object Detection](https://arxiv.org/abs/2203.16527).
Note that this PR only adds the backbone, hence there are no checkpoints compatible with the backbone alone. Those can only be added once either ViTMatte or Mask R-CNN is added, both of which use ViTDet as a backbone.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25524/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25524/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25524",
"html_url": "https://github.com/huggingface/transformers/pull/25524",
"diff_url": "https://github.com/huggingface/transformers/pull/25524.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25524.patch",
"merged_at": 1693299832000
}
|