| Column | Type |
|---|---|
| url | string (length 62-66) |
| repository_url | string (1 class) |
| labels_url | string (length 76-80) |
| comments_url | string (length 71-75) |
| events_url | string (length 69-73) |
| html_url | string (length 50-56) |
| id | int64 (377M-2.15B) |
| node_id | string (length 18-32) |
| number | int64 (1-29.2k) |
| title | string (length 1-487) |
| user | dict |
| labels | list |
| state | string (2 classes) |
| locked | bool (2 classes) |
| assignee | dict |
| assignees | list |
| comments | list |
| created_at | int64 (1.54k-1.71k) |
| updated_at | int64 (1.54k-1.71k) |
| closed_at | int64 (1.54k-1.71k) ⌀ |
| author_association | string (4 classes) |
| active_lock_reason | string (2 classes) |
| body | string (length 0-234k) ⌀ |
| reactions | dict |
| timeline_url | string (length 71-75) |
| state_reason | string (3 classes) |
| draft | bool (2 classes) |
| pull_request | dict |
https://api.github.com/repos/huggingface/transformers/issues/24718
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24718/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24718/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24718/events
|
https://github.com/huggingface/transformers/issues/24718
| 1,794,902,750 |
I_kwDOCUB6oc5q_Are
| 24,718 |
Speech recognition with CTC runs not reproducible
|
{
"login": "bhavitvyamalik",
"id": 19718818,
"node_id": "MDQ6VXNlcjE5NzE4ODE4",
"avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bhavitvyamalik",
"html_url": "https://github.com/bhavitvyamalik",
"followers_url": "https://api.github.com/users/bhavitvyamalik/followers",
"following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}",
"gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions",
"organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs",
"repos_url": "https://api.github.com/users/bhavitvyamalik/repos",
"events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}",
"received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @bhavitvyamalik, \r\n\r\nIf running the `run_speech_recognition_ctc.py` script, could you share the command being run, including all argument settings? \r\n\r\nIs the script being run on a single or multiple GPUs? \r\n\r\ncc @sanchit-gandhi ",
"The script is being run on single GPU. I'm training on Multilingual Librispeech dataset (English version).\r\n\r\n```\r\nDOMAINS=\"mls_en\"\r\nPYTHON_FILE=${PROJECT_ROOT}/\"dom_finetune/run_speech_recognition_ctc.py\"\r\n\r\nCUDA_VISIBLE_DEVICES=0 python ${PYTHON_FILE} \\\r\n --model_name_or_path=\"facebook/hubert-base-ls960\" \\\r\n --domains=${DOMAINS} \\\r\n --num_train_epochs=\"20\" \\\r\n --per_device_train_batch_size=\"4\" \\\r\n --per_device_eval_batch_size=\"8\" \\\r\n --gradient_accumulation_steps=\"2\" \\\r\n --preprocessing_num_workers=\"16\" \\\r\n --learning_rate=\"3e-5\" \\\r\n --lr_scheduler_type=\"constant\" \\\r\n --logging_steps=\"25\" \\\r\n --evaluation_strategy=\"epoch\" \r\n --save_strategy=\"epoch\" \\\r\n --load_best_model_at_end=true \\\r\n --metric_for_best_model=\"wer\" \\\r\n --greater_is_better=false \\\r\n --text_column_name=\"transcription\" \\\r\n --length_column_name=\"input_length\" \\\r\n --layerdrop=\"0.0\" \\\r\n --save_total_limit=\"1\" \\\r\n --freeze_feature_encoder \\\r\n --chars_to_ignore , ? . ! \\\r\n --output_dir \"/disk/scratch1/\" \\\r\n --group_by_length \\\r\n --do_train --do_eval --do_predict\r\n```",
"@bhavitvyamalik Thanks for the additional information. Could you also share the WER results seen after different runs? i.e. how different are they typically? ",
"WER: 0.6475 and 0.651 using same seed 42. The loss numbers remain very similar initially but after a point (roughly after 2nd epoch) they start differing at 3rd decimal place",
"Hey @bhavitvyamalik - could you also share the script you're using to fine-tune? Since it differs from the original example script, it's not possible to say whether the non-determinism comes from the 🤗 Trainer, or the data pre-processing. I see that the data arguments in your script differ from those in the example, so would be interested in checking what data pre-processing strategy is employed!\r\n\r\nIt would also be super helpful to have the dataset as well so that we can run it locally as well for reproducibility",
"Hi @sanchit-gandhi, I'm using similar data pre-processing given in the official script. Line 412-421 is the only change I've done to the official script to use `audiofolder` functionality of `datasets`. I'm using 10h English data of MLS for training and full dev, test data for validation and testing respectively.\r\n\r\nHere is the link to the script: https://gist.github.com/bhavitvyamalik/948d6ca9f42e6c4d70fb8a2f037b4c88\r\n\r\nI will upload the dataset in a while to dataset hub. Thank you!",
"Link to dataset: https://huggingface.co/datasets/bhavitvyamalik/mls_english_10h\r\n",
"Thanks @bhavitvyamalik - running two runs concurrently now:\r\n1. Run 1: https://wandb.ai/sanchit-gandhi/huggingface/runs/0auf9oue?workspace=user-sanchit-gandhi\r\n2. Run 2: https://wandb.ai/sanchit-gandhi/huggingface/runs/jsbzkm4o?workspace=user-sanchit-gandhi",
"The runs are indeed not identical, e.g. comparing the eval loss:\r\n\r\n\r\n\r\nThis is pretty strange behaviour considering we fix the same seed in both cases and use the same training arguments.",
"cc'ing @muellerzr and @pacman100 here - for context, we're fine-tuning a CTC model for ASR using the examples script [run_speech_recognition_ctc.py](https://github.com/huggingface/transformers/blob/main/examples/pytorch/speech-recognition/run_speech_recognition_ctc.py) and the arguments:\r\n\r\n<details>\r\n<summary> run_mls.sh </summary>\r\n\r\n```bash\r\n#!/usr/bin/env bash\r\n\r\npython run_speech_recognition_ctc.py \\\r\n --model_name_or_path=\"facebook/hubert-base-ls960\" \\\r\n --dataset_name \"bhavitvyamalik/mls_english_10h\" \\\r\n --num_train_epochs=\"20\" \\\r\n --per_device_train_batch_size=\"4\" \\\r\n --per_device_eval_batch_size=\"8\" \\\r\n --gradient_accumulation_steps=\"2\" \\\r\n --preprocessing_num_workers=\"16\" \\\r\n --train_split_name \"train\" \\\r\n --eval_split_name \"eval\" \\\r\n --learning_rate=\"3e-5\" \\\r\n --lr_scheduler_type=\"constant\" \\\r\n --logging_steps=\"25\" \\\r\n --evaluation_strategy=\"epoch\" \\\r\n --save_strategy=\"epoch\" \\\r\n --load_best_model_at_end True \\\r\n --metric_for_best_model=\"wer\" \\\r\n --greater_is_better False \\\r\n --text_column_name=\"transcription\" \\\r\n --length_column_name=\"input_length\" \\\r\n --layerdrop=\"0.0\" \\\r\n --save_total_limit=\"1\" \\\r\n --freeze_feature_encoder \\\r\n --chars_to_ignore , ? . ! \\\r\n --output_dir \"./\" \\\r\n --group_by_length \\\r\n --overwrite_output_dir \\\r\n --do_train \\\r\n --do_eval\r\n```\r\n\r\n</details>\r\n\r\nHowever, the training runs are not reproducible, even when we use the same seed. The runs do not give the same eval loss and eval WER between training runs (see above plot). The training loss also diverges after approx 800 training steps (see [logs](https://wandb.ai/sanchit-gandhi/huggingface?workspace=user-sanchit-gandhi)).\r\n\r\nWondering whether there's any non-determinism that we can try and investigate with the new `accelerate` powered trainer? Or whether we put this down to numerical differences?",
"Alright after leaving the runs to continue for the full length of training, we see that the run 1 and run 2 are to within 0.01 of each other on pretty much all metrics: https://wandb.ai/sanchit-gandhi/huggingface?workspace=user-sanchit-gandhi\r\n\r\nSo I think we can conclude the seed is set correctly (the differences would be much larger if this wasn't the case). So probably what we're seeing is the effect of numerical differences accumulated over many thousands of ops? I still would have thought the two runs would be exactly the same since I've run them on the same hardware, same env, same seed etc.\r\n\r\nWould be interested in hearing whether you agree here both!",
"@sanchit-gandhi Thanks for digging into this 🕵️♂️ ! \r\n\r\nYes, I agree, it looks there's just some small numerical differences creeping in. Given how tricky these things are to investigate and how small the differences are, it's not something I think is worth investigating further. \r\n\r\nIf someone from the community is interested and wants to dig into this more, then we will still welcome links to relevant write-ups or results in this issue. ",
"Doing a quick run with Transformers v4.27.4 to see whether the Trainer was reproducible to within 0.01 prior to the `accelerate` integration: https://wandb.ai/sanchit-gandhi/huggingface?workspace=user-sanchit-gandhi\r\n\r\nIf the behaviour is the same as it is on `main` with the `accelerate` back-end, I think we can safely conclude this is an accumulation of numerical errors.",
"We're looking into it on the accelerate side for fixing. Thanks for the flag",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Update: looks like the differences are approx the same for `transformers==main` as they are `transformers==4.27.4`(see https://wandb.ai/sanchit-gandhi/huggingface?workspace=user-sanchit-gandhi) => we can safely conclude this is an accumulation of numerical errors\r\n\r\n(click image below to get zoomed in version)\r\n\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,688 | 1,693 | 1,693 |
CONTRIBUTOR
| null |
### System Info
- `transformers` version: 4.30.1
- Platform: Linux-3.10.0-1160.71.1.el7.x86_64-x86_64-with-glibc2.17
- Python version: 3.8.16
- Huggingface_hub version: 0.16.4
- PyTorch version (GPU?): 1.9.0+cu102 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
@sg
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I'm using the official `run_speech_recognition_ctc.py` on a single GPU. I ran this for pretrained `hubert` twice with the same seed, but each time I get a different WER on the test set.
### Expected behavior
It should return the same WER when run with the same seed.
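Not something prescribed in this thread, but for reference a minimal sketch of the standard determinism settings one could try when chasing run-to-run differences on GPU; `set_seed` is the helper the example scripts already use, the rest are plain PyTorch/CUDA switches:
```python
import os

import torch
from transformers import set_seed

# Assumption: generic determinism knobs, not a confirmed fix for this issue.
os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"  # needed by some deterministic CUDA kernels

set_seed(42)  # seeds Python, NumPy and torch (including CUDA)

torch.backends.cudnn.benchmark = False     # disable autotuned, potentially non-deterministic conv algorithms
torch.backends.cudnn.deterministic = True  # prefer deterministic cuDNN kernels
torch.use_deterministic_algorithms(True)   # raise on known non-deterministic ops
```
Even with all of this, small floating-point differences can accumulate over many thousands of ops, which is the conclusion the thread converges on.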
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24718/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24718/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24717
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24717/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24717/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24717/events
|
https://github.com/huggingface/transformers/issues/24717
| 1,794,877,546 |
I_kwDOCUB6oc5q-6hq
| 24,717 |
Possibly a bug in Pix2Struct outputs
|
{
"login": "artyomxyz",
"id": 5408270,
"node_id": "MDQ6VXNlcjU0MDgyNzA=",
"avatar_url": "https://avatars.githubusercontent.com/u/5408270?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/artyomxyz",
"html_url": "https://github.com/artyomxyz",
"followers_url": "https://api.github.com/users/artyomxyz/followers",
"following_url": "https://api.github.com/users/artyomxyz/following{/other_user}",
"gists_url": "https://api.github.com/users/artyomxyz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/artyomxyz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/artyomxyz/subscriptions",
"organizations_url": "https://api.github.com/users/artyomxyz/orgs",
"repos_url": "https://api.github.com/users/artyomxyz/repos",
"events_url": "https://api.github.com/users/artyomxyz/events{/privacy}",
"received_events_url": "https://api.github.com/users/artyomxyz/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @artyomxyz, thanks for reporting this issue.\r\n\r\nYes, you're right, there is a current issue with indexing in the Pix2Struct model. Related PR: #23985\r\n\r\ncc @younesbelkada ",
"> Hi @loveisp ! Again thanks for your contribution on this Can you share with us why this PR got closed? The PR should also fix #24717 so it would be great to merge it :D\r\n\r\nI initially made the change like this, but it didn't pass all the tests. The len(layer_outputs) here seemed a bit strange, so I changed it to what came later. Even though it can pass all the tests, there are still issues with the underlying logic. Regarding this matter, you can take a look at my discussion with @amyeroberts . I realized that I cannot fix this bug in a short amount of time, so I closed it. Will it be sufficient to make this change as he suggested, so that it passes all the tests? If so, then go ahead and make the change.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"For reference, a later PR was merged which (should have) resolved this: #25200",
"Hello, I have a question, How does `Pix2StructForConditionalGeneration.decoder` works for `doc-vqa`? \r\nI have got output from `Pix2StructForConditionalGeneration.encoder` which includes `last_hidden_state=[some tensor], hidden_states=None, attentions=None`.\r\n\r\nI'm trying to manually use Encoder and Decoder part",
"Hi @rish-hyun. \r\n\r\nThis is a question best placed in our [forums](https://discuss.huggingface.co/). We try to reserve the github issues for feature requests and bug reports.\r\n\r\nIf you want to see examples of how to use a model, you can check out the docs and model code: \r\n* https://huggingface.co/docs/transformers/model_doc/pix2struct#transformers.Pix2StructForConditionalGeneration.forward.example\r\n* For example, you can see the model output of the [encoder here](https://github.com/huggingface/transformers/blob/e7b001db4fbd33d77de95cf684d13d7605660d1b/src/transformers/models/pix2struct/modeling_pix2struct.py#L654). ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,688 | 1,694 | 1,694 |
NONE
| null |
I'm sorry if I'm wrong, as I don't have much experience with transformers internals.
I was playing with Pix2Struct and trying to visualise attention on the input image. The shape of `output.cross_attentions` didn't make much sense, as it didn't have `patch_count` as any of its dimensions. After inspecting `modeling_pix2struct.py` I noticed the following
```
# layer_outputs = hidden-states, key-value-states (self-attention position bias), (self-attention weights),
# (cross-attention position bias), (cross-attention weights)
```
And then later
```
if output_attentions:
all_attentions = all_attentions + (layer_outputs[2],)
all_cross_attentions = all_cross_attentions + (layer_outputs[3],)
```
As I understand it, `layer_outputs[3]` is `(self-attention weights)` and it should be replaced with `layer_outputs[5]`, which is `(cross-attention weights)`. The same goes for `(layer_outputs[2],) => (layer_outputs[3],)`.
Does it make sense, or am I getting something wrong?
I tried to patch it locally and the output and visualisation make sense (they highlight the image patch containing the information for the token).
https://github.com/huggingface/transformers/blob/fe861e578f50dc9c06de33cd361d2f625017e624/src/transformers/models/pix2struct/modeling_pix2struct.py#L1550C74-L1550C74
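For illustration only, a sketch of the local patch described above, assuming the tuple layout given in the quoted comment (index 3 = self-attention weights, index 5 = cross-attention weights); the eventual upstream fix was handled via the PRs linked in the comments:
```python
# Inside the Pix2Struct text decoder loop (sketch of the described patch, not the merged fix):
if output_attentions:
    all_attentions = all_attentions + (layer_outputs[3],)              # self-attention weights
    all_cross_attentions = all_cross_attentions + (layer_outputs[5],)  # cross-attention weights
```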
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24717/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24717/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24716
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24716/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24716/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24716/events
|
https://github.com/huggingface/transformers/issues/24716
| 1,794,840,648 |
I_kwDOCUB6oc5q-xhI
| 24,716 |
Loading pretrained RobertaModel: size mismatch error
|
{
"login": "aipursuing",
"id": 62983292,
"node_id": "MDQ6VXNlcjYyOTgzMjky",
"avatar_url": "https://avatars.githubusercontent.com/u/62983292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aipursuing",
"html_url": "https://github.com/aipursuing",
"followers_url": "https://api.github.com/users/aipursuing/followers",
"following_url": "https://api.github.com/users/aipursuing/following{/other_user}",
"gists_url": "https://api.github.com/users/aipursuing/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aipursuing/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aipursuing/subscriptions",
"organizations_url": "https://api.github.com/users/aipursuing/orgs",
"repos_url": "https://api.github.com/users/aipursuing/repos",
"events_url": "https://api.github.com/users/aipursuing/events{/privacy}",
"received_events_url": "https://api.github.com/users/aipursuing/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"@aixiaobaikyh Please follow the issue template and fill out all the requesting information such as the transformers version being run. \r\n\r\nIf running on a version of transformers released in the past year, the error message shared here is not the full error message that is printed out. The final part instructs on how to resolve: \r\n```\r\n You may consider adding `ignore_mismatched_sizes=True` in the model `from_pretrained` method.\r\n```",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,688 | 1,692 | 1,692 |
NONE
| null |
My code is as follows:
`config = RobertaConfig.from_pretrained("roberta-base", max_position_embeddings=2048)`
`model = RobertaModel.from_pretrained('roberta-base',config = config)`
then I get the following error:
`size mismatch for roberta.embeddings.position_embeddings.weight: copying a param with shape torch.Size([514, 768]) from checkpoint, the shape in current model is torch.Size([2048, 768]).`
How can I solve this problem? If I want to expand the length of input sentences, what should I do?
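Not part of the original report, but a minimal sketch of the workaround the full error message points to (`ignore_mismatched_sizes=True`): it loads the checkpoint while re-initialising the resized position-embedding table, so the new positions start out randomly initialised and need further training:
```python
from transformers import RobertaConfig, RobertaModel

config = RobertaConfig.from_pretrained("roberta-base", max_position_embeddings=2048)

# Skip copying tensors whose shapes no longer match the checkpoint
# (here roberta.embeddings.position_embeddings.weight: 514 -> 2048 positions).
model = RobertaModel.from_pretrained(
    "roberta-base",
    config=config,
    ignore_mismatched_sizes=True,
)
```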
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24716/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24716/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24715
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24715/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24715/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24715/events
|
https://github.com/huggingface/transformers/issues/24715
| 1,794,459,321 |
I_kwDOCUB6oc5q9Ua5
| 24,715 |
Generate function
|
{
"login": "Dongximing",
"id": 35741613,
"node_id": "MDQ6VXNlcjM1NzQxNjEz",
"avatar_url": "https://avatars.githubusercontent.com/u/35741613?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Dongximing",
"html_url": "https://github.com/Dongximing",
"followers_url": "https://api.github.com/users/Dongximing/followers",
"following_url": "https://api.github.com/users/Dongximing/following{/other_user}",
"gists_url": "https://api.github.com/users/Dongximing/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Dongximing/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Dongximing/subscriptions",
"organizations_url": "https://api.github.com/users/Dongximing/orgs",
"repos_url": "https://api.github.com/users/Dongximing/repos",
"events_url": "https://api.github.com/users/Dongximing/events{/privacy}",
"received_events_url": "https://api.github.com/users/Dongximing/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @Dongximing, \r\n\r\nThe `decapoda-research/llama-7b-hf` checkpoint shouldn't be used. The tokenizer and weights were released before the Llama PR was merged and are not compatible with the Llama implementation in transformers. There are other checkpoints (e.g. [this one](https://huggingface.co/huggyllama/llama-7b)) which are compatible. ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,688 | 1,692 | 1,692 |
NONE
| null |
### System Info
The model is llama and the tokenizer is also llama (decapoda-research/llama-7b-hf)
tokenizer = LlamaTokenizer.from_pretrained('decapoda-research/llama-7b-hf',add_special_tokens=False,add_bos_token = False)
I have a question about the model generation.
```python
prompt = """Tell me some things about NBA"""
input_tokenized_info = tokenizer(prompt, return_tensors="pt")
input_ids, attention_mask = input_tokenized_info['input_ids'], input_tokenized_info['attention_mask']
input_ids = input_ids.to('cuda')
attention_mask = attention_mask.to('cuda')
outputs = model.generate(input_ids=input_ids, attention_mask=attention_mask,num_beams = 10,no_repeat_ngram_size=1,max_length=200,\
return_dict_in_generate=True,output_scores=True,length_penalty=0.9)
print(len(outputs[0][0]))
18
print(len(outputs.scores))
194
print(outputs[0][0])
tensor([24948, 592, 777, 2712, 1048, 21517, 29871, 29906, 29968, 29896,
29929, 341, 29911, 3189, 1144, 29889, 2, 1],
device='cuda:0')
print(tokenizer.decode(outputs[0][0], skip_special_tokens=True))
'Tell me some things about NBA 2K19 MT Coins.'
```
I think the scores length should be the same as the (output - input) length.
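Not from the report itself, but a sketch of how one might inspect this, assuming the variables from the snippet above: with beam search, `outputs.scores` has one entry per decoding step of the search, which can be larger than the number of tokens kept in the best finished hypothesis; recent `transformers` versions also offer `compute_transition_scores` to map the scores back to the tokens that were actually kept.
```python
# Hypothetical continuation of the snippet above (model, tokenizer, input_ids, outputs assumed defined).
num_new_tokens = outputs.sequences.shape[-1] - input_ids.shape[-1]
print(num_new_tokens)       # tokens generated for the returned (best) beam
print(len(outputs.scores))  # one entry per beam-search step; can be larger than num_new_tokens

# Only available in recent transformers releases; maps beam-search scores to the kept tokens.
transition_scores = model.compute_transition_scores(
    outputs.sequences, outputs.scores, outputs.beam_indices, normalize_logits=False
)
print(transition_scores.shape)
```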
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I think the (output - input) size should be the same as the scores size. Also, why is the last token 1, and why does generation stop there?
### Expected behavior
How can I generate a good output? For example, if I set max_length=100, the output should stop naturally and include some punctuation such as ",", rather than printing 100 tokens or stopping at the BOS token.
Thank you
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24715/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24715/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24714
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24714/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24714/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24714/events
|
https://github.com/huggingface/transformers/issues/24714
| 1,794,246,484 |
I_kwDOCUB6oc5q8gdU
| 24,714 |
find_unused_parameters is not passed from Trainer to Sagemaker DistributedModel
|
{
"login": "marcuscollins",
"id": 7783900,
"node_id": "MDQ6VXNlcjc3ODM5MDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/7783900?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/marcuscollins",
"html_url": "https://github.com/marcuscollins",
"followers_url": "https://api.github.com/users/marcuscollins/followers",
"following_url": "https://api.github.com/users/marcuscollins/following{/other_user}",
"gists_url": "https://api.github.com/users/marcuscollins/gists{/gist_id}",
"starred_url": "https://api.github.com/users/marcuscollins/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/marcuscollins/subscriptions",
"organizations_url": "https://api.github.com/users/marcuscollins/orgs",
"repos_url": "https://api.github.com/users/marcuscollins/repos",
"events_url": "https://api.github.com/users/marcuscollins/events{/privacy}",
"received_events_url": "https://api.github.com/users/marcuscollins/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Update: the workaround does not appear to solve the problem either. We need some way to pass this argument into SageMaker's version of parallelism (which I think still uses torch's DistributedDataParallel under the hood.)",
"Possibly @pacman100 might know about this? ",
"Are we sure this SageMaker class actually supports this argument with the same name as PyTorch?",
"I just wanted to add that this is a problem with [run.plm](https://github.com/huggingface/transformers/blob/main/examples/pytorch/language-modeling/run_plm.py) as well, but the example for [question-answer](https://github.com/huggingface/notebooks/blob/main/sagemaker/03_distributed_training_data_parallelism/sagemaker-notebook.ipynb) does work.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,688 | 1,692 | 1,692 |
NONE
| null |
### System Info
When trying to launch a model on SageMaker with the Huggingface Estimator and the `transformers.Trainer` class, I discovered that the trainer argument `ddp_find_unused_parameters` is not passed to the SageMaker DistributedModel, see https://github.com/huggingface/transformers/blob/495729427045c7a58e040fa9bf6df81c16f54208/src/transformers/trainer.py#L1336
It is possible to work around this by wrapping the model with DistributedModel before passing it to the Trainer, so that I can pass any arguments I want, but really the trainer argument ought to just work.
I'm running fine tuning of a MaskedLM model, using a stripped down version of the example run_mlm.py script.
### Who can help?
@sgugger
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Pass the argument `ddp_find_unused_parameters = True` to the Trainer class in a SageMaker Model Parallel environment, when training one of the ESM models (e.g., Facebook/esm2_t33_650M_UR50D) on sequence data, e.g., agemagician/uniref30.
1. In the training script, e.g., `run_mlm.py`, load a masked LM model, e.g. `AutoModelForMaskedLM("Facebook/esm2_t33_650M_UR50D")`
2. Use a dataset like agemagician/uniref30, by downloading the files to Sagemaker via its "data channels". Load it in the training script using `datasets.load_dataset` with the data_files argument.
Then use a Huggingface estimator in a SageMaker notebook, e.g. this example from AWS: https://github.com/PacktPublishing/Applied-Machine-Learning-and-High-Performance-Computing-on-AWS/blob/main/Chapter12/protein-secondary-structure-model-parallel.ipynb. The model training should crash, indicating that there are unreduced parameters and telling you to use `find_unused_parameters`:
```
[1,mpirank:0,algo-1]<stderr>:RuntimeError: Expected to have finished reduction in the prior iteration before
[1,mpirank:0,algo-1]<stderr>:starting a new one. This error indicates that your module has parameters that
[1,mpirank:0,algo-1]<stderr>:were not used in producing loss. You can enable unused parameter detection by
[1,mpirank:0,algo-1]<stderr>:passing the keyword argument `find_unused_parameters=True` to
[1,mpirank:0,algo-1]<stderr>:`torch.nn.parallel.DistributedDataParallel`, and by
[1,mpirank:0,algo-1]<stderr>:making sure all `forward` function outputs participate in calculating loss.
[1,mpirank:0,algo-1]<stderr>:If you already have done the above, then the distributed data parallel module
[1,mpirank:0,algo-1]<stderr>:wasn't able to locate the output tensors in the return value of your module's
[1,mpirank:0,algo-1]<stderr>:`forward` function. Please include the loss function and the structure of the
[1,mpirank:0,algo-1]<stderr>:return value of `forward` of your module when reporting this issue (e.g. list,
[1,mpirank:0,algo-1]<stderr>:dict, iterable).
[1,mpirank:0,algo-1]<stderr>:Parameter indices which did not receive grad for rank 0: 1 132 133
[1,mpirank:0,algo-1]<stderr>: In addition, you can set the environment variable TORCH_DISTRIBUTED_DEBUG to
[1,mpirank:0,algo-1]<stderr>:either INFO or DETAIL to print out information about which particular parameters
[1,mpirank:0,algo-1]<stderr>:did not receive gradient on this rank [1,mpirank:0,algo-1]<stderr>:as part of this error
```
3. Now, using a sagemaker Huggingface estimator, pass the hyper-parameter `{"ddp_find_unused_parameters": True,...}` to the estimator. If using the script `run_mlm.py`, this will be parsed and passed to the `Trainer` class as part of `training_args`. However, you should still see the same error. This is because, as noted above, the argument is not passed to `smp.DistributedModel`.
### Expected behavior
The argument should be passed to `DistributedModel`, so that it can resolve the error noted above.
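For reference, a sketch of how the flag is normally passed through the Trainer API in the regular (non-SageMaker-parallel) DDP path; `model` and `train_dataset` are placeholders here, and the point of this issue is that the value is not forwarded to `smp.DistributedModel` when SageMaker model parallelism is enabled:
```python
from transformers import Trainer, TrainingArguments

training_args = TrainingArguments(
    output_dir="./out",
    # Forwarded to torch.nn.parallel.DistributedDataParallel in the regular DDP path,
    # but (per this report) not to smp.DistributedModel under SageMaker model parallelism.
    ddp_find_unused_parameters=True,
)
trainer = Trainer(model=model, args=training_args, train_dataset=train_dataset)
trainer.train()
```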
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24714/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24714/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24713
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24713/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24713/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24713/events
|
https://github.com/huggingface/transformers/issues/24713
| 1,793,729,952 |
I_kwDOCUB6oc5q6iWg
| 24,713 |
Amazon Sagemaker - huggingface-textgeneration1-gpt-j-6b-fp16
|
{
"login": "Mrin7",
"id": 134509550,
"node_id": "U_kgDOCARz7g",
"avatar_url": "https://avatars.githubusercontent.com/u/134509550?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Mrin7",
"html_url": "https://github.com/Mrin7",
"followers_url": "https://api.github.com/users/Mrin7/followers",
"following_url": "https://api.github.com/users/Mrin7/following{/other_user}",
"gists_url": "https://api.github.com/users/Mrin7/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Mrin7/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Mrin7/subscriptions",
"organizations_url": "https://api.github.com/users/Mrin7/orgs",
"repos_url": "https://api.github.com/users/Mrin7/repos",
"events_url": "https://api.github.com/users/Mrin7/events{/privacy}",
"received_events_url": "https://api.github.com/users/Mrin7/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @Mrin7, thanks for raising an issue! \r\n\r\nWhen creating issues on github, please create a separate issue for each individual question. \r\n\r\nWith regards to your first question, the docs state that the model can fit into 16 GB of RAM for interference, however there may be other processes or objects which have memory requirements resulting in the total amount of GPU RAM needed being above 16 GB. Without knowing exactly what you're running it's not possible to know. \r\n\r\nFor questions on how to deploy on sagemaker, please refer to the docs: https://huggingface.co/docs/sagemaker/inference. If you still have questions, then it's best asked in our [forums](https://discuss.huggingface.co/). We try to reserve the github issues for feature requests and bug reports.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,688 | 1,692 | 1,692 |
NONE
| null |
### System Info
Running on Amazon sagemaker notebook.(https://us-west-2.console.aws.amazon.com/sagemaker/playground?region=us-west-2#/foundation-models/playground/prod-000000021)
ml.m5.xlarge | 4 vcpu | 16 GiB memory
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
(https://huggingface.co/docs/transformers/main/model_doc/gptj) The Hugging Face repo claims that the fp16 model should be able to run in 16 GB of GPU memory for inference. I am using ml.m5.xlarge | 4 vCPU | 16 GiB memory in Amazon SageMaker to deploy the model.
Why am I getting a data load error in Amazon SageMaker?
Can someone please share the steps to deploy a model in SageMaker? Moreover, how do we request that AWS allocate us a larger resource if models won't work on these instances? It seems instance allocation requires an organization; are individuals not able to test models on AWS SageMaker?
### Expected behavior
Error hosting endpoint jumpstart-example-huggingface-textgener-2023-07-07-13-55-56-291: Failed. Reason: Failed to extract model data archive from URL "s3://jumpstart-cache-prod-us-west-2/huggingface-infer/prepack/v1.1.2/infer-prepack-huggingface-textgeneration1-gpt-j-6b-fp16.tar.gz". The model data archive is too large. Please reduce the size of the model data archive or move to an instance type with more memory.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24713/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24713/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24712
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24712/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24712/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24712/events
|
https://github.com/huggingface/transformers/pull/24712
| 1,793,704,417 |
PR_kwDOCUB6oc5U7QV-
| 24,712 |
adding dynamic categorical feature option
|
{
"login": "guyko81",
"id": 10399767,
"node_id": "MDQ6VXNlcjEwMzk5NzY3",
"avatar_url": "https://avatars.githubusercontent.com/u/10399767?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/guyko81",
"html_url": "https://github.com/guyko81",
"followers_url": "https://api.github.com/users/guyko81/followers",
"following_url": "https://api.github.com/users/guyko81/following{/other_user}",
"gists_url": "https://api.github.com/users/guyko81/gists{/gist_id}",
"starred_url": "https://api.github.com/users/guyko81/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/guyko81/subscriptions",
"organizations_url": "https://api.github.com/users/guyko81/orgs",
"repos_url": "https://api.github.com/users/guyko81/repos",
"events_url": "https://api.github.com/users/guyko81/events{/privacy}",
"received_events_url": "https://api.github.com/users/guyko81/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"awesome @guyko81 will have a look! this will be great!\r\n",
"@kashif can you help me please?\r\nwhen I try to add the variables based on the example I struggle getting the dynamic_categorical_features split into past and future values.\r\nWhen I add it to the time features\r\n```\r\ndef create_instance_splitter(\r\n config: PretrainedConfig,\r\n mode: str,\r\n train_sampler: Optional[InstanceSampler] = None,\r\n validation_sampler: Optional[InstanceSampler] = None,\r\n) -> Transformation:\r\n assert mode in [\"train\", \"validation\", \"test\"]\r\n\r\n instance_sampler = {\r\n \"train\": train_sampler\r\n or ExpectedNumInstanceSampler(\r\n num_instances=1.0, min_future=config.prediction_length\r\n ),\r\n \"validation\": validation_sampler\r\n or ValidationSplitSampler(min_future=config.prediction_length),\r\n \"test\": TestSplitSampler(),\r\n }[mode]\r\n\r\n return InstanceSplitter(\r\n target_field=\"values\",\r\n is_pad_field=FieldName.IS_PAD,\r\n start_field=FieldName.START,\r\n forecast_start_field=FieldName.FORECAST_START,\r\n instance_sampler=instance_sampler,\r\n past_length=config.context_length + max(config.lags_sequence),\r\n future_length=config.prediction_length,\r\n time_series_fields=[\"time_features\", \"observed_mask\", \"dynamic_categorical_features\"],\r\n )\r\n```\r\nand use it like this:\r\n```\r\n if config.num_dynamic_categorical_features > 0:\r\n PREDICTION_INPUT_NAMES.append(\"past_dynamic_categorical_features\")\r\n PREDICTION_INPUT_NAMES.append(\"future_dynamic_categorical_features\")\r\n```\r\nI got a shape error:\r\n```\r\nRuntimeError: stack expects each tensor to be equal size, but got [89, 1568] at entry 0 and [161, 1568] at entry 1\r\n```\r\n\r\nBut when I try to simply add it like this:\r\n```\r\n if config.num_dynamic_categorical_features > 0:\r\n PREDICTION_INPUT_NAMES.append(\"dynamic_categorical_features\")\r\n```\r\nI got an error like this:\r\n```\r\nRuntimeError: stack expects each tensor to be equal size, but got [1568, 2] at entry 0 and [1566, 2] at entry 16\r\n```\r\n\r\nSo the second version goes longer, however when a time series is shorter (1566 long vs 1568) it throws an error. I'm just not familiar with gluons enough to feel how to create past and future dynamic_categorical_features.\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,688 | 1,692 | 1,692 |
NONE
| null |
# What does this PR do?
Adding dynamic categorical feature to time_series_transformer
// Have not tested yet!
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24712/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24712/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24712",
"html_url": "https://github.com/huggingface/transformers/pull/24712",
"diff_url": "https://github.com/huggingface/transformers/pull/24712.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24712.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/24711
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24711/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24711/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24711/events
|
https://github.com/huggingface/transformers/issues/24711
| 1,793,539,074 |
I_kwDOCUB6oc5q5zwC
| 24,711 |
Initialize Flax model params on CPU
|
{
"login": "gianlucadetommaso",
"id": 32386694,
"node_id": "MDQ6VXNlcjMyMzg2Njk0",
"avatar_url": "https://avatars.githubusercontent.com/u/32386694?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gianlucadetommaso",
"html_url": "https://github.com/gianlucadetommaso",
"followers_url": "https://api.github.com/users/gianlucadetommaso/followers",
"following_url": "https://api.github.com/users/gianlucadetommaso/following{/other_user}",
"gists_url": "https://api.github.com/users/gianlucadetommaso/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gianlucadetommaso/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gianlucadetommaso/subscriptions",
"organizations_url": "https://api.github.com/users/gianlucadetommaso/orgs",
"repos_url": "https://api.github.com/users/gianlucadetommaso/repos",
"events_url": "https://api.github.com/users/gianlucadetommaso/events{/privacy}",
"received_events_url": "https://api.github.com/users/gianlucadetommaso/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2392046359,
"node_id": "MDU6TGFiZWwyMzkyMDQ2MzU5",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20Second%20Issue",
"name": "Good Second Issue",
"color": "dd935a",
"default": false,
"description": "Issues that are more difficult to do than \"Good First\" issues - give it a try if you want!"
},
{
"id": 2934977194,
"node_id": "MDU6TGFiZWwyOTM0OTc3MTk0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Flax",
"name": "Flax",
"color": "4862AD",
"default": false,
"description": ""
}
] |
open
| false | null |
[] |
[
"Related to this, the `init_weights` method should not initialize all random parameters on GPU when `params` are actually passed (see, for example, [Flax GPT-J](https://github.com/huggingface/transformers/blob/495729427045c7a58e040fa9bf6df81c16f54208/src/transformers/models/gptj/modeling_flax_gptj.py#L401)). This makes me go out-of-memory even if I pass all the parameters to the method, initialized on CPU using `_do_init=False`.",
"cc @sanchit-gandhi ",
"Hey @gianlucadetommaso!\r\n\r\nFor reference, the PR to add the `_do_init` flag was added in this PR: #16148. Feel free to have a read through on what the motivations behind this PR and design were. Think you'd find it interesting!\r\n\r\nI think it would be a nice design to have the params loaded on CPU by default. There's an open PR for this here: #15295. You're more than welcome to pick-up where Boris left off here and finish the PR! The comments detailing the proposed design are quite thorough, but feel free to ping me if you have any other questions or want to clarify something",
"@sanchit-gandhi thanks for the links! As soon as I have time, I can try and do this.\r\n\r\nBy the way, to make sure I am not just doing something wrong, it would help me a lot if you could have a look and comment on [this](https://github.com/google/jax/discussions/16659) discussion. It regards memory consumption of initializing a sharded state using pjit. I think you had a discussion related to it in [this](https://github.com/huggingface/transformers/issues/22224) thread before, thus it'd be great hearing your thoughts.",
"Awesome, sounds great! Had a look at the linked discussion - not entirely sure why we see this behaviour (think it's one for the JAX team to answer), but what you can do is use a few helper functions from the [T5x codebase](https://github.com/google-research/t5x/tree/main) to assist you here.\r\n\r\nYou can load your model into a T5x `Checkpointer` with `use_gda=True` (use global device arrays) on the CPU: https://github.com/huggingface/bloom-jax-inference/blob/2a04aa519d262729d54adef3d19d63879f81ea89/bloom_inference/generator.py#L86\r\n\r\nAnd then save this `Checkpointer` state to a Google Cloud bucket (use the built in save function).\r\n\r\nWhen you then come to loading your state, you can load each shard of your weights onto the mapped devices (so if shard 1 goes on device 1, it'll be loaded straight there, so you won't blow up your memory trying to load a sharded model onto your accelerator device):\r\nhttps://github.com/huggingface/bloom-jax-inference/blob/2a04aa519d262729d54adef3d19d63879f81ea89/bloom_inference/generator.py#L95",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Hey @gianlucadetommaso - going to re-open this one since we have a draft PR in progress at https://github.com/huggingface/transformers/pull/15295\r\n\r\nWould you like to have a go at seeing the PR through to completion?",
"Hi @sanchit-gandhi, right now I would not have the time to drive this myself. Sorry.",
"No worries! Leaving this one open to the community :)"
] | 1,688 | 1,692 | null |
NONE
| null |
### Feature request
Currently, the `from_pretrained` method of Flax models automatically puts model parameters on a single GPU device, if available. For very large models, this is not great, as the model parameters may simply not fit in GPU memory.
In contrast, when passing `_do_init=False` to `from_pretrained`, the parameters are returned on CPU, outside the model.
I would love to have a feature that allows me to initialize model parameters on the device I want - in this case, on CPU - but at the same time initialize the model parameters within the model. Right now I have to pass `_do_init=False` to avoid running out of memory, but this causes inconsistencies with my API.
The feature could be either implemented as just another type (if we detect a numpy type, we initialize on CPU; otherwise on GPU) or as an additional argument, e.g. `initialize_on_cpu: bool = False`.
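For context, a minimal sketch (my own, not part of the proposal) of the current `_do_init=False` workaround mentioned above: the parameters come back as host/NumPy arrays outside the model, and device placement has to be managed by hand. `FlaxGPT2LMHeadModel` is just an arbitrary example class here.
```python
import jax
from transformers import FlaxGPT2LMHeadModel

# With _do_init=False, from_pretrained returns (model, params) and keeps the
# params on the host instead of materialising them on the default accelerator.
model, params = FlaxGPT2LMHeadModel.from_pretrained("gpt2", _do_init=False)

# Move the (possibly transformed/sharded) params to an accelerator only when needed.
params = jax.device_put(params, jax.devices()[0])
outputs = model(jax.numpy.ones((1, 8), dtype="i4"), params=params)
```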
### Motivation
Described above. Another reason is to be more consistent with the PyTorch behaviour, where parameters are initialized (as a generator) on CPU.
### Your contribution
If we agree on the design, I am happy to add this myself.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24711/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24711/timeline
|
reopened
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24710
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24710/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24710/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24710/events
|
https://github.com/huggingface/transformers/issues/24710
| 1,793,493,012 |
I_kwDOCUB6oc5q5ogU
| 24,710 |
Inheritance issue with _LazyConfigMapping
|
{
"login": "ZhiyuanChen",
"id": 28757366,
"node_id": "MDQ6VXNlcjI4NzU3MzY2",
"avatar_url": "https://avatars.githubusercontent.com/u/28757366?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ZhiyuanChen",
"html_url": "https://github.com/ZhiyuanChen",
"followers_url": "https://api.github.com/users/ZhiyuanChen/followers",
"following_url": "https://api.github.com/users/ZhiyuanChen/following{/other_user}",
"gists_url": "https://api.github.com/users/ZhiyuanChen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ZhiyuanChen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ZhiyuanChen/subscriptions",
"organizations_url": "https://api.github.com/users/ZhiyuanChen/orgs",
"repos_url": "https://api.github.com/users/ZhiyuanChen/repos",
"events_url": "https://api.github.com/users/ZhiyuanChen/events{/privacy}",
"received_events_url": "https://api.github.com/users/ZhiyuanChen/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi, thank you for pointing out this.\r\n\r\nLooks like you are right. However, this is just a simple utility to make things easier internally and there is no need to over-engineering😅",
"> Hi, thank you for pointing out this.\r\n> \r\n> Looks like you are right. However, this is just a simple utility to make things easier internally and there is no need to over-engineering😅\r\n\r\nThank you for your quick reply. \r\n\r\nI noticed this I was trying to read the model list so that it will arise an error in case an invalid model name is specified. \r\nAs it takes quite some time to set up the environment and build dataset before building model and raise error. \r\n\r\nI wonder if there are better ways to validate arguments before? reading `CONFIG_MAPPING_NAMES` is probably not an good idea as user may register their own model. \r\n\r\nAlso, I noticed there are hundreds of lines in `CONFIG_MAPPING_NAMES`, which could be a bit redundant and has to be modified manually when new algorithms are added. May I try to add some code to find them and construct `CONFIG_MAPPING_NAMES` automatically? ",
"Hello!\r\n\r\n> I wonder if there are better ways to validate arguments before?\r\n\r\nIt's not clear to me what's your use case here. If you specify invalid model name, an error must be given. Do you mean a better error message instead of just a simple key error?\r\n\r\n\r\n> Also, I noticed there are hundreds of lines in CONFIG_MAPPING_NAMES, which could be a bit redundant and has to be modified manually when new algorithms are added.\r\n\r\nIt's actually not edited that frequently 😅 . No need to over engineering in this case (but see below comments too)\r\n\r\n> May I try to add some code to find them and construct CONFIG_MAPPING_NAMES automatically?\r\n\r\nThis list should be very explicit so we know what model types (config) are available in `transformers`. Of course we can try to detect the modules (or easier check the python files), but that could potentially gives wrong results too (and in that case, difficult to reason/figure out).\r\n\r\n",
"> It's not clear to me what's your use case here. If you specify invalid model name, an error must be given. Do you mean a better error message instead of just a simple key error?\r\n\r\nOur model needs some information from dataset to build, so we can only build model after built dataset.\r\n\r\n```python\r\ndataset = Dataset(*args, **kwargs)\r\nmodel = Model(pretrained=xxx, num_outputs=dataset.num_outputs)\r\n```\r\n\r\nFor some tasks, it takes hours to build a dataset. And hence take hours to fail. So, it's better to validate model before building.\r\n\r\n```python\r\nif xxx not in XXX_LIST:\r\n raise RuntimeError(\"Invalid model specified.\")\r\ndataset = Dataset(*args, **kwargs)\r\nmodel = Model(pretrained=xxx, num_outputs=dataset.num_outputs)\r\n```",
"Hi!\r\n\r\nWe need more specific info of the failing you encounter.\r\n\r\nI assume you are saying \r\n\r\n```\r\nmodel = some_hf_model_class.from_pretrained(pretrained_model_name_or_path=\"xxx\")\r\n```\r\nwhere `xxx` is a a model repo name on the Hub (or a local path).\r\n\r\nIn your example, you are passing a model type name ``CONFIG_MAPPING_NAMES .\r\n\r\nHowever, if `Model` is **your custom class** which can be initialized with a model type name, then you have to implement the argument check in your own codebase.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,688 | 1,692 | 1,692 |
NONE
| null |
### System Info
- `transformers` version: 4.30.2
- Platform: Linux-3.10.0-1160.90.1.el7.x86_64-x86_64-with-glibc2.17
- Python version: 3.10.11
- Huggingface_hub version: 0.16.2
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.0.post200 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): 0.6.8 (gpu)
- Jax version: 0.4.8
- JaxLib version: 0.4.7
### Who can help?
@ArthurZucker
@Narsil
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Not Applicable
### Expected behavior
`_LazyConfigMapping` and `_LazyLoadAllMappings` inherit from `OrderedDict`, but they do not use any feature of `OrderedDict`.
It's probably a good idea to merge `self._mapping` into `self` so that the inheritance is meaningful.
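Somewhat tangential to the design point, but relevant to the validation use case raised in the comments: a sketch of checking a model type up front via the public `CONFIG_MAPPING` (an instance of `_LazyConfigMapping`), which supports membership tests through its own `__contains__` rather than anything inherited from `OrderedDict`:
```python
from transformers.models.auto.configuration_auto import CONFIG_MAPPING

def validate_model_type(model_type: str) -> None:
    # Fails fast, before any expensive dataset construction.
    if model_type not in CONFIG_MAPPING:
        raise ValueError(f"Unknown model type {model_type!r}")

validate_model_type("roberta")        # passes silently
# validate_model_type("not-a-model")  # would raise ValueError before any data loading
```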
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24710/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24710/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24709
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24709/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24709/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24709/events
|
https://github.com/huggingface/transformers/pull/24709
| 1,793,322,191 |
PR_kwDOCUB6oc5U58t3
| 24,709 |
Add `Kosmos-2` model
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"any updates?",
"WIP, but a bit slow pace",
"Hi @ArthurZucker Ready for you to take a review a again :-) \r\n\r\nWould be great if we can merge before Thursday 🙏 (if everything is good) as we want to have an announcement with the original Kosmos-2 authors/team. Thank you in advance!",
"I will transfer the checkpoint and update the repo id used in this PR - just before merge it.",
"Hello @ArthurZucker \r\n\r\nI have made all models having flat structures, so we can load models to any another one. Please see [this change](https://github.com/huggingface/transformers/pull/24709/commits/cf501abe1045535425cce3d335ab2db4c103940a#diff-149bd7fa539d3ee4998f0f0335d4f9234081289dcf30a61255dd390e670dce6d)\r\n\r\nUnfortunately, it still require an intermediate layer `Kosmos2LMWrapper` being `Kosmos2PretrainedModel`, so we can use `generate` from `Kosmos2ForCausalLM` and `Kosmos2ForConditionalGeneration`. (ugly I know).\r\n\r\nAs @NielsRogge said [here](https://github.com/huggingface/transformers/pull/24709#discussion_r1342456143), it's probably not worth the effort.\r\n\r\nI propose 3 approaches:\r\n\r\n- Keep previous version (not expose `Kosmos2TextModel` and `Kosmos2ForCausalLM` in the main `__init__`): **implicitly** discourage using them\r\n- Keep previous version, but rename `Kosmos2TextModel` and `Kosmos2ForCausalLM` to `Kosmos2DecoderWrapper` and `Kosmos2LMWrapper`: **explicitly** discourage using them\r\n- Keep the current version here (ugly but works in all case)\r\n\r\nLet me know your final decision.\r\n\r\nRegarding image processor, let's keep it as it is 🙏 \r\n\r\nOther comments are addressed.",
"Hi @ArthurZucker Let me know if the change of `src/transformers/models/kosmos2/modeling_kosmos2.py` in [the last commit](https://github.com/huggingface/transformers/pull/24709/commits/cf501abe1045535425cce3d335ab2db4c103940a#diff-149bd7fa539d3ee4998f0f0335d4f9234081289dcf30a61255dd390e670dce6d) is good or if you want to keep the previous version (see the above comment), if you get some bandwidth today.\r\n\r\n(It's not a big change - just about how layers are structured)\r\n\r\nThank you in advance.",
"Hi @ArthurZucker most comments addressed. Especially no more nested function defined.\r\n\r\n> def remove_special_fields(text):\r\n return re.sub(\"<.*?>\", \"\", text)\r\n\r\nI think one liner sometimes is good to clearly indicate what it does. If you feel strong, I could remove it and just use regex. But regex is not really good for readability.\r\n\r\n> seems like one KEY_MAPPING and using replace could be simpler and not require regex not escaping the dots!\r\n\r\nThe conversion works well and simple enough. If you don't mind, let's just keep as it is?\r\n\r\n> preprocess_text\r\n\r\nI renamed it to `preprocess_examples`\r\n\r\nCould you take another round of review, and let's meet tomorrow to discuss `_add_remove_spaces_around_tag_tokens`?\r\n",
"@ydshieh great work !!!! thanks again!"
] | 1,688 | 1,698 | 1,698 |
COLLABORATOR
| null |
# What does this PR do?
Add `KOSMOS-2` model.
- I decided not to expose `Kosmos2TextModel` and `Kosmos2VisionModel` in the main `__init__` file:
- as they are really only building blocks. Moreover, loading checkpoints of `Kosmos2ForConditionalGeneration` into those 2 models won't work.
- loading `Kosmos2ForConditionalGeneration` into `Kosmos2Model` works.
- (and therefore no corresponding tests for those 2 models)
TODO (follow-up PRs):
- [ ] add a checkpoint conversion script in a follow-up PR. (It's there, I just need to clean up the messy code.)
- [ ] upload checkpoint to `microsoft` and change the used checkpoint repo id.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24709/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24709/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24709",
"html_url": "https://github.com/huggingface/transformers/pull/24709",
"diff_url": "https://github.com/huggingface/transformers/pull/24709.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24709.patch",
"merged_at": 1698669137000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24708
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24708/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24708/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24708/events
|
https://github.com/huggingface/transformers/issues/24708
| 1,793,215,516 |
I_kwDOCUB6oc5q4kwc
| 24,708 |
Resume from checkpoint on fused AdamW raises device errors
|
{
"login": "ideasbyjin",
"id": 35487240,
"node_id": "MDQ6VXNlcjM1NDg3MjQw",
"avatar_url": "https://avatars.githubusercontent.com/u/35487240?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ideasbyjin",
"html_url": "https://github.com/ideasbyjin",
"followers_url": "https://api.github.com/users/ideasbyjin/followers",
"following_url": "https://api.github.com/users/ideasbyjin/following{/other_user}",
"gists_url": "https://api.github.com/users/ideasbyjin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ideasbyjin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ideasbyjin/subscriptions",
"organizations_url": "https://api.github.com/users/ideasbyjin/orgs",
"repos_url": "https://api.github.com/users/ideasbyjin/repos",
"events_url": "https://api.github.com/users/ideasbyjin/events{/privacy}",
"received_events_url": "https://api.github.com/users/ideasbyjin/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @ideasbyjin, thanks for raising this issue. \r\n\r\nIn order for us to be able to help, we'll need a minimal code snippet for us to be able to reproduce the error.\r\n\r\nCould you provide us with some more information on the running environment: run `transformers-cli env` in the terminal and copy-paste the output? \r\n\r\nHave you run tried training and resuming from checkpoint with a different optimizer than `adamw_torch_fused`? Was it successful?",
"Hi @amyeroberts! Yep, if I use `adamw_torch` then it seems to train & resume perfectly OK, so I take it it's the fused implementation that's raising issues. \r\n\r\n```\r\n- `transformers` version: 4.30.1\r\n- Platform: Linux-5.15.0-1019-aws-x86_64-with-glibc2.35\r\n- Python version: 3.10.6\r\n- Huggingface_hub version: 0.15.1\r\n- Safetensors version: 0.3.1\r\n- PyTorch version (GPU?): 2.0.0 (True)\r\n- Tensorflow version (GPU?): not installed (NA)\r\n- Flax version (CPU?/GPU?/TPU?): not installed (NA)\r\n- Jax version: not installed\r\n- JaxLib version: not installed\r\n- Using GPU in script?: True\r\n- Using distributed or parallel set-up in script?: DDP\r\n```",
"Also, I can't provide a more comprehensive example as it's private at this stage, but the gist is that it's running the `T5ForConditionalGeneration` model. \r\n\r\nAgain, with standard `adamw_torch` it seems to train/resume with 0 issues. When running with `adamw_torch_fused` , it clearly detects the correct checkpoint and loads the optimizer state (see below) but I can't really pin down where there might be device discrepancies that's upsetting the optimizer\r\n\r\n```\r\nCurrently training with a batch size of: 256\r\n***** Running training *****\r\n Num examples = 14,805,069\r\n Num Epochs = 1\r\n Instantaneous batch size per device = 256\r\n Total train batch size (w. parallel, distributed & accumulation) = 256\r\n Gradient Accumulation steps = 1\r\n Total optimization steps = 100\r\n Number of trainable parameters = 8,311,296\r\n Continuing training from checkpoint, will skip to saved global_step\r\n Continuing training from epoch 0\r\n Continuing training from global step 20\r\n Will skip the first 0 epochs then the first 20 batches in the first epoch.\r\n 0%| | 0/100 [00:00<?, ?it/s]\r\n```",
"@ideasbyjin OK, interesting. Could you try updating the pytorch version? \r\n\r\nThere were some know issues with fused AdamW and fp16 - #22144 - which _should_ have been resolved in 2.0.1.",
"Thanks @amyeroberts , no avail on using PyTorch 2.0.1 though.\r\n\r\n```\r\n- `transformers` version: 4.30.1\r\n- Platform: Linux-5.15.0-1019-aws-x86_64-with-glibc2.31\r\n- Python version: 3.10.12\r\n- Huggingface_hub version: 0.16.3\r\n- Safetensors version: 0.3.1\r\n- PyTorch version (GPU?): 2.0.1+cu118 (True)\r\n- Tensorflow version (GPU?): not installed (NA)\r\n- Flax version (CPU?/GPU?/TPU?): not installed (NA)\r\n- Jax version: not installed\r\n- JaxLib version: not installed\r\n- Using GPU in script?: True\r\n- Using distributed or parallel set-up in script?: DDP\r\n```\r\n\r\n```\r\nWill skip the first 0 epochs then the first 20 batches in the first epoch.\r\n 0%| | 0/100 [00:00<?, ?it/s]Traceback (most recent call last):\r\n\r\n trainer.train(resume_from_checkpoint=True)\r\n\r\n File \"/.../.conda/envs/pytorch2.0.1/lib/python3.10/site-packages/transformers/trainer.py\", line 1645, in train\r\n return inner_training_loop(\r\n\r\n File \"/.../.conda/envs/pytorch2.0.1/lib/python3.10/site-packages/transformers/trainer.py\", line 2007, in _inner_training_loop\r\n self.optimizer.step()\r\n\r\n File \"/.../.conda/envs/pytorch2.0.1/lib/python3.10/site-packages/accelerate/optimizer.py\", line 140, in step\r\n self.optimizer.step(closure)\r\n\r\n File \"/.../conda/envs/pytorch2.0.1/lib/python3.10/site-packages/torch/optim/lr_scheduler.py\", line 69, in wrapper\r\n return wrapped(*args, **kwargs)\r\n\r\n File \"/.../.conda/envs/pytorch2.0.1/lib/python3.10/site-packages/torch/optim/optimizer.py\", line 280, in wrapper\r\n out = func(*args, **kwargs)\r\n\r\n File \"/.../.conda/envs/pytorch2.0.1/lib/python3.10/site-packages/torch/optim/optimizer.py\", line 33, in _use_grad\r\n ret = func(self, *args, **kwargs)\r\n\r\n File \"/.../.conda/envs/pytorch2.0.1/lib/python3.10/site-packages/torch/optim/adamw.py\", line 171, in step\r\n adamw(\r\n\r\n File \"/.../.conda/envs/pytorch2.0.1/lib/python3.10/site-packages/torch/optim/adamw.py\", line 321, in adamw\r\n func(\r\n\r\n File \"/.../.conda/envs/pytorch2.0.1/lib/python3.10/site-packages/torch/optim/adamw.py\", line 615, in _fused_adamw\r\n torch._fused_adamw_(\r\n\r\nRuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument state_steps in method wrapper_CUDA___fused_adamw_)\r\n```",
"Right, I think I discovered the issue, and it's a bit to do with both PyTorch and HF. I'll explain:\r\n\r\nThis first starts with how the HF Trainer reloads an optimizer from a checkpoint,\r\nhttps://github.com/huggingface/transformers/blob/v4.30.1/src/transformers/trainer.py#L2542\r\n\r\nSince the optimizer's states are loaded onto `cpu` (in the case of a single worker, even with multiple GPUs I think?) then when you come to spinning up the fused AdamW optimizer,\r\nhttps://github.com/pytorch/pytorch/blob/v2.0.1/torch/optim/adamw.py#L614\r\n\r\n`device_state_dict` has its Tensors in `cpu` even though all others are in `cuda`, so it raises the error!\r\n\r\nI found that either deliberately loading the optimizer states into `cuda` from the `Trainer`, or modifying the `torch.optim.AdamW` code to shift everything to `cuda` did the trick, though I feel like the fix on HF's end is a bit more elegant. \r\n\r\nPerhaps there's argument for changing the `map_location` of the optimizer states, especially in a scenario where we have multiple GPUs on one worker? I'll leave this to your judgment on how to navigate/fix though."
] | 1,688 | 1,688 | 1,688 |
NONE
| null |
### System Info
Hi, I'm trying to resume training of a very minimal T5 model after some pre-training.
* transformers==4.30.1
* accelerate==0.20.3
* datasets==2.12.0
I'm using `Trainer`'s built-in `resume_from_checkpoint` argument, and I get the following error message:
```RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument state_steps in method wrapper_CUDA___fused_adamw_)```
The model trains fine, so I don't think there's anything wrong with the model training code per se.
### Who can help?
@sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```python
model = T5ForConditionalGeneration(hf_model_config)
tokenizer = load_tokenizer(tokenizer_path, max_length=max_position_embeddings)
training_args = TrainingArguments(
...
bf16=True,
optim='adamw_torch_fused',
...
)
trainer = Trainer(
model=model,
...
)
trainer.train(resume_from_checkpoint=True)
```
### Expected behavior
Resuming from my latest checkpoint
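For anyone hitting this, here is a minimal workaround sketch along the lines of what I describe in the comments above: move the reloaded optimizer state onto the training device before the first fused step. `move_optimizer_state_to_device` is a hypothetical helper written for this sketch, not part of `Trainer`.
```python
import torch


def move_optimizer_state_to_device(optimizer, device):
    # The fused AdamW kernel expects every state tensor (exp_avg, exp_avg_sq,
    # step, ...) to live on the same device as the parameters it updates.
    for state in optimizer.state.values():
        for key, value in state.items():
            if torch.is_tensor(value):
                state[key] = value.to(device)


# Usage sketch: call this right after the checkpointed optimizer state has been
# reloaded and before the first optimizer.step(), e.g.
# move_optimizer_state_to_device(trainer.optimizer, torch.device("cuda:0"))
```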
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24708/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24708/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24707
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24707/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24707/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24707/events
|
https://github.com/huggingface/transformers/issues/24707
| 1,792,991,372 |
I_kwDOCUB6oc5q3uCM
| 24,707 |
decoder_kwargs are not passed over to AutomaticSpeechRecognitionPipeline.tokenizer.decode
|
{
"login": "devxpy",
"id": 19492893,
"node_id": "MDQ6VXNlcjE5NDkyODkz",
"avatar_url": "https://avatars.githubusercontent.com/u/19492893?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/devxpy",
"html_url": "https://github.com/devxpy",
"followers_url": "https://api.github.com/users/devxpy/followers",
"following_url": "https://api.github.com/users/devxpy/following{/other_user}",
"gists_url": "https://api.github.com/users/devxpy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/devxpy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/devxpy/subscriptions",
"organizations_url": "https://api.github.com/users/devxpy/orgs",
"repos_url": "https://api.github.com/users/devxpy/repos",
"events_url": "https://api.github.com/users/devxpy/events{/privacy}",
"received_events_url": "https://api.github.com/users/devxpy/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"From https://github.com/huggingface/transformers/blob/fb78769b9c053876ed7ae152ee995b0439a4462a/src/transformers/pipelines/automatic_speech_recognition.py#L549-L552\r\n\r\nThat `decoder_kwargs` is for an instance of `BeamSearchDecoderCTC` and not a tokenizer.\r\n\r\nBut cc @ArthurZucker to see if he has more words to say.\r\n",
"I guess we can have `tokenizer_decoder_kwargs` too :)",
"Yep, pipeline does not support `**tokenizer_kwargs` yet. This has been talked about in #22995 and #12039. \r\nI am in for `tokenizer_kwargs`, not `tokenizer_decoder_kwargs`. We need to as less specific as possible. \r\n\r\nA con is that having lots of kwargs is hard to maintain and we are trying to get away from this. The tokenizer class can save the `_init_kwargs` which contains the last parameters with which the tokenizer was called. You can set them and pass the tokenizer to the pipeline ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,688 | 1,692 | 1,692 |
NONE
| null |
### Feature request
The `postprocess()` function here
https://github.com/huggingface/transformers/blob/fb78769b9c053876ed7ae152ee995b0439a4462a/src/transformers/pipelines/automatic_speech_recognition.py#L492-L494
should pass the `decoder_kwargs` it receives down to the decoder
https://github.com/huggingface/transformers/blob/fb78769b9c053876ed7ae152ee995b0439a4462a/src/transformers/pipelines/automatic_speech_recognition.py#L563
### Motivation
Sometimes the decoder will output special tokens - https://github.com/huggingface/transformers/issues/15275 - and there's no way to pass `skip_special_tokens=True` to the decoder
https://github.com/huggingface/transformers/blob/fb78769b9c053876ed7ae152ee995b0439a4462a/src/transformers/models/wav2vec2/tokenization_wav2vec2.py#L407-L416
### Your contribution
I can submit a PR
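In the meantime, a user-side workaround sketch (assuming a CTC checkpoint such as `facebook/wav2vec2-base-960h` and a 16 kHz waveform already loaded into `audio_array`): skip the pipeline's `postprocess` step and decode the logits directly, so `skip_special_tokens` can be passed explicitly.
```python
import torch
from transformers import AutoModelForCTC, AutoProcessor

processor = AutoProcessor.from_pretrained("facebook/wav2vec2-base-960h")
model = AutoModelForCTC.from_pretrained("facebook/wav2vec2-base-960h")

# audio_array: a 1-D float waveform sampled at 16 kHz (assumed to exist here)
inputs = processor(audio_array, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

predicted_ids = torch.argmax(logits, dim=-1)
# batch_decode forwards kwargs to the tokenizer, so special tokens can be dropped
transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)
```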
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24707/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24707/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24706
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24706/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24706/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24706/events
|
https://github.com/huggingface/transformers/pull/24706
| 1,792,988,702 |
PR_kwDOCUB6oc5U4zuX
| 24,706 |
Fix flaky `test_for_warning_if_padding_and_no_attention_mask`
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,688 | 1,688 | 1,688 |
COLLABORATOR
| null |
# What does this PR do?
This is about the test from #24510. It has failed a few times (on my PRs, and once on someone else's PR), and now it fails on the latest daily CI.
This test is flaky because it tests the functionality of `warn_if_padding_and_no_attention_mask`, which uses `logger.warning_once(warn_string)` - and `warning_once` is decorated with `@functools.lru_cache(None)`.
If any test triggers this warning before `test_for_warning_if_padding_and_no_attention_mask` runs, the message is already cached, so the expected warning is not emitted in `test_for_warning_if_padding_and_no_attention_mask` and the test fails.
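To illustrate the flakiness, a minimal sketch assuming `warning_once` is simply an `lru_cache`-wrapped call to `logger.warning`, as described above (the message string below is a placeholder, not the library's exact wording):
```python
import functools
import logging

logging.basicConfig(level=logging.WARNING)
logger = logging.getLogger("demo")


@functools.lru_cache(None)
def warning_once(message):
    logger.warning(message)


warning_once("padding detected without an attention_mask")  # emitted
warning_once("padding detected without an attention_mask")  # cache hit -> silent

# A test that must observe the warning can reset the cache first:
warning_once.cache_clear()
warning_once("padding detected without an attention_mask")  # emitted again
```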
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24706/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24706/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24706",
"html_url": "https://github.com/huggingface/transformers/pull/24706",
"diff_url": "https://github.com/huggingface/transformers/pull/24706.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24706.patch",
"merged_at": 1688723722000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24705
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24705/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24705/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24705/events
|
https://github.com/huggingface/transformers/issues/24705
| 1,792,801,423 |
I_kwDOCUB6oc5q2_qP
| 24,705 |
elif self.fsdp is not None and self.args.fsdp_config["xla"]:
|
{
"login": "duanzhenyu001",
"id": 103398099,
"node_id": "U_kgDOBim60w",
"avatar_url": "https://avatars.githubusercontent.com/u/103398099?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/duanzhenyu001",
"html_url": "https://github.com/duanzhenyu001",
"followers_url": "https://api.github.com/users/duanzhenyu001/followers",
"following_url": "https://api.github.com/users/duanzhenyu001/following{/other_user}",
"gists_url": "https://api.github.com/users/duanzhenyu001/gists{/gist_id}",
"starred_url": "https://api.github.com/users/duanzhenyu001/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/duanzhenyu001/subscriptions",
"organizations_url": "https://api.github.com/users/duanzhenyu001/orgs",
"repos_url": "https://api.github.com/users/duanzhenyu001/repos",
"events_url": "https://api.github.com/users/duanzhenyu001/events{/privacy}",
"received_events_url": "https://api.github.com/users/duanzhenyu001/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 5616426447,
"node_id": "LA_kwDOCUB6oc8AAAABTsPdzw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/solved",
"name": "solved",
"color": "B1D6DC",
"default": false,
"description": ""
}
] |
closed
| false | null |
[] |
[
"Hi @duanzhenyu001, thanks for raising an issue! \r\n\r\nThis is a question best placed in our [forums](https://discuss.huggingface.co/). We try to reserve the github issues for feature requests and bug reports.\r\n\r\nThat being said, checking the docs it's possible to see that [XLA supports FDSP](https://huggingface.co/docs/transformers/main/main_classes/trainer#pytorchxla-fully-sharded-data-parallel), but [FDSP can be used separately](https://huggingface.co/docs/transformers/main/main_classes/trainer#pytorch-fully-sharded-data-parallel). If you wish to use XLA with FDSP, it's necessary to install torch-xla >= 2.0. ",
"@amyeroberts thanks a lot for your reply. when I use huggingface trainer fsdp mode finetune a 6B model,I got oom even 8 V100-32G gpu used. I didn't find where trainer shard my model when I use fsdp without xla. could you please point it out in the trainer code. thanks very much. ",
"@amyeroberts and here is my TrainingArguments when I finetune glm2 a 6B model.\r\nTrainingArguments(\r\n_n_gpu=1,\r\nadafactor=False,\r\nadam_beta1=0.9,\r\nadam_beta2=0.999,\r\nadam_epsilon=1e-08,\r\nauto_find_batch_size=False,\r\nbf16=False,\r\nbf16_full_eval=False,\r\ndata_seed=None,\r\ndataloader_drop_last=False,\r\ndataloader_num_workers=0,\r\ndataloader_pin_memory=True,\r\nddp_backend=None,\r\nddp_bucket_cap_mb=None,\r\nddp_find_unused_parameters=None,\r\nddp_timeout=1800,\r\ndebug=[],\r\ndeepspeed=None,\r\ndisable_tqdm=False,\r\ndo_eval=True,\r\ndo_predict=False,\r\ndo_train=True,\r\neval_accumulation_steps=64,\r\neval_delay=0,\r\neval_steps=None,\r\nevaluation_strategy=epoch,\r\nfp16=True,\r\nfp16_backend=auto,\r\nfp16_full_eval=False,\r\nfp16_opt_level=O1,\r\nfsdp=full_shard auto_wrap,\r\nfsdp_config={'fsdp_forward_prefetch': True, 'fsdp_sync_module_states': True, 'fsdp_use_orig_params': True, 'xla': False, 'fsdp_transformer_layer_cls_to_wrap': 'GLMBlock'},\r\nfsdp_min_num_params=0,\r\nfsdp_transformer_layer_cls_to_wrap=GLMBlock,\r\nfull_determinism=False,\r\ngradient_accumulation_steps=128,\r\ngradient_checkpointing=False,\r\ngreater_is_better=None,\r\ngroup_by_length=False,\r\nhalf_precision_backend=auto,\r\nhub_model_id=None,\r\nhub_private_repo=False,\r\nhub_strategy=every_save,\r\nhub_token=<HUB_TOKEN>,\r\nignore_data_skip=False,\r\ninclude_inputs_for_metrics=False,\r\njit_mode_eval=False,\r\nlabel_names=None,\r\nlabel_smoothing_factor=0.0,\r\nlearning_rate=2e-05,\r\nlength_column_name=length,\r\nload_best_model_at_end=False,\r\nlocal_rank=1,\r\nlog_level=debug,\r\nlog_level_replica=warning,\r\nlog_on_each_node=True,\r\nlogging_dir=/mnt/bn/mods-llm/duanzhenyu/llm/ds_llm_sft/data/models/finetune/runs/Jul14_06-57-19_mlxlab25ta1apm6482c909-20230609063905-cqjk4l-u3z93n-worker,\r\nlogging_first_step=False,\r\nlogging_nan_inf_filter=True,\r\nlogging_steps=1,\r\nlogging_strategy=steps,\r\nlr_scheduler_type=cosine,\r\nmax_grad_norm=None,\r\nmax_steps=-1,\r\nmetric_for_best_model=None,\r\nmp_parameters=,\r\nno_cuda=False,\r\nnum_train_epochs=2,\r\noptim=adamw_torch,\r\noptim_args=None,\r\noutput_dir=/mnt/bn/mods-llm/duanzhenyu/llm/ds_llm_sft/data/models/finetune,\r\noverwrite_output_dir=False,\r\npast_index=-1,\r\nper_device_eval_batch_size=1,\r\nper_device_train_batch_size=1,\r\nprediction_loss_only=False,\r\npush_to_hub=False,\r\npush_to_hub_model_id=None,\r\npush_to_hub_organization=None,\r\npush_to_hub_token=<PUSH_TO_HUB_TOKEN>,\r\nray_scope=last,\r\nremove_unused_columns=True,\r\nreport_to=['tensorboard'],\r\nresume_from_checkpoint=None,\r\nrun_name=/mnt/bn/mods-llm/duanzhenyu/llm/ds_llm_sft/data/models/finetune,\r\nsave_on_each_node=False,\r\nsave_safetensors=False,\r\nsave_steps=50,\r\nsave_strategy=steps,\r\nsave_total_limit=1,\r\nseed=42,\r\nsharded_ddp=[],\r\nskip_memory_metrics=True,\r\ntf32=None,\r\ntorch_compile=False,\r\ntorch_compile_backend=None,\r\ntorch_compile_mode=None,\r\ntorchdynamo=None,\r\ntpu_metrics_debug=False,\r\ntpu_num_cores=None,\r\nuse_ipex=False,\r\nuse_legacy_prediction_loop=False,\r\nuse_mps_device=False,\r\nwarmup_ratio=0.0,\r\nwarmup_steps=1000,\r\nweight_decay=0.1,\r\nxpu_backend=None,\r\n)",
"@amyeroberts and at function \"create_accelerator_and_postprocess\" , \r\n self.accelerator = Accelerator(\r\n deepspeed_plugin=self.args.deepspeed_plugin,\r\n gradient_accumulation_steps=self.args.gradient_accumulation_steps,\r\n )\r\nself.accelerator always be inited, if i don't set os env ACCELERATE_USE_FSDP to be true, \"self.is_fsdp_enabled = getattr(self.accelerator.state, \"fsdp_plugin\", None) is not None\" self.is_fsdp_enabled will always be false, is this what you realy wanted? and I'm confused the intention of this function.",
"cc @pacman100 who will be able to comment on the OOM memory issues and whether this is expected. \r\n\r\nPlease note that recently there was a large update with Trainer, and it now uses accelerate in the background. In the issue information, I see that you're using v4.24. I suggest updating to a more recent release to benefit from this update and any bug resolutions in between. ",
"> \r\n\r\nI'm now using v4.30.2,but still get oom memory errors when training on 8 V100-32g GPUs. @pacman100 can some one give any suggestions? thanks a lot",
"Hello @duanzhenyu001, how are you launching the training script and a minimal training script for deep dive is required",
"The following works on 4 A100 80GB GPUs with GPT-j (6B model), although the entire VRAM is occupied on each:\r\n\r\n```\r\ncd transformers/examples/pytorch/language-modeling\r\n\r\ntorchrun --nnodes 1 --nproc-per-node 4 run_clm.py --model_name_or_path EleutherAI/gpt-j-6b --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 --per_device_train_batch_size 1 --per_device_eval_batch_size 1 --do_train --do_eval --output_dir /tmp/test-clm --gradient_accumulation_steps 8 --overwrite_output_dir --fsdp \"full_shard auto_wrap\" --fsdp_transformer_layer_cls_to_wrap \"GPTJBlock\" --bf16\r\n```\r\n\r\nThe reason for OOM at your end is plausible due to large seq lengths such as >=1024. In such cases, gradient/activation checkpointing is recommended with `--gradient_checkpointing`.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,688 | 1,692 | 1,692 |
NONE
| null |
### System Info
- `transformers` version: 4.24.0
- Platform: Linux-5.4.143.bsk.8-amd64-x86_64-with-glibc2.28
- Python version: 3.10.9
- Huggingface_hub version: 0.10.1
- PyTorch version (GPU?): 1.12.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@sgugger
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
In the Trainer, if `self.args.fsdp_config["xla"]` is true, the trainer will wrap the model layers. Does this mean that if I want to use FSDP to shard my model, I must install torch-xla > 2.0?
### Expected behavior
I want to know whether I must install torch-xla for FSDP training.
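For context, here is a minimal sketch of the configuration I would expect to work without torch-xla (assuming a recent transformers release where `fsdp_config` accepts a dict; `GLMBlock` is the wrap class from my GLM model and is an assumption for other architectures):
```python
from transformers import TrainingArguments

# Expectation: the XLA branch is only taken when fsdp_config sets "xla": True,
# so plain PyTorch FSDP sharding should not need torch-xla here.
training_args = TrainingArguments(
    output_dir="out",
    per_device_train_batch_size=1,
    fsdp="full_shard auto_wrap",
    fsdp_config={
        "xla": False,
        "fsdp_transformer_layer_cls_to_wrap": "GLMBlock",  # assumption: GLM-style model
    },
)
```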
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24705/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24705/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24704
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24704/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24704/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24704/events
|
https://github.com/huggingface/transformers/issues/24704
| 1,792,801,016 |
I_kwDOCUB6oc5q2_j4
| 24,704 |
bug for from_pretrained method with ignore_mismatched_sizes=True
|
{
"login": "Hannibal046",
"id": 38466901,
"node_id": "MDQ6VXNlcjM4NDY2OTAx",
"avatar_url": "https://avatars.githubusercontent.com/u/38466901?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Hannibal046",
"html_url": "https://github.com/Hannibal046",
"followers_url": "https://api.github.com/users/Hannibal046/followers",
"following_url": "https://api.github.com/users/Hannibal046/following{/other_user}",
"gists_url": "https://api.github.com/users/Hannibal046/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Hannibal046/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Hannibal046/subscriptions",
"organizations_url": "https://api.github.com/users/Hannibal046/orgs",
"repos_url": "https://api.github.com/users/Hannibal046/repos",
"events_url": "https://api.github.com/users/Hannibal046/events{/privacy}",
"received_events_url": "https://api.github.com/users/Hannibal046/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hey 🤗 ! Thanks for reporting, this indeed a bug! I tracked this down to #24505 (this [commit](https://github.com/huggingface/transformers/commit/8e5d1619b3e57367701d74647e87b95f8dba5409)). Will probably let @sgugger handle, I don't have bandwidth to solve it right now! ",
"No, this has no link to #24505, the code sample also fails on v4.30.2 and looking at the fix, it never worked before. The PR linked above should fix it.",
"I tested with `pip install -q transformers==4.30.2` and it worked fine ( no error but maybe no resize?), same for previous versions. When checking out commit after comit, this was the failing one but I think I missed something! "
] | 1,688 | 1,689 | 1,689 |
NONE
| null |
### System Info
- `transformers` version: 4.31.0.dev0
- Platform: Linux-5.15.0-1040-azure-x86_64-with-glibc2.17
- Python version: 3.8.16
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
@ArthurZucker @sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
When setting `ignore_mismatched_sizes=True` in the `from_pretrained` method, it gives errors.
```python
from transformers import AutoModelForCausalLM
model_type = "facebook/opt-6.7b"
model = AutoModelForCausalLM.from_pretrained(model_type,max_position_embeddings=4096,
ignore_mismatched_sizes=True)
```
The tracebacks are:
<img width="987" alt="image" src="https://github.com/huggingface/transformers/assets/38466901/ce67a66b-ed9a-42fe-871d-eb1ffe348da0">
However, this doesn't happen with smaller models: simply changing `model_type` to `facebook/opt-2.7b` works fine. The bug is also not specific to `OptModel`; `EleutherAI/pythia-6.9b` triggers the error as well.
The traceback for `EleutherAI/pythia-6.9b` is:
<img width="882" alt="image" src="https://github.com/huggingface/transformers/assets/38466901/7b3b3bea-0a03-4e5c-aef1-b386bff11be3">
### Expected behavior
loading weights successfully without error.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24704/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24704/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24703
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24703/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24703/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24703/events
|
https://github.com/huggingface/transformers/pull/24703
| 1,792,723,158 |
PR_kwDOCUB6oc5U35-2
| 24,703 |
Suppress warnings from LUKE for unexpected keys
|
{
"login": "ryokan0123",
"id": 17979572,
"node_id": "MDQ6VXNlcjE3OTc5NTcy",
"avatar_url": "https://avatars.githubusercontent.com/u/17979572?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ryokan0123",
"html_url": "https://github.com/ryokan0123",
"followers_url": "https://api.github.com/users/ryokan0123/followers",
"following_url": "https://api.github.com/users/ryokan0123/following{/other_user}",
"gists_url": "https://api.github.com/users/ryokan0123/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ryokan0123/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ryokan0123/subscriptions",
"organizations_url": "https://api.github.com/users/ryokan0123/orgs",
"repos_url": "https://api.github.com/users/ryokan0123/repos",
"events_url": "https://api.github.com/users/ryokan0123/events{/privacy}",
"received_events_url": "https://api.github.com/users/ryokan0123/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"I believe this should not be done this way. These keys should be used only if the default behavior in the modeling code will have different keys than the canonical (original) checkpoints on the Hub.\r\n\r\nBut before further discussion, let's check one thing first:\r\n\r\nthe config in `studio-ousia/mluke-base-lite` has \r\n\r\n```bash\r\nuse_entity_aware_attention\": true,\r\n```\r\n\r\nAre you sure this is the checkpoint that causes confusion ..?",
"~~My wording above is not precise. I will update that comment.~~\r\n\r\nThese keys should be used only if:\r\n\r\n- a model loading from a checkpoint that is saved with `from_pretrained` (without changing the config during loading) will have some unexpected weight keys.\r\n- a HF checkpoint is created that has some extra keys (in order to respect the original non-HF checkpoint) which is not really used in the model (and the HF modeling code is written to avoid having such un-used keys)\r\n\r\n",
"I have run \r\n\r\n```python\r\nfrom transformers import AutoModel\r\nmodel = transformers.AutoModel.from_pretrained(\"studio-ousia/mluke-base-lite\")\r\n```\r\nbut didn't receive any warning.",
"Thanks @ydshieh for taking a look for the PR!\r\n\r\n> Are you sure this is the checkpoint that causes confusion ..?\r\n\r\nWhen I look at the latest version of the config on the following models, I find `\"use_entity_aware_attention\": false`.\r\nhttps://huggingface.co/studio-ousia/mluke-base-lite/blob/3775c9b1470636e206c38cbb1b964ba883421164/config.json#L33\r\n\r\n> but didn't receive any warning.\r\n\r\nThe following Google Colabo notebook shows the warning.\r\nhttps://colab.research.google.com/drive/1kYN3eGhx5tzEMnGkUz2jPsdmFyEBwxFA?usp=sharing\r\nProbably it depends on some logging settings given by the environment, but it does show the warnings in some cases.\r\n\r\n",
"> These keys should be used only if:\r\n> - a model loading from a checkpoint that is saved with from_pretrained (without changing the config during loading) will have some unexpected weight keys.\r\n> - a HF checkpoint is created that has some extra keys (in order to respect the original non-HF checkpoint) which is not really used in the model (and the HF modeling code is written to avoid having such un-used keys)\r\n\r\nI believe that this PR is similar to the second point mentioned above.\r\n\r\nThe HF checkpoint is derived from the original checkpoint generated by the [original repository](https://github.com/studio-ousia/luke). The checkpoint contains additional keys (`luke.encoder.layer.*.attention.self.*_query.*`), which are only utilized when the entity-aware attention mechanism is enabled during fine-tuning.\r\nEntity-aware attention is an optional feature and is disabled by default, because that is the setting used in the [original paper](https://aclanthology.org/2022.acl-long.505/).\r\n\r\nI would like to address the problem of the confusing and overwhelming warnings even when it is the default behavior.\r\nI would appreciate your further elaboration on why this cannot be addressed using `_keys_to_ignore_on_load_unexpected`, or any alternative solutions you might have in mind.",
"OK I see. We have to use `LukeForMaskedLM` or `AutoModelForMaskedLM` to see the warning.",
"We can't change these kinds of keys due to a Hub model repo. author uploading problematic weights/config file.\r\nYou can ask the author to correct (cleanup) the model weights and re-upload.\r\n\r\nIf we change in the way like done in this PR, we won't have any warning when a real problem occurs, and the bugs won't be detected.",
"> The HF checkpoint is derived from the original checkpoint generated by the [original repository](https://github.com/studio-ousia/luke). The checkpoint contains additional keys (luke.encoder.layer.*.attention.self.*_query.*), which are only utilized when the entity-aware attention mechanism is enabled during fine-tuning.\r\n\r\nI didn't check the original repo. (which is not me adding that model into `transformers`). But the Hub repo like [luke-base](https://huggingface.co/studio-ousia/luke-base/blob/main/config.json) has \r\n```bash\r\nuse_entity_aware_attention\": true,\r\n```\r\nAlso, the default value in `LukeConfig.__init__` is `True.`",
"Let me share more context on this problem.\r\n\r\nThe weights uploaded on the HF repo are supposed to work either when `use_entity_aware_attention` is `True` or `False` and the config files just specify the default value.\r\nThe warnings are raised as expected currently, but I want to suppress the warnings as the correct behavior.\r\n\r\nI am from the same group of the author of LukeModel and the HF weights are uploaded by me, so I am sure that it follows the intention of the original model.\r\n\r\nIn summary, when some weights should be ignored as the correct behavior, what is the right way to handle that?",
"> If we change in the way like done in this PR, we won't have any warning when a real problem occurs, and the bugs won't be detected.\r\n\r\nI understand that this is a risk, but couldn't that be mitigated by specifying the correct regex?",
"The problem here is the config and the model weight on the hub has inconsistent values. If the model is created with that value set to false, there would not have those extra keys in the model.\r\n\r\nIt is unclear how the Hub author ends up with sich inconsistency. The fix should happen there.\r\n\r\nHope this explanation makes things clear.\r\n\r\nBut thank you for your willingness to fix and help transformers better ❤️",
"I believe there is still some misunderstanding.\r\n\r\n> The problem here is the config and the model weight on the hub has inconsistent values.\r\n\r\nThe inconsistency is intended as having optional extra weights is a part of the model features.\r\nUsers can either choose to use the extra weights or not.\r\n\r\n> If the model is created with that value set to false, there would not have those extra keys in the model.\r\n\r\nThose extra keys (weights) are optional.\r\nEven though the model has `use_entity_aware_attention=False` by default, we'd like to give users an option to enable `use_entity_aware_attention=True` to check the effect.",
"To be clearer, the extra weights are in this part.\r\nhttps://github.com/huggingface/transformers/blob/abaca9f9432a84cfaa95531de4c72334f38a42f2/src/transformers/models/luke/modeling_luke.py#L523-L526\r\n\r\nThese weights are NOT used in pretraining time, but can be optionally introduced at the fine-tuning time.\r\nFor users to be able to freely choose between the options, the weights should include the extra weights but it causes unnecessary warnings when `use_entity_aware_attention = False`...",
"I apologize for any confusion caused by my previous explanation, but I would like to request @NielsRogge's opinion on how to handle these warnings. He helped introduce LUKE in transformers.",
"> These weights are NOT used in pretraining time,\r\n\r\nSo those weights are not even trained during pretraining time ..? I am a bit confused here. Or it's trained for Luke but not mLuke?\r\n\r\n> These weights are NOT used in pretraining time, but can be optionally introduced at the fine-tuning time.\r\nFor users to be able to freely choose between the options, the weights should include the extra weights\r\n\r\nIn this case, the original model weights (the checkpoint on the Hub repo `studio-ousia/mluke-base-lite`) should not include those extra weights (which is the opposite currently), and config should have `use_entity_aware_attention=False` (which is currently).\r\n\r\n- **When a user want to fine-tune with the option** with `use_entity_aware_attention`, it can load the checkpoint with this set to `True` **at runtime**: then the model will have these extra weights at runtime (but with different warning saying some weights are randomly initialized).\r\n\r\nI am wondering what prevents you to remove those extra weights on `studio-ousia/mluke-base-lite` if it is never used.\r\n",
"Thank you for your patience.\r\nI know the model is doing something unusual...\r\n\r\n#### What is entity-aware attention?\r\nLUKE and mLUKE take word tokens as well as entity tokens.\r\nAt pretraining time, they undergo the computation of self attention (token-to-token attention) equally.\r\n\r\nAt fine-tuning time, we can optionally add entity-aware attention.\r\nThis mechanism uses different attention weights for word-to-word, word-to-entity, entity-to-word, and entity-to-entity tokens.\r\nThe weights for these different types of attention are initialized by **copying the token-to-token attention obtained during pretraining**.\r\nThis is done by the following lines of the conversion script.\r\nhttps://github.com/huggingface/transformers/blob/abaca9f9432a84cfaa95531de4c72334f38a42f2/src/transformers/models/luke/convert_luke_original_pytorch_checkpoint_to_pytorch.py#L61-L67\r\n\r\nSo, the checkpoints include these copied weights regardless of whether users enable entity-aware attention at fine-tuning time.\r\nAlso this is the reason why we do not want to initialize the new weights randomly.\r\n\r\n\r\n> So those weights are not even trained during pretraining time ..? I am a bit confused here. Or it's trained for Luke but not mLuke?\r\n\r\nBoth LUKE and mLUKE are pretrained without entity-aware attention, but they can still use entity-aware attention by initializing new weights with the corresponding pretrained ones.\r\n\r\n\r\n#### Why is the default value of `use_entity_aware_attention` different in LUKE and mLUKE? \r\n\r\nWe set the default value to be consistent with the original papers that proposed each model.\r\n[LUKE](https://aclanthology.org/2020.emnlp-main.523/) uses entity-aware attention because it performs better in monolingual settings, but [mLUKE](https://aclanthology.org/2022.acl-long.505/) does not as it did not give consistent gains in cross-lingual tasks.\r\n\r\n\r\n> I am wondering what prevents you to remove those extra weights on studio-ousia/mluke-base-lite if it is never used.\r\n\r\nAlthough we set the default value of `use_entity_aware_attention` to be `False` in `studio-ousia/mluke-base-lite`, we still want to allow users to try if entity-aware attention is useful in their own settings.\r\n\r\nHowever as reported in the PR description, some users find the warning confusing...\r\nSo we would like to remove this confusion.\r\n\r\nPerhaps there are alternative approaches to achieve this goal other than setting `_keys_to_ignore_on_load_unexpected` such as \r\n- redefining the behavior of the initialization of `LukeModel` so that it copies the token-to-token attention weights with the weights of entity-aware attention missing in the checkpoint but `use_entity_aware_attention=True`. Then we can remove the copied weights from the checkpoints.\r\n- adding more detailed warning messages on what the ignored weights mean.\r\n\r\nI would greatly appreciate any advice!",
"Hi @ryokan0123 . Thank you for the detailed information. Looking the following 3 points you mentioned:\r\n\r\nTo make sure, is those extra weights in `studio-ousia/mluke-base-lite` are neither pretrained (yes as you mentioned) nor fine-tuned. If this is the case: \r\n\r\n\r\n\r\n> 1. Both LUKE and mLUKE are pretrained without entity-aware attention\r\n\r\n> 2. by initializing new weights with the corresponding pretrained ones.\r\n\r\n> 3. (Although we set the default value of use_entity_aware_attention to be False ...) we still want to allow users to try if entity-aware attention is useful in their own settings.\r\n\r\nwhat you described could be easily achieved (point 3.) for a user to just specify `config.use_entity_aware_attention` at runtime - **this doesn't require the weights to be in the checkpoint**. It will just show an warning\r\n\r\n```\r\nSome weights of were not initialized from the model checkpoint at ... {pretrained_model_name_or_path} and are newly initialized ...\r\n```\r\nAnd this (different) warning make sense and should be kept.\r\n\r\nLet me know if you have further question to the above suggested way to (optionally) use/enable non-trained `entity_aware_attention` weights.\r\n",
"Yes, I know that is possible.\r\nHowever, the important point is that **those new weights must be initialized by copying the weights obtained during pretraining**.\r\nThis is exactly what we want to do here.\r\n\r\nBy randomly initializing the new weights, the model performance would degrade as the model has to learn how to attend to other tokens from scratch in fine-tuning.\r\nWe cannot randomly initialize the new weights and that's why we copy the weights here.\r\nhttps://github.com/huggingface/transformers/blob/abaca9f9432a84cfaa95531de4c72334f38a42f2/src/transformers/models/luke/convert_luke_original_pytorch_checkpoint_to_pytorch.py#L61-L67\r\n\r\nSo, to achieve this and suppress warnings, I think there are some options🤔\r\n- leave the copied weights in the checkpoint and set `_keys_to_ignore_on_load_unexpected ` (this PR, an easy path)\r\n- remove the copied weights from the checkpoint and override `init_weights` or `post_init` in `LukeModel` to include the copying operation (which needs a bit of work)",
"Ok, thank you for the detailed information. I finally understand why you need those weights in the check point, as they are copied from some trained weight. \r\n\r\nI will have to think a bit more, but I feel the best is to add extra log message to explain the situation.\r\n\r\nI will come back to you.",
"@sgugger @amyeroberts \r\n\r\nCould you take a look the following and see if you have any comment. I tried to make it short, but still need to explain things 🙏 \r\n\r\nSummary:\r\n\r\n - In `studio-ousia/mluke-base-lite` (`LukeModel`) - checkpoint for original author):\r\n - the checkpoint contains some keys `w2e_query` etc. (for `entity_aware_attention`)\r\n - the config has `entity_aware_attention=False`:\r\n - `from_pretrained` gives `unexpected keys during loading` warning.\r\n - `entity_aware_attention` is never used during pre-training\r\n - the checkpoint contains those `w2e_query` weights **by coping weight values from other pre-trained weights**\r\n - (so they still make some sense and might be helpful for fine-tuning)\r\n - The model author wants to avoid confusing warning (of nexpected keys).\r\n\r\nTwo suggested actions:\r\n - (easy) add `_keys_to_ignore_on_load_unexpected` as done in this PR\r\n - (more work)\r\n - remove those `w2e_query` weights from the checkpoint `studio-ousia/mluke-base-lite`\r\n - overwrite `from_pretrained` to copy some weights values to the target weights (at the end of `from_pretrained)` - when `config.use_entity_aware_attention=True` + `w2e_query` key is found\r\n - we will have a warning of `missing key` during loading, but we add a explanation to mention weights being copied\r\n\r\nThe second approach may not be worth the effort (too much work). The first one isn't really good as `_keys_to_ignore_on_load_unexpected` is not designed to be used for such situation (IMO).\r\n\r\n\r\n",
"Note that on main, the code sample provided at the beginning does not issue any warnings (just infos) since the class used (LukeModel) is not the same as the class of the checkpoint (LukeModelForMaskedLM). It's only when loading a model `LukeModelForMaskedLM` that the warning appears.\r\n\r\nAs for how to deal with this, the checkpoint mentioned does not use those extra weights (as seen [here](https://huggingface.co/studio-ousia/mluke-base-lite/blob/main/config.json#L33) in the config) so it should probably not have them in the state dict. You can use the `variant` parameter in `from_pretrained` to offer two different files for the weights if you wanted to make one version with the extra weights, for users who would like to continue fine-tuning with those extra weights. That weight file should be named `pytorch_model.<variant_name>.bin`.",
"I see, it seems the sample code only issues warnings on Colab notebooks.\r\nApologies for the confusion.\r\n\r\nThank you, @sgugger, for the suggested solution. Using the variant parameter seems a better solution.\r\nI would also appreciate @ydshieh taking the time to handle this PR!\r\nI will consider the suggested solution, so close this PR."
] | 1,688 | 1,689 | 1,689 |
CONTRIBUTOR
| null |
# What does this PR do?
Suppress the warnings when instantiating the LUKE models by adding `_keys_to_ignore_on_load_unexpected`.
## Problem
Currently, when you instantiate certain LUKE models from the Hugging Face Hub, such as
```
from transformers import AutoModel
model = transformers.AutoModel.from_pretrained("studio-ousia/mluke-base-lite")
```
you receive a warning indicating that a bunch of weights were not loaded.
```
Some weights of the model checkpoint at studio-ousia/mluke-base-lite were not used when initializing LukeModel: [
'luke.encoder.layer.0.attention.self.w2e_query.weight', 'luke.encoder.layer.0.attention.self.w2e_query.bias',
'luke.encoder.layer.0.attention.self.e2w_query.weight', 'luke.encoder.layer.0.attention.self.e2w_query.bias',
'luke.encoder.layer.0.attention.self.e2e_query.weight', 'luke.encoder.layer.0.attention.self.e2e_query.bias',
...]
```
This seems to depend on the logging settings and is observed in Google Colab notebooks.
https://colab.research.google.com/drive/1kYN3eGhx5tzEMnGkUz2jPsdmFyEBwxFA?usp=sharing
This behavior is expected since these weights are optional and only loaded when `use_entity_aware_attention` is set to `True`. However, it has caused confusion among users, as evidenced by the following issues:
https://github.com/studio-ousia/luke/issues/174
https://huggingface.co/studio-ousia/mluke-base/discussions/2#63be8cc6c26a8a4d713ee08a
## Solution
I added `_keys_to_ignore_on_load_unexpected` to `LukePreTrainedModel` to ignore some unexpected keys in the pretrained weights.
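Illustratively, the change amounts to something like the following sketch (the exact pattern in the diff may differ; the regex below just covers the `w2e`/`e2w`/`e2e` query weights shown in the warning above, and `LukePreTrainedModel` already exists in the library):
```python
from transformers import LukeConfig, PreTrainedModel


class LukePreTrainedModel(PreTrainedModel):
    config_class = LukeConfig
    base_model_prefix = "luke"

    # Unexpected checkpoint keys are regex-matched against this list in
    # from_pretrained, so the optional entity-aware attention weights no longer
    # produce a warning when use_entity_aware_attention=False.
    _keys_to_ignore_on_load_unexpected = [
        r"encoder\.layer\.\d+\.attention\.self\.(w2e|e2w|e2e)_query\.(weight|bias)",
    ]
```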
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24703/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24703/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24703",
"html_url": "https://github.com/huggingface/transformers/pull/24703",
"diff_url": "https://github.com/huggingface/transformers/pull/24703.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24703.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/24702
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24702/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24702/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24702/events
|
https://github.com/huggingface/transformers/issues/24702
| 1,792,655,339 |
I_kwDOCUB6oc5q2b_r
| 24,702 |
bf16 with DeepSpeed stage 3 with CPU offload breaks LLaMA 13b+ training
|
{
"login": "alexgshaw",
"id": 47223609,
"node_id": "MDQ6VXNlcjQ3MjIzNjA5",
"avatar_url": "https://avatars.githubusercontent.com/u/47223609?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alexgshaw",
"html_url": "https://github.com/alexgshaw",
"followers_url": "https://api.github.com/users/alexgshaw/followers",
"following_url": "https://api.github.com/users/alexgshaw/following{/other_user}",
"gists_url": "https://api.github.com/users/alexgshaw/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alexgshaw/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alexgshaw/subscriptions",
"organizations_url": "https://api.github.com/users/alexgshaw/orgs",
"repos_url": "https://api.github.com/users/alexgshaw/repos",
"events_url": "https://api.github.com/users/alexgshaw/events{/privacy}",
"received_events_url": "https://api.github.com/users/alexgshaw/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Can you provide a minimal reproducer as I don't have access to `/mnt/pccfs2/backed_up/alexshaw/media-training/datasets/dataset`? A reproducer should be minimal and run without having us spend time changing and debugging things.\r\n\r\n\r\n\r\n",
"Okay, I built this repo that reproduces the issue with as few dependencies as possible.\r\n\r\nhttps://github.com/alexgshaw/simple-trainer\r\n\r\nYou should be able to clone the repo, pip install the requirements.txt and run bash `train.sh`\r\n\r\nWhen I ran it, it reproduced the error I described above.\r\n\r\nNote that I am running with Python version: 3.8.2 and cuda 11.4",
"I am running into a similar issue, are there any updates?\r\n\r\nI'm using python 3.11 and cuda 11.8",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"try zero stage3 change to zero stage2"
] | 1,688 | 1,698 | 1,692 |
NONE
| null |
TL;DR: DeepSpeed stage 3 with CPU offload and bf16 breaks LLaMA 13b+ fine-tuning. The loss starts high, drops to 0 immediately after the first step, and the learning rate stays at 0 the entire time.
### System Info
- `transformers` version: 4.30.2
- Platform: Linux-3.10.0-1160.92.1.el7.x86_64-x86_64-with-glibc2.17
- Python version: 3.8.2
- Huggingface_hub version: 0.14.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 1.12.1+cu113 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Using 8 A100s in a training script.
- Using distributed or parallel set-up in script?: Using deepspeed stage 3 with CPU offload
### Who can help?
@sgugger @pacman100
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I'm running a really straightforward, bare-bones fine-tuning script to train LLaMA 13b. The problem: if I turn bf16 on, I run into the following:
```
0%| | 0/82 [00:00<?, ?it/s]
1%| | 1/82 [01:39<2:14:49, 99.87s/it]
{'loss': 8.2056, 'learning_rate': 0.0, 'epoch': 0.0}
1%| | 1/82 [01:39<2:14:49, 99.87s/it]
2%|▏ | 2/82 [02:55<1:54:16, 85.70s/it]
{'loss': 0.0, 'learning_rate': 0.0, 'epoch': 0.0}
2%|▏ | 2/82 [02:55<1:54:16, 85.70s/it]
4%|▎ | 3/82 [04:10<1:46:23, 80.81s/it]
{'loss': 0.0, 'learning_rate': 0.0, 'epoch': 0.01}
4%|▎ | 3/82 [04:10<1:46:23, 80.81s/it]
5%|▍ | 4/82 [05:26<1:42:33, 78.90s/it]
{'loss': 0.0, 'learning_rate': 0.0, 'epoch': 0.01}
5%|▍ | 4/82 [05:26<1:42:33, 78.90s/it]
6%|▌ | 5/82 [06:40<1:38:56, 77.10s/it]
{'loss': 0.0, 'learning_rate': 0.0, 'epoch': 0.01}
```
This continues for the remainder of the training, with the loss and learning rate never changing. By the end, the model outputs gibberish.
Here is my launch script:
```bash
deepspeed train.py \
--model_name_or_path /home/ashaw8/compute/models/$MODEL_NAME \
--dataset_path datasets/$TOPIC/$MODEL_NAME \
--run_name $RUN_NAME \
--bf16 True \
--output_dir $OUTPUT_DIR \
--num_train_epochs 3 \
--per_device_train_batch_size 2 \
--per_device_eval_batch_size 2 \
--gradient_accumulation_steps 8 \
--evaluation_strategy "no" \
--save_strategy "no" \
--logging_strategy "steps" \
--logging_steps 1 \
--learning_rate 5e-6 \
--weight_decay 0. \
--warmup_ratio 0.03 \
--deepspeed ds_config.json \
--max_grad_norm 1.0 \
--tf32 False \
--report_to wandb
```
If I set bf16 to false, everything returns to normal and the training works fine, but then I cannot train the large models (e.g. 65b) because I can't reduce the model size with bf16. As far as I can tell, this issue has not been documented anywhere else.
A related issue seems to be documented here where similar problems occurred with fp16. I guess large losses were preventing the optimizer from stepping? The optimizer must be stepping in my case though, if the model is outputting gibberish by the end.
https://github.com/huggingface/transformers/issues/14531
Here is my actual python training script.
```python
from dataclasses import dataclass, field
from typing import Dict, Optional, Union
from transformers import (
TrainingArguments as HfTrainingArguments,
HfArgumentParser,
AutoConfig,
AutoTokenizer,
AutoModelForCausalLM,
Trainer,
DataCollatorForLanguageModeling,
)
from datasets import Dataset
@dataclass
class ModelArguments:
model_name_or_path: Optional[str] = field(
default="/mnt/pccfs2/backed_up/models/llama/hf/llama-7b-hf/"
)
@dataclass
class DataArguments:
dataset_path: str = field(
default="/mnt/pccfs2/backed_up/alexshaw/media-training/datasets/dataset",
metadata={"help": "Path to the training data."},
)
@dataclass
class TrainingArguments(HfTrainingArguments):
cache_dir: Optional[str] = field(default=None)
model_max_length: int = field(
default=512,
metadata={
"help": "Maximum sequence length. Sequences will be right padded (and possibly truncated)."
},
)
dataloader_num_workers: int = field(default=32)
if __name__ == "__main__":
parser = HfArgumentParser(
(ModelArguments, DataArguments, TrainingArguments) # type: ignore
)
model_args, data_args, training_args = parser.parse_args_into_dataclasses()
config = AutoConfig.from_pretrained(
model_args.model_name_or_path,
cache_dir=training_args.cache_dir,
)
tokenizer = AutoTokenizer.from_pretrained(
model_args.model_name_or_path,
cache_dir=training_args.cache_dir,
model_max_length=training_args.model_max_length,
padding_side="right",
use_fast=False,
)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(
model_args.model_name_or_path,
config=config,
cache_dir=training_args.cache_dir,
)
dataset = Dataset.load_from_disk(data_args.dataset_path)
data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)
trainer = Trainer(
model=model,
args=training_args,
train_dataset=dataset, # type: ignore
data_collator=data_collator,
)
trainer.train()
trainer.save_state()
trainer.save_model()
```
Additionally, here is my `ds_config.json`
```json
{
"bf16": {
"enabled": "auto"
},
"fp16": {
"enabled": "auto"
},
"optimizer": {
"type": "AdamW",
"params": {
"lr": "auto",
"betas": "auto",
"eps": "auto",
"weight_decay": "auto"
}
},
"scheduler": {
"type": "WarmupDecayLR",
"params": {
"total_num_steps": "auto",
"warmup_min_lr": "auto",
"warmup_max_lr": "auto",
"warmup_num_steps": "auto"
}
},
"zero_optimization": {
"stage": 3,
"offload_optimizer": {
"device": "cpu",
"pin_memory": true
},
"offload_param": {
"device": "cpu",
"pin_memory": true
},
"overlap_comm": true,
"contiguous_gradients": true,
"sub_group_size": 1e9,
"reduce_bucket_size": "auto",
"stage3_prefetch_bucket_size": "auto",
"stage3_param_persistence_threshold": "auto",
"stage3_max_live_parameters": 1e9,
"stage3_max_reuse_distance": 1e9,
"stage3_gather_16bit_weights_on_model_save": true
},
"gradient_accumulation_steps": "auto",
"gradient_clipping": "auto",
"steps_per_print": 5,
"train_batch_size": "auto",
"train_micro_batch_size_per_gpu": "auto",
"wall_clock_breakdown": false
}
```
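One of the comments on this issue suggests switching from ZeRO stage 3 to stage 2. A rough, untested sketch of that variant, written as a Python dict (`TrainingArguments` also accepts a dict for its `deepspeed` argument); stage 2 only shards optimizer state and gradients, so the `offload_param` block is dropped:
```python
# Hypothetical ZeRO stage-2 variant of the config above (untested sketch).
ds_config_stage2 = {
    "bf16": {"enabled": "auto"},
    "optimizer": {
        "type": "AdamW",
        "params": {"lr": "auto", "betas": "auto", "eps": "auto", "weight_decay": "auto"},
    },
    "scheduler": {
        "type": "WarmupDecayLR",
        "params": {
            "total_num_steps": "auto",
            "warmup_min_lr": "auto",
            "warmup_max_lr": "auto",
            "warmup_num_steps": "auto",
        },
    },
    "zero_optimization": {
        "stage": 2,
        "offload_optimizer": {"device": "cpu", "pin_memory": True},
        "overlap_comm": True,
        "contiguous_gradients": True,
        "reduce_bucket_size": "auto",
    },
    "gradient_accumulation_steps": "auto",
    "gradient_clipping": "auto",
    "train_batch_size": "auto",
    "train_micro_batch_size_per_gpu": "auto",
}
# e.g. TrainingArguments(..., deepspeed=ds_config_stage2) instead of a JSON path
```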
### Expected behavior
The model should start with a high loss that gradually decreases throughout training. The learning rate should rise to 5e-6 within a few warmup steps.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24702/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24702/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24701
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24701/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24701/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24701/events
|
https://github.com/huggingface/transformers/issues/24701
| 1,792,609,426 |
I_kwDOCUB6oc5q2QyS
| 24,701 |
In RWForCausalLM.prepare_inputs_for_generation, the past_key_values are always None.
|
{
"login": "KexinFeng",
"id": 23562091,
"node_id": "MDQ6VXNlcjIzNTYyMDkx",
"avatar_url": "https://avatars.githubusercontent.com/u/23562091?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/KexinFeng",
"html_url": "https://github.com/KexinFeng",
"followers_url": "https://api.github.com/users/KexinFeng/followers",
"following_url": "https://api.github.com/users/KexinFeng/following{/other_user}",
"gists_url": "https://api.github.com/users/KexinFeng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/KexinFeng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KexinFeng/subscriptions",
"organizations_url": "https://api.github.com/users/KexinFeng/orgs",
"repos_url": "https://api.github.com/users/KexinFeng/repos",
"events_url": "https://api.github.com/users/KexinFeng/events{/privacy}",
"received_events_url": "https://api.github.com/users/KexinFeng/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @KexinFeng \r\nThere is an ongoing work to port Falcon to transformers here: https://github.com/huggingface/transformers/pull/24523 looking at that PR I believe that your issue will be fixed once merged. cc @Rocketknight1 in case I missed something!",
"Sorry for the delay, and yes! There is an issue with the custom code version of Falcon, which means that frequently past_key_values are not actually used in generation. This results in much lower generation speed (~3X slower for short-medium sequences).\r\n\r\nThis issue will be fixed once we add Falcon as a full library model in `transformers`, and we're hoping to merge that PR extremely soon.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"This is imminent, by the by, and sorry for the delay! Should be in within the next day or two.",
"If anyone cant wait a few days, you can use the model here: https://github.com/kimborgen/falcon-llm \r\n\r\n@Rocketknight1 Do you know if the transformer library takes advantage of the pararell MLP/Attention layer architecture and automatically calculates these two layers in pararell if there is enough capacity on the GPU? Or how could I enable such behaviour?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Hi @kimborgen, yes, MLP and attention are parallel paths on the newer Falcon models, rather than sequential like they are on older transformers. You can see this in the code for `FalconDecoderLayer` - when `parallel_attn` or `new_decoder_architecture` are set, layer norms and MLP/attention follow separate, parallel paths. On the oldest Falcon models (e.g. `falcon-rw-1b`) I believe they're still sequential.\r\n\r\nNote that you should not change these settings in the config of an existing model! You'll get different outputs and the pretrained weights will be useless to you. They can only be set when the model is first initialized.\r\n\r\nAlso, since Falcon has now been fully ported into `transformers`, the original issue here has been resolved and I'm going to close this issue!"
] | 1,688 | 1,694 | 1,694 |
NONE
| null |
### System Info
- `transformers` version: 4.30.2
- Platform: Linux-5.15.0-1038-aws-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.1+cu118 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@ArthurZucker and @younesbelkada
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
model_name = "tiiuae/falcon-7b"
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
model_name, trust_remote_code=True, device_map="auto")
# encode context the generation is conditioned on
input_ids = tokenizer.encode('The new movie that got Oscar this year', return_tensors='pt')
# device
device = "cuda" if torch.cuda.is_available() else "cpu"
input_ids = input_ids.to(device)
# model = model.to(device)
# %% Greedy search
# generate text until the output length (which includes the context length) reaches 50
greedy_output = model.generate(input_ids, max_length=50)
print("\nOutput:\n" + 100 * '-')
print(tokenizer.decode(greedy_output[0], skip_special_tokens=True))
# Contrastive search
# activate beam search and early_stopping
output = model.generate(input_ids, penalty_alpha=0.01, top_k=4, max_length=50)
print("\nOutput:\n" + 100 * '-')
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
### Expected behavior
In `transformers/generation/utils.py#L2329`
`model_inputs = self.prepare_inputs_for_generation(input_ids, **model_kwargs)`
`RWForCausalLM.prepare_inputs_for_generation()` always returns `None` for `past_key_values`, so generation doesn't seem to utilize the kv_cache at all. On the other hand, `RWForCausalLM.prepare_inputs_for_generation()` does contain tensor shape conversion code. Is it intentional by design that `past_key_values` is always `None`?
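A rough way to quantify this (reusing `model` and `input_ids` from the snippet above) is to time `generate` with and without the cache; if `past_key_values` is always `None`, both runs end up similarly slow:
```python
import time

# Rough timing check; if the kv cache were actually used, use_cache=True
# should be noticeably faster than use_cache=False for longer generations.
for use_cache in (True, False):
    start = time.time()
    model.generate(input_ids, max_length=200, use_cache=use_cache)
    print(f"use_cache={use_cache}: {time.time() - start:.2f}s")
```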
The output text is also weird:
```
Output(greedy)
----------------------------------------------------------------------------------------------------
The new movie that got Oscar this year is a movie about a man who is a genius and a man who is a genius.
The movie is called “The Imitation Game” and it is about a man who is a genius and a
Output(contrastive with penalty_alpha=0.001)
----------------------------------------------------------------------------------------------------
The new movie that got Oscar this year is a (Source:
- (Source:
- (Source:
- (Source:
- (Source:
- (Source:
- (Source:
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24701/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24701/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24700
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24700/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24700/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24700/events
|
https://github.com/huggingface/transformers/issues/24700
| 1,792,517,464 |
I_kwDOCUB6oc5q16VY
| 24,700 |
Pix2StructImageProcessor does not accept list of PIL Images
|
{
"login": "LiJunnan1992",
"id": 13638455,
"node_id": "MDQ6VXNlcjEzNjM4NDU1",
"avatar_url": "https://avatars.githubusercontent.com/u/13638455?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LiJunnan1992",
"html_url": "https://github.com/LiJunnan1992",
"followers_url": "https://api.github.com/users/LiJunnan1992/followers",
"following_url": "https://api.github.com/users/LiJunnan1992/following{/other_user}",
"gists_url": "https://api.github.com/users/LiJunnan1992/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LiJunnan1992/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LiJunnan1992/subscriptions",
"organizations_url": "https://api.github.com/users/LiJunnan1992/orgs",
"repos_url": "https://api.github.com/users/LiJunnan1992/repos",
"events_url": "https://api.github.com/users/LiJunnan1992/events{/privacy}",
"received_events_url": "https://api.github.com/users/LiJunnan1992/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi. Could you show us the full error log, please. Thanks.\r\n\r\ncc @amyeroberts ",
"Hi @LiJunnan1992 \r\n\r\nThe script below seems to work on the main branch of transformers. Can you share a reproducible snippet? 🙏 Thanks!\r\n\r\n```python\r\nimport requests\r\nfrom PIL import Image\r\nfrom transformers import Pix2StructProcessor\r\n\r\nprocessor = Pix2StructProcessor.from_pretrained(\"google/pix2struct-textcaps-base\")\r\n\r\nurl = \"https://www.ilankelman.org/stopsigns/australia.jpg\"\r\nimages = [Image.open(requests.get(url, stream=True).raw) for _ in range(4)]\r\n\r\ninputs = processor(images, return_tensors=\"pt\")\r\n```\r\n\r\nThis script works as well:\r\n\r\n```python\r\nimport requests\r\nfrom PIL import Image\r\nfrom transformers import Pix2StructProcessor, Pix2StructImageProcessor\r\n\r\nprocessor = Pix2StructProcessor.from_pretrained(\"google/pix2struct-textcaps-base\")\r\n\r\nimage_processor = Pix2StructImageProcessor()\r\n\r\nurl = \"https://www.ilankelman.org/stopsigns/australia.jpg\"\r\nimages = [Image.open(requests.get(url, stream=True).raw) for _ in range(4)]\r\n\r\n_ = processor(images, return_tensors=\"pt\")\r\n_ = image_processor(images, return_tensors=\"pt\")\r\n```",
"The main branch indeed works without error. Closing this issue. Thanks!"
] | 1,688 | 1,688 | 1,688 |
NONE
| null |
### System Info
- `transformers` version: 4.30.2
- Platform: Linux-5.10.133+-x86_64-with-glibc2.10
- Python version: 3.8.13
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.1+cu117 (True)
### Who can help?
@younesbelkada
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Pix2StructImageProcessor does not work if I pass in a list of PIL images as input. It works after I uncomment lines 373-379: https://github.com/huggingface/transformers/blob/66fd3a8d626a32989f4569260db32785c6cbf42a/src/transformers/models/pix2struct/image_processing_pix2struct.py#L373
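A minimal sketch of what I mean (the checkpoint and image paths below are just examples):
```python
from PIL import Image
from transformers import Pix2StructImageProcessor

# Example only: any Pix2Struct checkpoint and any RGB images should do.
image_processor = Pix2StructImageProcessor.from_pretrained("google/pix2struct-textcaps-base")
images = [Image.open(path) for path in ["example1.png", "example2.png"]]
inputs = image_processor(images, return_tensors="pt")  # fails on 4.30.2
```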
### Expected behavior
According to the documentation, Pix2StructImageProcessor should be able to process a list of PIL images.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24700/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24700/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24699
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24699/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24699/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24699/events
|
https://github.com/huggingface/transformers/pull/24699
| 1,792,440,656 |
PR_kwDOCUB6oc5U28HZ
| 24,699 |
Bump scipy from 1.8.0 to 1.10.0 in /examples/research_projects/decision_transformer
|
{
"login": "dependabot[bot]",
"id": 49699333,
"node_id": "MDM6Qm90NDk2OTkzMzM=",
"avatar_url": "https://avatars.githubusercontent.com/in/29110?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dependabot%5Bbot%5D",
"html_url": "https://github.com/apps/dependabot",
"followers_url": "https://api.github.com/users/dependabot%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/dependabot%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/dependabot%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dependabot%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dependabot%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/dependabot%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/dependabot%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/dependabot%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/dependabot%5Bbot%5D/received_events",
"type": "Bot",
"site_admin": false
}
|
[
{
"id": 1905493434,
"node_id": "MDU6TGFiZWwxOTA1NDkzNDM0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/dependencies",
"name": "dependencies",
"color": "0366d6",
"default": false,
"description": "Pull requests that update a dependency file"
}
] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24699). All of your documentation changes will be reflected on that endpoint.",
"OK, I won't notify you again about this release, but will get in touch when a new version is available. If you'd rather skip all updates until the next major or minor version, let me know by commenting `@dependabot ignore this major version` or `@dependabot ignore this minor version`.\n\nIf you change your mind, just re-open this PR and I'll resolve any conflicts on it.",
"@dependabot ignore this major version",
"OK, I won't notify you about version 1.x.x again, unless you re-open this PR. 😢"
] | 1,688 | 1,688 | 1,688 |
CONTRIBUTOR
| null |
Bumps [scipy](https://github.com/scipy/scipy) from 1.8.0 to 1.10.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/scipy/scipy/releases">scipy's releases</a>.</em></p>
<blockquote>
<h1>SciPy 1.10.0 Release Notes</h1>
<p>SciPy <code>1.10.0</code> is the culmination of <code>6</code> months of hard work. It contains
many new features, numerous bug-fixes, improved test coverage and better
documentation. There have been a number of deprecations and API changes
in this release, which are documented below. All users are encouraged to
upgrade to this release, as there are a large number of bug-fixes and
optimizations. Before upgrading, we recommend that users check that
their own code does not use deprecated SciPy functionality (to do so,
run your code with <code>python -Wd</code> and check for <code>DeprecationWarning</code> s).
Our development attention will now shift to bug-fix releases on the
1.10.x branch, and on adding new features on the main branch.</p>
<p>This release requires Python <code>3.8+</code> and NumPy <code>1.19.5</code> or greater.</p>
<p>For running on PyPy, PyPy3 <code>6.0+</code> is required.</p>
<h1>Highlights of this release</h1>
<ul>
<li>A new dedicated datasets submodule (<code>scipy.datasets</code>) has been added, and is
now preferred over usage of <code>scipy.misc</code> for dataset retrieval.</li>
<li>A new <code>scipy.interpolate.make_smoothing_spline</code> function was added. This
function constructs a smoothing cubic spline from noisy data, using the
generalized cross-validation (GCV) criterion to find the tradeoff between
smoothness and proximity to data points.</li>
<li><code>scipy.stats</code> has three new distributions, two new hypothesis tests, three
new sample statistics, a class for greater control over calculations
involving covariance matrices, and many other enhancements.</li>
</ul>
<h1>New features</h1>
<h1><code>scipy.datasets</code> introduction</h1>
<ul>
<li>A new dedicated <code>datasets</code> submodule has been added. The submodules
is meant for datasets that are relevant to other SciPy submodules ands
content (tutorials, examples, tests), as well as contain a curated
set of datasets that are of wider interest. As of this release, all
the datasets from <code>scipy.misc</code> have been added to <code>scipy.datasets</code>
(and deprecated in <code>scipy.misc</code>).</li>
<li>The submodule is based on <a href="https://www.fatiando.org/pooch/latest/">Pooch</a>
(a new optional dependency for SciPy), a Python package to simplify fetching
data files. This move will, in a subsequent release, facilitate SciPy
to trim down the sdist/wheel sizes, by decoupling the data files and
moving them out of the SciPy repository, hosting them externally and</li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/scipy/scipy/commit/dde50595862a4f9cede24b5d1c86935c30f1f88a"><code>dde5059</code></a> REL: 1.10.0 final [wheel build]</li>
<li><a href="https://github.com/scipy/scipy/commit/7856f281b016c585b82d03723c4494bcdbdcd4a5"><code>7856f28</code></a> Merge pull request <a href="https://redirect.github.com/scipy/scipy/issues/17696">#17696</a> from tylerjereddy/treddy_110_final_prep</li>
<li><a href="https://github.com/scipy/scipy/commit/205b6243c6d075d05695e7ac6d007e0f03bfbf42"><code>205b624</code></a> DOC: add missing author</li>
<li><a href="https://github.com/scipy/scipy/commit/1ab9f1b10145f0a974d5531700e72d1fb4229b76"><code>1ab9f1b</code></a> DOC: update 1.10.0 relnotes</li>
<li><a href="https://github.com/scipy/scipy/commit/ac2f45fbe1e39a8f52c1ea2e68764009f02973c0"><code>ac2f45f</code></a> MAINT: integrate._qmc_quad: mark as private with preceding underscore</li>
<li><a href="https://github.com/scipy/scipy/commit/3e0ae1a21f51ebee3a77733c42700d87a0c35d7d"><code>3e0ae1a</code></a> REV: integrate.qmc_quad: delay release to SciPy 1.11.0</li>
<li><a href="https://github.com/scipy/scipy/commit/34cdf05c86548de1c4ca1b2798cdc23885af807b"><code>34cdf05</code></a> MAINT: FFT pybind11 fixups</li>
<li><a href="https://github.com/scipy/scipy/commit/843500aabde17aaf1eec65c589d50bd12ee35039"><code>843500a</code></a> Merge pull request <a href="https://redirect.github.com/scipy/scipy/issues/17689">#17689</a> from mdhaber/gh17686</li>
<li><a href="https://github.com/scipy/scipy/commit/089924b61012a106ffa4f58939b0180124051a0b"><code>089924b</code></a> REL: integrate.qmc_quad: remove from release notes</li>
<li><a href="https://github.com/scipy/scipy/commit/3e47110f10e3267d228e9da84174f3cee325e7c3"><code>3e47110</code></a> REL: 1.10.0rc3 unreleased</li>
<li>Additional commits viewable in <a href="https://github.com/scipy/scipy/compare/v1.8.0...v1.10.0">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts).
</details>
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24699/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24699/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24699",
"html_url": "https://github.com/huggingface/transformers/pull/24699",
"diff_url": "https://github.com/huggingface/transformers/pull/24699.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24699.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/24698
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24698/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24698/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24698/events
|
https://github.com/huggingface/transformers/issues/24698
| 1,792,324,677 |
I_kwDOCUB6oc5q1LRF
| 24,698 |
Assertion `srcIndex < srcSelectDimSize` failed
|
{
"login": "MaggieK410",
"id": 74720920,
"node_id": "MDQ6VXNlcjc0NzIwOTIw",
"avatar_url": "https://avatars.githubusercontent.com/u/74720920?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MaggieK410",
"html_url": "https://github.com/MaggieK410",
"followers_url": "https://api.github.com/users/MaggieK410/followers",
"following_url": "https://api.github.com/users/MaggieK410/following{/other_user}",
"gists_url": "https://api.github.com/users/MaggieK410/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MaggieK410/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MaggieK410/subscriptions",
"organizations_url": "https://api.github.com/users/MaggieK410/orgs",
"repos_url": "https://api.github.com/users/MaggieK410/repos",
"events_url": "https://api.github.com/users/MaggieK410/events{/privacy}",
"received_events_url": "https://api.github.com/users/MaggieK410/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @MaggieK410, thanks for reporting this issue. \r\n\r\nThis is typically caused by an indexing issue in the code. \r\n\r\nCould you follow the issue template and:\r\n* Provide information about the running environment: run `transformers-cli env` in the terminal and copy-paste the output\r\n* Format the code examples. All code should be sandwiched between three backticks ` ``` all code goes here ``` `\r\n* Could you also put the error message in code formatting please? \r\n* Provide a checkpoint - which medalpaca model is being tested? \r\n* Ensure the example code is runnable? `dataset` is not defined ",
"Hi, thank you very much for getting back to me! I have made a mistake when initializing the tokenizer (I added tokens withoud resizing the embedding). As it is solved, I will close this issue.",
"> Hi, thank you very much for getting back to me! I have made a mistake when initializing the tokenizer (I added tokens withoud resizing the embedding). As it is solved, I will close this issue.\r\n\r\nMay I know how you solve the problem? Thank you very much in advance!\r\n",
"In another part of the code I added a token but did not change the embedding size, which lead to the issue above. Since I did not need that token, I just removed that line and the code worked, but if you need to add the token, maybe look into changing your embeddings (https://stackoverflow.com/questions/72775559/resize-token-embeddings-on-the-a-pertrained-model-with-different-embedding-size)"
] | 1,688 | 1,699 | 1,689 |
NONE
| null |
Hi,
I am running medalpaca (but the error seems to come from llama) on 4 GPUs using `device_map="auto"` and the SFTTrainer, and I want to prompt-tune the model. I have written a custom Dataset class:
```python
class DiagnosesDataset(torch.utils.data.Dataset):
    def __init__(self, instances, tokenizer):
        self.instances = instances
        self.tokenizer = tokenizer

    def __getitem__(self, idx):
        item = {}
        prompt = self.instances["prompt"][idx]
        labels = self.instances["label"][idx]
        item = self.tokenize(prompt + labels)
        tokenized_instruction = self.tokenize(prompt)
        label_instruction = self.tokenizer(labels)
        i = len(tokenized_instruction["input_ids"])
        item["labels"][i:] = label_instruction["input_ids"]
        return item

    def tokenize(self, prompt):
        result_prompt = self.tokenizer(
            prompt,
            truncation=True,
            max_length=2048,
            padding=False,
            return_tensors=None,
        )
        result_prompt["labels"] = [-100] * len(result_prompt["input_ids"])
        return result_prompt

    def __len__(self):
        return len(self.instances)
```
The Training Arguments and Peft config:
```python
training_arguments = TrainingArguments(
    output_dir="./falcon_output_dir",
    per_device_train_batch_size=4,
    gradient_accumulation_steps=2,
    optim="paged_adamw_32bit",
    save_steps=100,
    logging_steps=10,
    learning_rate=2e-4,
    max_steps=10000,
    fp16=False,
    bf16=False,
    lr_scheduler_type="constant",
    warmup_ratio=0.03,
    group_by_length=True,
    remove_unused_columns=False,
)

peft_config = LoraConfig(
    lora_alpha=16,
    lora_dropout=0.1,
    r=4,
    bias="none",
    task_type=TaskType.CAUSAL_LM,
    target_modules=["q_proj", "v_proj"],
)
```
The SFTTrainer I am using looks like this:
```python
trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    peft_config=peft_config,
    packing=True,
    args=training_arguments,
)
trainer.train()
```
However, when running the model, there seems to be an indexing issue somewhere (https://discuss.pytorch.org/t/solved-assertion-srcindex-srcselectdimsize-failed-on-gpu-for-torch-cat/1804/27)
The error I am getting is this:
```
╭───────────────────── Traceback (most recent call last) ──────────────────────╮
│ /home/students/kulcsar/Bachelor/for_dataset/10000_diagnoses/falcon_model_pef │
│ t.py:544 in <module> │
│ │
│ 541 │ │
│ 542 │ │
│ 543 │ args=parser.parse_args() │
│ ❱ 544 │ run() │
│ 545 │ #main() │
│ 546 │ │
│ 547 │ #all_data, prompts, golds=preprocess("./dataset.pkl") │
│ │
│ /home/students/kulcsar/Bachelor/for_dataset/10000_diagnoses/falcon_model_pef │
│ t.py:153 in run │
│ │
│ 150 │ │ packing=True, │
│ 151 │ │ data_collator=DataCollatorForSeq2Seq(tokenizer, pad_to_multipl │
│ 152 │ │ args=training_arguments) │
│ ❱ 153 │ trainer.train() │
│ 154 │ │
│ 155 │ logging.info("Run Train loop") │
│ 156 │ #model_updated=train(model, dataset, args.seed, args.batch_size, a │
│ │
│ /home/students/kulcsar/anaconda3/envs/software_bubble_updated_pytorch/lib/py │
│ thon3.9/site-packages/transformers/trainer.py:1537 in train │
│ │
│ 1534 │ │ inner_training_loop = find_executable_batch_size( │
│ 1535 │ │ │ self._inner_training_loop, self._train_batch_size, args.a │
│ 1536 │ │ ) │
│ ❱ 1537 │ │ return inner_training_loop( │
│ 1538 │ │ │ args=args, │
│ 1539 │ │ │ resume_from_checkpoint=resume_from_checkpoint, │
│ 1540 │ │ │ trial=trial, │
│ │
│ /home/students/kulcsar/anaconda3/envs/software_bubble_updated_pytorch/lib/py │
│ thon3.9/site-packages/transformers/trainer.py:1802 in _inner_training_loop │
│ │
│ 1799 │ │ │ │ │ self.control = self.callback_handler.on_step_begi │
│ 1800 │ │ │ │ │
│ 1801 │ │ │ │ with self.accelerator.accumulate(model): │
│ ❱ 1802 │ │ │ │ │ tr_loss_step = self.training_step(model, inputs) │
│ 1803 │ │ │ │ │
│ 1804 │ │ │ │ if ( │
│ 1805 │ │ │ │ │ args.logging_nan_inf_filter │
│ │
│ /home/students/kulcsar/anaconda3/envs/software_bubble_updated_pytorch/lib/py │
│ thon3.9/site-packages/transformers/trainer.py:2647 in training_step │
│ │
│ 2644 │ │ │ return loss_mb.reduce_mean().detach().to(self.args.device │
│ 2645 │ │ │
│ 2646 │ │ with self.compute_loss_context_manager(): │
│ ❱ 2647 │ │ │ loss = self.compute_loss(model, inputs) │
│ 2648 │ │ │
│ 2649 │ │ if self.args.n_gpu > 1: │
│ 2650 │ │ │ loss = loss.mean() # mean() to average on multi-gpu para │
│ │
│ /home/students/kulcsar/anaconda3/envs/software_bubble_updated_pytorch/lib/py │
│ thon3.9/site-packages/transformers/trainer.py:2672 in compute_loss │
│ │
│ 2669 │ │ │ labels = inputs.pop("labels") │
│ 2670 │ │ else: │
│ 2671 │ │ │ labels = None │
│ ❱ 2672 │ │ outputs = model(**inputs) │
│ 2673 │ │ # Save past state if it exists │
│ 2674 │ │ # TODO: this needs to be fixed and made cleaner later. │
│ 2675 │ │ if self.args.past_index >= 0: │
│ │
│ /home/students/kulcsar/anaconda3/envs/software_bubble_updated_pytorch/lib/py │
│ thon3.9/site-packages/torch/nn/modules/module.py:1502 in _wrapped_call_impl │
│ │
│ 1499 │ │ if self._compiled_call_impl is not None: │
│ 1500 │ │ │ return self._compiled_call_impl(*args, **kwargs) # type: │
│ 1501 │ │ else: │
│ ❱ 1502 │ │ │ return self._call_impl(*args, **kwargs) │
│ 1503 │ │
│ 1504 │ def _call_impl(self, *args, **kwargs): │
│ 1505 │ │ forward_call = (self._slow_forward if torch._C._get_tracing_s │
│ │
│ /home/students/kulcsar/anaconda3/envs/software_bubble_updated_pytorch/lib/py │
│ thon3.9/site-packages/torch/nn/modules/module.py:1511 in _call_impl │
│ │
│ 1508 │ │ if not (self._backward_hooks or self._backward_pre_hooks or s │
│ 1509 │ │ │ │ or _global_backward_pre_hooks or _global_backward_hoo │
│ 1510 │ │ │ │ or _global_forward_hooks or _global_forward_pre_hooks │
│ ❱ 1511 │ │ │ return forward_call(*args, **kwargs) │
│ 1512 │ │ # Do not call functions when jit is used │
│ 1513 │ │ full_backward_hooks, non_full_backward_hooks = [], [] │
│ 1514 │ │ backward_pre_hooks = [] │
│ │
│ /home/students/kulcsar/anaconda3/envs/software_bubble_updated_pytorch/lib/py │
│ thon3.9/site-packages/peft/peft_model.py:739 in forward │
│ │
│ 736 │ ): │
│ 737 │ │ peft_config = self.active_peft_config │
│ 738 │ │ if not isinstance(peft_config, PromptLearningConfig): │
│ ❱ 739 │ │ │ return self.base_model( │
│ 740 │ │ │ │ input_ids=input_ids, │
│ 741 │ │ │ │ attention_mask=attention_mask, │
│ 742 │ │ │ │ inputs_embeds=inputs_embeds, │
│ │
│ /home/students/kulcsar/anaconda3/envs/software_bubble_updated_pytorch/lib/py │
│ thon3.9/site-packages/torch/nn/modules/module.py:1502 in _wrapped_call_impl │
│ │
│ 1499 │ │ if self._compiled_call_impl is not None: │
│ 1500 │ │ │ return self._compiled_call_impl(*args, **kwargs) # type: │
│ 1501 │ │ else: │
│ ❱ 1502 │ │ │ return self._call_impl(*args, **kwargs) │
│ 1503 │ │
│ 1504 │ def _call_impl(self, *args, **kwargs): │
│ 1505 │ │ forward_call = (self._slow_forward if torch._C._get_tracing_s │
│ │
│ /home/students/kulcsar/anaconda3/envs/software_bubble_updated_pytorch/lib/py │
│ thon3.9/site-packages/torch/nn/modules/module.py:1511 in _call_impl │
│ │
│ 1508 │ │ if not (self._backward_hooks or self._backward_pre_hooks or s │
│ 1509 │ │ │ │ or _global_backward_pre_hooks or _global_backward_hoo │
│ 1510 │ │ │ │ or _global_forward_hooks or _global_forward_pre_hooks │
│ ❱ 1511 │ │ │ return forward_call(*args, **kwargs) │
│ 1512 │ │ # Do not call functions when jit is used │
│ 1513 │ │ full_backward_hooks, non_full_backward_hooks = [], [] │
│ 1514 │ │ backward_pre_hooks = [] │
│ │
│ /home/students/kulcsar/anaconda3/envs/software_bubble_updated_pytorch/lib/py │
│ thon3.9/site-packages/accelerate/hooks.py:165 in new_forward │
│ │
│ 162 │ │ │ with torch.no_grad(): │
│ 163 │ │ │ │ output = old_forward(*args, **kwargs) │
│ 164 │ │ else: │
│ ❱ 165 │ │ │ output = old_forward(*args, **kwargs) │
│ 166 │ │ return module._hf_hook.post_forward(module, output) │
│ 167 │ │
│ 168 │ module.forward = new_forward │
│ │
│ /home/students/kulcsar/anaconda3/envs/software_bubble_updated_pytorch/lib/py │
│ thon3.9/site-packages/transformers/models/llama/modeling_llama.py:691 in │
│ forward │
│ │
│ 688 │ │ return_dict = return_dict if return_dict is not None else self │
│ 689 │ │ │
│ 690 │ │ # decoder outputs consists of (dec_features, layer_state, dec_ │
│ ❱ 691 │ │ outputs = self.model( │
│ 692 │ │ │ input_ids=input_ids, │
│ 693 │ │ │ attention_mask=attention_mask, │
│ 694 │ │ │ position_ids=position_ids, │
│ │
│ /home/students/kulcsar/anaconda3/envs/software_bubble_updated_pytorch/lib/py │
│ thon3.9/site-packages/torch/nn/modules/module.py:1502 in _wrapped_call_impl │
│ │
│ 1499 │ │ if self._compiled_call_impl is not None: │
│ 1500 │ │ │ return self._compiled_call_impl(*args, **kwargs) # type: │
│ 1501 │ │ else: │
│ ❱ 1502 │ │ │ return self._call_impl(*args, **kwargs) │
│ 1503 │ │
│ 1504 │ def _call_impl(self, *args, **kwargs): │
│ 1505 │ │ forward_call = (self._slow_forward if torch._C._get_tracing_s │
│ │
│ /home/students/kulcsar/anaconda3/envs/software_bubble_updated_pytorch/lib/py │
│ thon3.9/site-packages/torch/nn/modules/module.py:1511 in _call_impl │
│ │
│ 1508 │ │ if not (self._backward_hooks or self._backward_pre_hooks or s │
│ 1509 │ │ │ │ or _global_backward_pre_hooks or _global_backward_hoo │
│ 1510 │ │ │ │ or _global_forward_hooks or _global_forward_pre_hooks │
│ ❱ 1511 │ │ │ return forward_call(*args, **kwargs) │
│ 1512 │ │ # Do not call functions when jit is used │
│ 1513 │ │ full_backward_hooks, non_full_backward_hooks = [], [] │
│ 1514 │ │ backward_pre_hooks = [] │
│ │
│ /home/students/kulcsar/anaconda3/envs/software_bubble_updated_pytorch/lib/py │
│ thon3.9/site-packages/transformers/models/llama/modeling_llama.py:532 in │
│ forward │
│ │
│ 529 │ │ │ position_ids = position_ids.view(-1, seq_length).long() │
│ 530 │ │ │
│ 531 │ │ if inputs_embeds is None: │
│ ❱ 532 │ │ │ inputs_embeds = self.embed_tokens(input_ids) │
│ 533 │ │ # embed positions │
│ 534 │ │ if attention_mask is None: │
│ 535 │ │ │ attention_mask = torch.ones( │
│ │
│ /home/students/kulcsar/anaconda3/envs/software_bubble_updated_pytorch/lib/py │
│ thon3.9/site-packages/torch/nn/modules/module.py:1502 in _wrapped_call_impl │
│ │
│ 1499 │ │ if self._compiled_call_impl is not None: │
│ 1500 │ │ │ return self._compiled_call_impl(*args, **kwargs) # type: │
│ 1501 │ │ else: │
│ ❱ 1502 │ │ │ return self._call_impl(*args, **kwargs) │
│ 1503 │ │
│ 1504 │ def _call_impl(self, *args, **kwargs): │
│ 1505 │ │ forward_call = (self._slow_forward if torch._C._get_tracing_s │
│ │
│ /home/students/kulcsar/anaconda3/envs/software_bubble_updated_pytorch/lib/py │
│ thon3.9/site-packages/torch/nn/modules/module.py:1511 in _call_impl │
│ │
│ 1508 │ │ if not (self._backward_hooks or self._backward_pre_hooks or s │
│ 1509 │ │ │ │ or _global_backward_pre_hooks or _global_backward_hoo │
│ 1510 │ │ │ │ or _global_forward_hooks or _global_forward_pre_hooks │
│ ❱ 1511 │ │ │ return forward_call(*args, **kwargs) │
│ 1512 │ │ # Do not call functions when jit is used │
│ 1513 │ │ full_backward_hooks, non_full_backward_hooks = [], [] │
│ 1514 │ │ backward_pre_hooks = [] │
│ │
│ /home/students/kulcsar/anaconda3/envs/software_bubble_updated_pytorch/lib/py │
│ thon3.9/site-packages/accelerate/hooks.py:165 in new_forward │
│ │
│ 162 │ │ │ with torch.no_grad(): │
│ 163 │ │ │ │ output = old_forward(*args, **kwargs) │
│ 164 │ │ else: │
│ ❱ 165 │ │ │ output = old_forward(*args, **kwargs) │
│ 166 │ │ return module._hf_hook.post_forward(module, output) │
│ 167 │ │
│ 168 │ module.forward = new_forward │
│ │
│ /home/students/kulcsar/anaconda3/envs/software_bubble_updated_pytorch/lib/py │
│ thon3.9/site-packages/torch/nn/modules/sparse.py:162 in forward │
│ │
│ 159 │ │ │ │ self.weight[self.padding_idx].fill_(0) │
│ 160 │ │
│ 161 │ def forward(self, input: Tensor) -> Tensor: │
│ ❱ 162 │ │ return F.embedding( │
│ 163 │ │ │ input, self.weight, self.padding_idx, self.max_norm, │
│ 164 │ │ │ self.norm_type, self.scale_grad_by_freq, self.sparse) │
│ 165 │
│ │
│ /home/students/kulcsar/anaconda3/envs/software_bubble_updated_pytorch/lib/py │
│ thon3.9/site-packages/torch/nn/functional.py:2238 in embedding │
│ │
│ 2235 │ │ # torch.embedding_renorm_ │
│ 2236 │ │ # remove once script supports set_grad_enabled │
│ 2237 │ │ _no_grad_embedding_renorm_(weight, input, max_norm, norm_type │
│ ❱ 2238 │ return torch.embedding(weight, input, padding_idx, scale_grad_by_ │
│ 2239 │
│ 2240 │
│ 2241 def embedding_bag( │
╰──────────────────────────────────────────────────────────────────────────────╯
RuntimeError: CUDA error: device-side assert triggered
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
```
Does anyone have an idea what might be the issue? Any help would be greatly appreciated!
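As the comments on this issue note, the crash came from adding a token to the tokenizer without resizing the model's embedding matrix, so the new token id indexed past the embedding table. A minimal sketch of the fix (the pad token here is only an example):
```python
# After adding any tokens, grow the embedding matrix so the new ids are valid.
num_added = tokenizer.add_special_tokens({"pad_token": "[PAD]"})
if num_added > 0:
    model.resize_token_embeddings(len(tokenizer))
```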
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24698/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24698/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24697
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24697/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24697/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24697/events
|
https://github.com/huggingface/transformers/issues/24697
| 1,792,101,361 |
I_kwDOCUB6oc5q0Uvx
| 24,697 |
`Trainer` class on Mac uses `accelerate` to incorrectly set MPS device
|
{
"login": "alex2awesome",
"id": 3460632,
"node_id": "MDQ6VXNlcjM0NjA2MzI=",
"avatar_url": "https://avatars.githubusercontent.com/u/3460632?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alex2awesome",
"html_url": "https://github.com/alex2awesome",
"followers_url": "https://api.github.com/users/alex2awesome/followers",
"following_url": "https://api.github.com/users/alex2awesome/following{/other_user}",
"gists_url": "https://api.github.com/users/alex2awesome/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alex2awesome/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alex2awesome/subscriptions",
"organizations_url": "https://api.github.com/users/alex2awesome/orgs",
"repos_url": "https://api.github.com/users/alex2awesome/repos",
"events_url": "https://api.github.com/users/alex2awesome/events{/privacy}",
"received_events_url": "https://api.github.com/users/alex2awesome/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"EDIT: \r\n\r\nAdding the flag `--no_cuda` in `TrainingArgs` takes care of this issue. \r\n\r\nI suggest making it something like `--use_cpu` or `--no_cuda_or_mps`, because i totally didn't realize it could be used for this purpose and had to dive to the very bottom of the code-base to see.",
"I am not really an expert on this topic, but do you think #24660 will help?",
"If not, a reproducible script is indeed necessary, please 🙏 ",
"I have a similar issue as the Trainer was automatically using the MPS backend and couldn't figure out a way of running on CPU. (The MPS backend is missing some operations, so no all models runs!).\r\nUsing `no_cuda=True` in the `TrainerArgs` solved the issue! pretty unintuitive!",
"cc @SunMarc Maybe we could deprecate the `no_cuda` flag to replace it with `use_cpu`, which would be more intuitive?",
"Yes, we should do that since we will automatically set the device to `cuda` or `mps` if available. Furthermore, `use_mps_device` in `TrainingArgs` is also deprecated. I will open a PR for that. The other issue is that we don't dispatch the data in the right device. @muellerzr, I see that we don't move the `dataloader` to a specific device in [`get_train_dataloader`](https://github.com/huggingface/transformers/blob/main/src/transformers/trainer.py#L3834-L3836). Is this something we want to add ? I can open a PR for it if needed. ",
"@SunMarc accelerate does this automatically in its dataloader/with the Accelerator, so this should be already happening. If not, it's something we need to fix in accelerate",
"There is also another issue that the default device is `mps` but the data is not moved to `mps`, so the Trainer raises an error, minimal code:\r\n```python \r\nfrom transformers import AutoTokenizer\r\nfrom datasets import load_dataset\r\nfrom transformers import AutoModelForCausalLM\r\nfrom transformers import Trainer, TrainingArguments\r\n\r\nmodel_checkpoint = \"roneneldan/TinyStories-33M\"\r\nds = load_dataset('MohamedRashad/characters_backstories')[\"train\"]\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(model_checkpoint, use_fast=True)\r\ntokenizer.pad_token = tokenizer.eos_token\r\n\r\ndef tokenize_function(example):\r\n merged = example[\"text\"] + \" \" + example[\"target\"]\r\n batch = tokenizer(merged, padding='max_length', truncation=True, max_length=128)\r\n batch[\"labels\"] = batch[\"input_ids\"].copy()\r\n return batch\r\n\r\ntokenized_dataset = ds.map(tokenize_function, remove_columns=[\"text\", \"target\"])\r\n\r\nmodel = AutoModelForCausalLM.from_pretrained(model_checkpoint);\r\n\r\ntraining_args = TrainingArguments(\r\n num_train_epochs=1,\r\n output_dir=\".\",\r\n # use_mps_device=True,\r\n)\r\n\r\ntrainer = Trainer(\r\n model=model,\r\n args=training_args,\r\n train_dataset=tokenized_dataset,\r\n)\r\n\r\nprint(trainer.accelerator.device)\r\n# device(\"mps\")\r\n\r\n# Let's train!\r\ntrainer.train()\r\n```\r\n\r\nYou can solve the issue by explicitly using `use_mps_device=True` or `no_cuda=True` on the `TrainingArgs`\r\n\r\nPD: I am on latest of `transformers`, `datasets` and `accelerate` (pip install -U ....)\r\n",
"Hey @tcapelle , thanks for the snippet. It helps a lot to solve the issue. I was able to reproduce the bug on the latest version of `transformers`. This bug is fixed on the main branch of `transformers` that you can download with `pip install https://github.com/huggingface/transformers.git`. Let me know if it works on your side. ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,688 | 1,692 | 1,692 |
NONE
| null |
### System Info
transformers==4.30.2
Mac 2019, Ventura 13.4
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
ISSUE: I am running a generic model training using Trainer on my mac, locally. My model is being moved to MPS, but my tensors are staying on CPU.
I can provide more details about my script, but I suspect this is a general library problem. Here are the lines of code I discovered:
When the [accelerator is instantiated in the Trainer class](https://github.com/huggingface/transformers/blob/main/src/transformers/trainer.py#L3834-L3836), it doesn't get passed any user-specific arguments, like [this from TrainingArgs for e.g](https://github.com/huggingface/transformers/blob/main/src/transformers/training_args.py#L586-L587) to give the user control over which device to use. As a result, when running locally on Mac, Accelerate does a lot of inference about which device we want to use, and [moves the model to `self.device`](https://github.com/huggingface/accelerate/blob/main/src/accelerate/accelerator.py#L1289) in the non-distributed setting. I'm not sure yet how `self.device` is instantiated in Accelerate, however, `Trainer` doesn't natively move my data to `mps`, so my script is crashing.
### Expected behavior
Ideally, I'd have a flag I can pass into `Trainer` so I can avoid MPS if I don't want it and just stick with CPU.
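As noted in the comments, the existing `no_cuda` flag already forces CPU (it skips MPS as well); a rough sketch:
```python
from transformers import TrainingArguments

# Workaround on transformers 4.30.x: `no_cuda=True` keeps the model and the
# batches on CPU instead of MPS.
training_args = TrainingArguments(
    output_dir="out",
    no_cuda=True,
)
```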
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24697/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24697/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24696
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24696/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24696/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24696/events
|
https://github.com/huggingface/transformers/pull/24696
| 1,792,070,842 |
PR_kwDOCUB6oc5U1pDn
| 24,696 |
Removing unnecessary `device=device` in modeling_llama.py
|
{
"login": "Liyang90",
"id": 17171233,
"node_id": "MDQ6VXNlcjE3MTcxMjMz",
"avatar_url": "https://avatars.githubusercontent.com/u/17171233?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Liyang90",
"html_url": "https://github.com/Liyang90",
"followers_url": "https://api.github.com/users/Liyang90/followers",
"following_url": "https://api.github.com/users/Liyang90/following{/other_user}",
"gists_url": "https://api.github.com/users/Liyang90/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Liyang90/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Liyang90/subscriptions",
"organizations_url": "https://api.github.com/users/Liyang90/orgs",
"repos_url": "https://api.github.com/users/Liyang90/repos",
"events_url": "https://api.github.com/users/Liyang90/events{/privacy}",
"received_events_url": "https://api.github.com/users/Liyang90/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,688 | 1,689 | 1,689 |
CONTRIBUTOR
| null |
Removing unnecessary `device=device`
# What does this PR do?
Removing unnecessary `device=device` in the second argument to `torch.full`.
`torch.full` expects a scalar for the second argument: https://pytorch.org/docs/stable/generated/torch.full.html
So if a device tensor is passed to it, the tensor needs to be synced and sent to CPU first. On TPU, this blocks the tracing of the current iteration that should be overlapped with graph execution of the previous iteration.
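An illustrative comparison of the two call patterns (not the exact modeling_llama code):
```python
import torch

# Illustration only: passing a tensor as torch.full's fill value forces a
# device->host sync to read the scalar, while a plain Python scalar does not.
dtype = torch.float32
device = "cpu"  # a CUDA or XLA device in the real setting

min_value = torch.finfo(dtype).min
with_sync = torch.full((4, 4), torch.tensor(min_value, device=device), device=device)
no_sync = torch.full((4, 4), min_value, device=device)
```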
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24696/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24696/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24696",
"html_url": "https://github.com/huggingface/transformers/pull/24696",
"diff_url": "https://github.com/huggingface/transformers/pull/24696.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24696.patch",
"merged_at": 1689240622000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24695
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24695/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24695/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24695/events
|
https://github.com/huggingface/transformers/issues/24695
| 1,792,044,266 |
I_kwDOCUB6oc5q0Gzq
| 24,695 |
Time Series Transformer - Dynamic Categorical Features
|
{
"login": "guyko81",
"id": 10399767,
"node_id": "MDQ6VXNlcjEwMzk5NzY3",
"avatar_url": "https://avatars.githubusercontent.com/u/10399767?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/guyko81",
"html_url": "https://github.com/guyko81",
"followers_url": "https://api.github.com/users/guyko81/followers",
"following_url": "https://api.github.com/users/guyko81/following{/other_user}",
"gists_url": "https://api.github.com/users/guyko81/gists{/gist_id}",
"starred_url": "https://api.github.com/users/guyko81/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/guyko81/subscriptions",
"organizations_url": "https://api.github.com/users/guyko81/orgs",
"repos_url": "https://api.github.com/users/guyko81/repos",
"events_url": "https://api.github.com/users/guyko81/events{/privacy}",
"received_events_url": "https://api.github.com/users/guyko81/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] |
open
| false | null |
[] |
[
"cc @kashif ",
"@guyko81 yes sure! I would be happy to help you get this done. I never found a good example of dynamic categorical features, so if you have some sample example that would be really helpful. \r\n\r\nWe can assume that the dataset has a key e.g. \r\n\r\n```py\r\ndynamic_static_categorical = [ [0, 2, 555], [23, 5, 66], ... [33, 4, 54]]\r\n```\r\n\r\nwhere we have a list of categories for each time point where the len of this array will be the length of the target values array in the time dim.\r\n\r\nNext we will need to specify the number of dynamic cat. features (3) in the example above and the cardinalities and dims of the corresponding features:\r\n\r\n```\r\ndynamic_cat_card = [50, 10, 1000]\r\ndynamic_cat_dimns = [12, 16, 32]\r\n```\r\n\r\nOnce we have that done on the config side we can just add a corresponding `nn.Embedding` and concat the outputs to the input vector. If you open a PR please CC me and then i can help out! \r\n\r\nThank you!\r\n",
"@kashif I have created a pull request https://github.com/huggingface/transformers/pull/24712\r\nStill need to test it first, but I wanted you to have a look"
] | 1,688 | 1,688 | null |
NONE
| null |
### Feature request
I would like to have a Dynamic Categorical Feature Embedding option in TimeSeriesTransformerConfig
### Motivation
I didn't see any option in `TimeSeriesTransformerConfig` where I could define an embedding for a dynamic categorical feature. I'm working with sales data, and holidays are an important driver of sales, so all of my models handle holidays with a dynamic embedding. Does the Time Series Transformer handle this too, and am I just missing something?
### Your contribution
Happy to help, but would need some guidance on how it's handled currently.
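For illustration, a minimal sketch of what embedding dynamic categorical features could look like, assuming hypothetical cardinality/dimension lists (these names are illustrative and not part of the current `TimeSeriesTransformerConfig`); the embedded output would then be concatenated with the other time-varying inputs:
```python
import torch
import torch.nn as nn

# Hypothetical cardinalities / embedding dims for three dynamic categorical
# features (e.g. holiday id, promotion type, weekday).
dynamic_cat_cardinalities = [50, 10, 1000]
dynamic_cat_embedding_dims = [12, 16, 32]

embedders = nn.ModuleList(
    [nn.Embedding(card, dim) for card, dim in zip(dynamic_cat_cardinalities, dynamic_cat_embedding_dims)]
)

# (batch, time, num_dynamic_cat_features) integer ids, one row per time step
dynamic_categorical_features = torch.randint(0, 10, (4, 24, 3))

# Embed each feature over the time axis and concatenate along the last dim,
# so the result can be concatenated with the other time-varying inputs.
embedded = torch.cat(
    [emb(dynamic_categorical_features[..., i]) for i, emb in enumerate(embedders)],
    dim=-1,
)
print(embedded.shape)  # torch.Size([4, 24, 60])
```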
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24695/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24695/timeline
| null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/24694
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24694/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24694/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24694/events
|
https://github.com/huggingface/transformers/issues/24694
| 1,791,898,375 |
I_kwDOCUB6oc5qzjMH
| 24,694 |
Make correct padding for text generation with GPT-NEO
|
{
"login": "junoriosity",
"id": 5286536,
"node_id": "MDQ6VXNlcjUyODY1MzY=",
"avatar_url": "https://avatars.githubusercontent.com/u/5286536?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/junoriosity",
"html_url": "https://github.com/junoriosity",
"followers_url": "https://api.github.com/users/junoriosity/followers",
"following_url": "https://api.github.com/users/junoriosity/following{/other_user}",
"gists_url": "https://api.github.com/users/junoriosity/gists{/gist_id}",
"starred_url": "https://api.github.com/users/junoriosity/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/junoriosity/subscriptions",
"organizations_url": "https://api.github.com/users/junoriosity/orgs",
"repos_url": "https://api.github.com/users/junoriosity/repos",
"events_url": "https://api.github.com/users/junoriosity/events{/privacy}",
"received_events_url": "https://api.github.com/users/junoriosity/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"@mzamini92 Many thanks for getting back to me.\r\n\r\nI know that the padding tokens should be ignored, when doing the generation. (However, it will be important for batch processing if there are multiple inputs).\r\n\r\nWhat is strange is that, if I follow the approach with different models the output makes sense for both approaches, yet here the second approach is not working for gpt-neo-125m.",
"Hey @junoriosity 👋 \r\n\r\n> if I follow the approach with different models the output makes sense for both approaches, yet here the second approach is not working for gpt-neo-125m.\r\n\r\nMasking is not a perfect operation, as it adds a very large negative number to the attention scores. While the impact of masked tokens is very very small, it still exists. In some cases, it may change an output token at generation time, which may derail (or improve!) the generation process.\r\n\r\nHave a go with other prompts and other model sizes for `GPTNeo`. Unless this phenomenon consistently happens, I'd say there is nothing to worry about :)",
"Hey @gante \r\n\r\nI tried the second approach with `EleutherAI/gpt-neo-1.3B` and got\r\n\r\n```\r\nHello, my dog is cute dog is cute is cute is cute is cute is\r\n```\r\n\r\nso no improvement ...",
"@junoriosity we would need a much larger sample size to conclude it is not working correctly. And we can only afford to look deeper into the issue after we confirm that it is indeed an issue :)",
"Hi @junoriosity \r\nIs there any reason to force the padding_side to be `left` ? removing the lines \r\n\r\n```python\r\ntokenizer.padding_side = 'left'\r\ntokenizer.truncation_side = 'left'\r\n```\r\n\r\nLeads to \"better\" output (the default `padding_side` is `right` for that model):\r\n```bash\r\n>>> Hello, my dog is cute a little dog. He is so cute cute cute\r\n>>> Hello, my dog is cuteh! She, and I have been in a\r\n```\r\n\r\nMaybe there is something wrong in the way we compute the position ids. Consider the case where `padding_side=left` and assume your text has 10 tokens and you want to add padding tokens on the first 20 tokens. \r\n\r\nCurrently:\r\n\r\n```python\r\n if position_ids is None:\r\n position_ids = torch.arange(past_length, input_shape[-1] + past_length, dtype=torch.long, device=device)\r\n position_ids = position_ids.unsqueeze(0).view(-1, input_shape[-1])\r\n```\r\n\r\nThe position ids will be computed as such regardless the index of the first non-padding token. Maybe this is the culprit @gante - similarly as https://github.com/huggingface/transformers/pull/22382",
"Hi @younesbelkada \r\n\r\neven then there are repetitions etc. \r\n\r\nI tried the same feat with the smallest OPT-125m model and it worked like charm, also for others. The only model that does cause me trouble with that approach is gpt-neo. \r\n\r\nI use this padding-right strategy to align a batch of sequences to the right. I thought this makes most sense. So far at least it works quite nicely for all other models.\r\n\r\n@gante 1.3 B is already the second largest, the largest being 2.7 B. Hence, there is no way to get much beyond that and I also doubt that just doubling the size will change much.",
"@younesbelkada oh definitely, the position ids should be computed from the attention mask in `prepare_inputs_for_generation` (like in the PR you linked)! That could be the cause for the mismatches",
"@gante @younesbelkada Okay, since I here from these things for the first time, could you\r\n\r\n- tell me what it means\r\n- how I could use it to solve the issue?",
"@junoriosity it appears there is no issue at all at the end. GPTNeo seems to already support the creation of correct position ids [here](https://github.com/huggingface/transformers/blob/main/src/transformers/models/gpt_neo/modeling_gpt_neo.py#L682-L700)\r\n\r\nif you modify your script as follows:\r\n\r\n```python\r\nfrom transformers import AutoTokenizer, GPTNeoForCausalLM\r\nimport torch\r\nfrom torch.nn import functional as F\r\n\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(\"EleutherAI/gpt-neo-125m\")\r\nmodel = GPTNeoForCausalLM.from_pretrained(\"EleutherAI/gpt-neo-125m\")\r\n\r\ninputs = tokenizer(\"Hello, my dog is cute\", return_tensors=\"pt\")\r\n\r\ntokenizer.pad_token = tokenizer.eos_token\r\ntokenizer.padding_side = 'left'\r\ntokenizer.truncation_side = 'left'\r\nno_items_for_history = 30\r\n\r\ninputs = tokenizer.encode_plus(\"Hello, my dog is cute\", max_length=no_items_for_history, padding='max_length', truncation=True, return_tensors=\"pt\")\r\n\r\ninput_ids = inputs['input_ids']\r\nattention_mask = inputs['attention_mask']\r\n\r\noutputs = model.generate(input_ids=input_ids, attention_mask=attention_mask, max_new_tokens=40)\r\nprint(tokenizer.decode(outputs[0], skip_special_tokens=True))\r\n>>> Hello, my dog is cute and I'm going to give you some tips on how to get your dog to sleep.\r\n\r\nI'm going to give you some tips on how to get your dog to sleep.\r\n```\r\nNow in your case you need to properly call `prepare_inputs_for_generation` as @gante suggested to create the correct position ids and pass it to your model during forward pass. Let me get back to you with the updated script and explanation",
"@younesbelkada You are right, that this is a very elegant solution. :)\n\nHowever, I would like this \"step-by-step\" solution to extract some information about the state of the generation.\n\nHence, is there any possibilty to do it that way? Again, for other models like OPT-125m it was no problem as well.",
"Hi @junoriosity \r\n\r\nGoing back to the explanation; let's first try to understand what is the purpose of `position_ids`. These ids indicates the model the positional information of the input tokens. This information is extremely important for the model to capture the positional information of the input tokens. Here: https://github.com/huggingface/transformers/blob/abaca9f9432a84cfaa95531de4c72334f38a42f2/src/transformers/models/gpt_neo/modeling_gpt_neo.py#L582 the model extracts the so called \"positional embeddings\" that are added later on together with the input embeddings, to produce the first hidden states here: https://github.com/huggingface/transformers/blob/abaca9f9432a84cfaa95531de4c72334f38a42f2/src/transformers/models/gpt_neo/modeling_gpt_neo.py#L583\r\n\r\nAs you can see above, if no `position_ids` is passed to the model, it will create a new one: https://github.com/huggingface/transformers/blob/abaca9f9432a84cfaa95531de4c72334f38a42f2/src/transformers/models/gpt_neo/modeling_gpt_neo.py#L551 with the `torch.arange(xxx)` method. \r\n\r\nBlindly creating position ids like that can lead to silent bugs (as described on your issue) - for a classic input (no padding involved) there is no problem at all using `torch.arange(xxx)` as there is no special token we want the model to ignore during its forward pass.\r\n\r\nNow assume your input is (consider `[PAD]` as the padding token produced by the tokenizer):\r\n```python\r\n\"[PAD] [PAD] Hello my name is\"\r\n``` \r\n\r\nTherefore the (dummy) input ids would look like (assuming `0` is the pad token id):\r\n\r\n```python\r\n[ 0 , 0, 45, 32, 2, 86, ..]\r\n```\r\n\r\nIf the position_ids are blindly created, it will result in the following :\r\n\r\n```python\r\ntorch.Tensor([0, 1, 2, 3, 4, 5])\r\n```\r\n\r\nthis is not correct and leads to wrong computation, the attention mask will ignore the first two tokens, however, the first non-padding token will have a positional ID of `2`, which in fact should be 0 - therefore one always needs to calculate separately the position ids before each generation step to handle corner cases such as the one your are facing.\r\n\r\n`.generate()` API does everything for you under the hood. 
Before each forward pass, it calls a method called `prepare_inputs_for_generation` that ideally handles all these scenarios: https://github.com/huggingface/transformers/blob/abaca9f9432a84cfaa95531de4c72334f38a42f2/src/transformers/generation/utils.py#L2359 including correctly shifting the position ids if any.\r\n\r\nGoing back to your case, the fix is to prepare the model's input before the generation step 1, then at each generation step iteratively call `model.prepare_inputs_for_generation()` with the correct arguments and correctly pass the produced `position_ids`\r\n\r\nChanging the script to the one below:\r\n\r\n<details><summary>Working script</summary>\r\n\r\n```python\r\nfrom transformers import AutoTokenizer, GPTNeoForCausalLM\r\nimport torch\r\nfrom torch.nn import functional as F\r\n\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(\"EleutherAI/gpt-neo-125m\")\r\nmodel = GPTNeoForCausalLM.from_pretrained(\"EleutherAI/gpt-neo-125m\")\r\n\r\ninputs = tokenizer(\"Hello, my dog is cute\", return_tensors=\"pt\")\r\n\r\ntokenizer.pad_token = tokenizer.eos_token\r\ntokenizer.padding_side = 'left'\r\ntokenizer.truncation_side = 'left'\r\nno_items_for_history = 30\r\n\r\ninputs = tokenizer.encode_plus(\"Hello, my dog is cute\", max_length=no_items_for_history, padding='max_length', truncation=True, return_tensors=\"pt\")\r\n\r\ninput_ids = inputs['input_ids']\r\nattention_mask = inputs['attention_mask']\r\n\r\nposition_ids = model.prepare_inputs_for_generation(input_ids, attention_mask=attention_mask, past_key_values=None, position_ids=None)[\"position_ids\"]\r\n\r\nfor i in range(50):\r\n if i == 0:\r\n outputs = model(input_ids=input_ids, attention_mask=attention_mask, position_ids=position_ids)\r\n past_key_values = None\r\n else:\r\n outputs = model(**next_stage_input)\r\n loss = outputs.loss\r\n logits = outputs.logits[:, -1, :]\r\n\r\n logits = F.softmax(logits, dim=1)\r\n\r\n topk_values, topk_indices = torch.topk(logits, 5)\r\n inputs_in_topk = torch.multinomial(topk_values, num_samples=1, replacement=True)\r\n new_input_ids = torch.gather(topk_indices, 1, inputs_in_topk)\r\n\r\n past_key_values = outputs.past_key_values\r\n attention_mask = torch.concat((attention_mask, torch.ones(1, 1).to(attention_mask.device)), dim=1)\r\n input_ids = torch.concat((input_ids, new_input_ids), dim=1)\r\n\r\n next_stage_input = model.prepare_inputs_for_generation(input_ids, attention_mask=attention_mask, past_key_values=None, position_ids=None)\r\n\r\nprint(tokenizer.decode(input_ids.tolist()[0], skip_special_tokens=True))\r\n```\r\n</details>\r\n\r\nseems to produce correct output. Let us know with @gante if you have more questions\r\n\r\nThe reason it works correctly for OPT is that the positional embeddings are computed directly using the attention mask, which indicates where is the first non-padding token: https://github.com/huggingface/transformers/blob/abaca9f9432a84cfaa95531de4c72334f38a42f2/src/transformers/models/opt/modeling_opt.py#L653\r\n\r\nAlso make sure to use the latest version of `transformers`:\r\n\r\n```bash\r\npip install --upgrade transformers\r\n```",
"@younesbelkada @gante Wow, this makes things a lot better by now. 🤗\r\n\r\nHowever, please correct me if I am wrong, but we do not use `past_key_values`, which will force us to do an enormous amount of calculation again and again.\r\n\r\nI tried some things to make use of it, but I did not succeed. Do you have an idea how to make the above code work while using `past_key_values` for speeding up the code?",
"@younesbelkada @gante I found how to solve it, you have to enter `position_id` into model, but this becomes \r\n```\r\nposition_ids = position_ids[:, -1:] + 1\r\n```\r\nThen things work like a charm.\r\n\r\nIn any case, many thanks for all your support. Without your effort, this progress wouldn't have been possible. 🤗",
"That's great to hear ! Thanks very much @junoriosity ",
"Also I believe we should support this, same way as it was done [here](https://github.com/raghavanone/transformers/commit/3d42b725fe357d45fe4f745e1bf700a09f06c1cc). I'll open a PR for both as the changes were reverted because TF version were not updated! I'll take care of it 😉 ",
"@ArthurZucker That is awesome. Just out of curiosity: How long do you think it will take until a new `transformer` version with the changes is realeased? :)",
"Oups as @younes mentioned, the automatic creation of position ids seems to be correct for GPTNeo (not for GPT2). \r\nTLDR; position ids should be created and correctly support past key values and use of use_cache. If this is not, the case then should fix it! ",
"@ArthurZucker Okay, could you perhaps outline with an example how you mean it? I am a bit lost due to my lack of experience for the specific requirements.",
"Hi @ArthurZucker could you perhaps get back to me on that matter? 🤗\r\n\r\nPersonally, I would appreciate it a lot if I could handle GPTNeo just like the OPT models, as this would facilitate my life a lot.",
"Hey! really sorry had a bit of a sprint this week! Will get back to you soon 😃 ",
"Hey @junoriosity, what I meant is that in this case, the problem does not seem to be from transformers:\r\n\r\nhttps://github.com/huggingface/transformers/blob/f092997ca669750d4f32ada127b2624bd450aee5/src/transformers/models/gpt_neo/modeling_gpt_neo.py#L682-L707 \r\n\r\n\r\nIn the above snippet, we can see that the attention mask is correctly taken into account to create the positional ids. I just needed to check whether this was the case or not.\r\n\r\nHope this answers your final questions! ",
"@ArthurZucker So long story short:\n\nI can proceed without position_ids for `OPT`, but have to implement it for `GPT-NEO`.\n\nIs that a correct summary?",
"If what you are doing is:\r\n\r\n> Then for both approaches, I iteratively loop through everything in order generate the sequence on token at a time.\r\n\r\nthen you just need to make sure your code is adapted yes. \r\nThe other solution is for us to include the correct position id creation in the Model class instead of when `prepare_inputs_for_generation`. This might be better, wdyt @gante (gpt2 also needs this as it was reverted since tf does not use it) ",
"@junoriosity @ArthurZucker I'd favor adding it in `prepare_inputs_for_generation`. \r\n\r\n(Adding it in the model class is more elegant, but most models do it in `prepare_inputs_for_generation`. Keeping the same structure makes maintenance easier :) )",
"@gante Terrific, I am always a bit impatient, but is there a realistic time range until when this would be included in the library? 🤗",
"@junoriosity our bandwidth to retroactively add features/fix bugs runs short for the foreseeable months. My suggestion would be to have a go at it and open a PR :) ",
"Sure, but for training could this not be a problem? 😉 ",
"cc @gante WDYT about making sure this is supported when calling a `model` this way training correctly computes `positions_ids`? Otherwise they are just wrong and we don't raise anything -> silent bug. ",
"The catch is that in some common cases, we can't infer the position IDs at train time -- e.g. if we concatenate multiple rows to maximize utilization (as in [this example](https://huggingface.co/docs/transformers/tasks/language_modeling#preprocess)). \r\n\r\nIMO, other than automating the inference-time case for ease of use, I don't see how we can do it reliably 😞 ",
"@ArthurZucker @gante I have encountered a new issue and I do not know how so solve it, so perhaps you can help with your in-depth knowledge. 🤗\r\n\r\nWhen I use the following `inputs_ids` \r\n\r\n```\r\ntensor([[50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256,\r\n 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256,\r\n 50256, 50256, 50256, 1532, 356, 1949],\r\n [50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256,\r\n 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256,\r\n 50256, 50256, 50256, 50256, 50256, 1212]])\r\n```\r\n\r\nand `attention_mask` \r\n\r\n```\r\ntensor([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1],\r\n [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1]])\r\n```\r\n\r\nI can use as `position_ids` \r\n\r\n```\r\ntensor([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 2],\r\n [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]])\r\n```\r\n\r\nand then have success with the iterated approach as outlined above, i.e.,\r\n\r\n```\r\nmodel(input_ids=input_ids, attention_mask=attention_mask, position_ids=position_ids, past_key_values=None)\r\n```\r\n\r\nthen I can use the iteration as outlined above with the `past_key_values` from the model. The result is sensible output like\r\n\r\n\r\n```\r\n[1] to get into the new programming language of the past, we have a lot of\r\n[2] is a new idea for the idea for the idea about the idea about the idea\r\n```\r\n\r\nHowever, if I crop it further (and such things might happen in our use case), then the different variables look as follows:\r\n\r\n`inputs_ids` \r\n\r\n```\r\ntensor([[50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256,\r\n 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256,\r\n 50256, 50256, 50256, 1532, 356],\r\n [50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256,\r\n 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256,\r\n 50256, 50256, 50256, 50256, 50256]])\r\n```\r\n\r\nand `attention_mask` \r\n\r\n```\r\ntensor([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1],\r\n [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]])\r\n```\r\n\r\nand `position_ids` \r\n\r\n```\r\ntensor([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1],\r\n [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]])\r\n```\r\n\r\nIf I use the iterated approach as outlined above, i.e.,\r\n\r\n```\r\nmodel(input_ids=input_ids, attention_mask=attention_mask, position_ids=position_ids, past_key_values=None)\r\n```\r\n\r\nwe can use `past_key_values` from the model for each new round, exactly analogous to the other input. However, the result is not a sensible output and looks like\r\n\r\n\r\n```\r\n[1] try to get into the new programming language, we’ve got to make sure\r\n[2] This is a new idea for the idea idea idea idea idea idea idea idea idea idea\r\n```\r\n\r\nHence, the first one is still working, but for the second, where the `attention_mask` and `position_ids` were both completely 0, it just produces nothing sensible anymore.\r\n\r\nDo you have an idea how I could still make it work? 🤗\r\n\r\n"
] | 1,688 | 1,697 | 1,697 |
NONE
| null |
### System Info
- `transformers` version: 4.28.1
- Platform: macOS-13.2.1-x86_64-i386-64bit
- Python version: 3.10.8
- Huggingface_hub version: 0.13.4
- Safetensors version: not installed
- PyTorch version (GPU?): 2.0.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
@ArthurZucker @younesbelkada @gante
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
In order to generate text sequences with `GPT-NEO`, I first load all the relevant components for sequence generation with `GPTNeoForCausalLM`.
```
from transformers import AutoTokenizer, GPTNeoForCausalLM
import torch
from torch.nn import functional as F
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-125m")
model = GPTNeoForCausalLM.from_pretrained("EleutherAI/gpt-neo-125m")
```
There are two ways in which I can generate `input_ids` and `attention_mask`.
1. I take the standard approach without padding
```
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
```
2. I use padding instead
```
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = 'left'
tokenizer.truncation_side = 'left'
no_items_for_history = 30
inputs = tokenizer.encode_plus("Hello, my dog is cute", max_length=no_items_for_history, padding='max_length', truncation=True, return_tensors="pt")
```
Then, for both approaches, I iteratively loop through everything in order to generate the sequence one token at a time.
```
input_ids = inputs['input_ids']
attention_mask = inputs['attention_mask']
for i in range(10):
if i == 0:
outputs = model(input_ids=input_ids, attention_mask=attention_mask, labels=inputs["input_ids"])
else:
outputs = model(input_ids=new_input_ids, attention_mask=attention_mask, past_key_values=past_key_values)
loss = outputs.loss
logits = outputs.logits[:, -1, :]
logits = F.softmax(logits, dim=1)
topk_values, topk_indices = torch.topk(logits, 5)
inputs_in_topk = torch.multinomial(topk_values, num_samples=1, replacement=True)
new_input_ids = torch.gather(topk_indices, 1, inputs_in_topk)
past_key_values = outputs.past_key_values
attention_mask = torch.concat((attention_mask, torch.ones(1, 1).to(attention_mask.device)), dim=1)
input_ids = torch.concat((input_ids, new_input_ids), dim=1)
print(tokenizer.decode(input_ids.tolist()[0], skip_special_tokens=True))
```
### Expected behavior
**Here is the problem:**
The starting `input_ids` and `attention_mask` for the first approach look like:
```
input_ids = tensor([[15496, 11, 616, 3290, 318, 13779]])
attention_mask = tensor([[1, 1, 1, 1, 1, 1]])
```
The output looks very sensible:
```
Hello, my dog is cute! This post is about dogs and cats
```
However, for the second approach the starting `input_ids` and `attention_mask` look like
```
input_ids = tensor([[50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 15496, 11, 616, 3290, 318, 13779]])
attention_mask = tensor([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1]])
```
and it always generates nonsense like
```
Hello, my dog is cute pet is my pet pet pet is my dog is
```
**Question:** Do you know how to make it work with padding, i.e., the second approach?
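As worked out in the comment thread on this issue, one way to make the padded (second) approach behave is to derive `position_ids` from the attention mask and pass them to the model explicitly, or to build the inputs with `model.prepare_inputs_for_generation`. A minimal sketch of that derivation, following the convention the thread points to in GPT-Neo's `prepare_inputs_for_generation` (mask values and shapes here are illustrative):
```python
import torch

# attention_mask: 1 for real tokens, 0 for the left padding
attention_mask = torch.tensor([[0, 0, 0, 0, 1, 1, 1, 1, 1, 1]])

# Derive position ids from the mask instead of torch.arange, so the first
# non-padding token gets position 0 even with left padding.
position_ids = attention_mask.long().cumsum(-1) - 1
position_ids.masked_fill_(attention_mask == 0, 1)
print(position_ids)  # tensor([[1, 1, 1, 1, 0, 1, 2, 3, 4, 5]])

# On later steps, when feeding only the new token together with
# past_key_values, only its position is needed, e.g. (as noted in the thread):
next_position_ids = position_ids[:, -1:] + 1
```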
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24694/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24694/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24693
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24693/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24693/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24693/events
|
https://github.com/huggingface/transformers/issues/24693
| 1,791,897,451 |
I_kwDOCUB6oc5qzi9r
| 24,693 |
TF : tensor mismatch error in training with opus100 and t5-small
|
{
"login": "SoyGema",
"id": 24204714,
"node_id": "MDQ6VXNlcjI0MjA0NzE0",
"avatar_url": "https://avatars.githubusercontent.com/u/24204714?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SoyGema",
"html_url": "https://github.com/SoyGema",
"followers_url": "https://api.github.com/users/SoyGema/followers",
"following_url": "https://api.github.com/users/SoyGema/following{/other_user}",
"gists_url": "https://api.github.com/users/SoyGema/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SoyGema/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SoyGema/subscriptions",
"organizations_url": "https://api.github.com/users/SoyGema/orgs",
"repos_url": "https://api.github.com/users/SoyGema/repos",
"events_url": "https://api.github.com/users/SoyGema/events{/privacy}",
"received_events_url": "https://api.github.com/users/SoyGema/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"This looks like a dataset issue, which is not in the scope of `transformers` GitHub pages.\r\n\r\nHowever, if you can provide a full log error + the content of `train_model.py`, we might be able to have a quick look.",
"Hello there @ydshieh . Thanks for your time 🙏🙏\r\nYou can find full script [here](https://github.com/SoyGema/The-Lord-of-The-Words-The-two-frameworks/blob/main/src/models/train_model.py) \r\n\r\nFull Log\r\n\r\n```\r\n07/06/2023 17:59:34 - INFO - __main__ - Training/evaluation parameters TFTrainingArguments(\r\n_n_gpu=-1,\r\nadafactor=False,\r\nadam_beta1=0.9,\r\nadam_beta2=0.999,\r\nadam_epsilon=1e-08,\r\nauto_find_batch_size=False,\r\nbf16=False,\r\nbf16_full_eval=False,\r\ndata_seed=None,\r\ndataloader_drop_last=False,\r\ndataloader_num_workers=0,\r\ndataloader_pin_memory=True,\r\nddp_backend=None,\r\nddp_broadcast_buffers=None,\r\nddp_bucket_cap_mb=None,\r\nddp_find_unused_parameters=None,\r\nddp_timeout=1800,\r\ndebug=[],\r\ndeepspeed=None,\r\ndisable_tqdm=False,\r\ndo_eval=True,\r\ndo_predict=False,\r\ndo_train=True,\r\neval_accumulation_steps=None,\r\neval_delay=0,\r\neval_steps=None,\r\nevaluation_strategy=no,\r\nfp16=False,\r\nfp16_backend=auto,\r\nfp16_full_eval=False,\r\nfp16_opt_level=O1,\r\nfsdp=[],\r\nfsdp_config={'fsdp_min_num_params': 0, 'xla': False, 'xla_fsdp_grad_ckpt': False},\r\nfsdp_min_num_params=0,\r\nfsdp_transformer_layer_cls_to_wrap=None,\r\nfull_determinism=False,\r\ngcp_project=None,\r\ngradient_accumulation_steps=1,\r\ngradient_checkpointing=False,\r\ngreater_is_better=None,\r\ngroup_by_length=False,\r\nhalf_precision_backend=auto,\r\nhub_model_id=None,\r\nhub_private_repo=False,\r\nhub_strategy=every_save,\r\nhub_token=<HUB_TOKEN>,\r\nignore_data_skip=False,\r\ninclude_inputs_for_metrics=False,\r\njit_mode_eval=False,\r\nlabel_names=None,\r\nlabel_smoothing_factor=0.0,\r\nlearning_rate=5e-05,\r\nlength_column_name=length,\r\nload_best_model_at_end=False,\r\nlocal_rank=-1,\r\nlog_level=passive,\r\nlog_level_replica=warning,\r\nlog_on_each_node=True,\r\nlogging_dir=/tmp/tst-translation/runs/Jul06_17-59-34_mbp-de-gema.lan,\r\nlogging_first_step=False,\r\nlogging_nan_inf_filter=True,\r\nlogging_steps=500,\r\nlogging_strategy=steps,\r\nlr_scheduler_type=linear,\r\nmax_grad_norm=1.0,\r\nmax_steps=-1,\r\nmetric_for_best_model=None,\r\nmp_parameters=,\r\nno_cuda=False,\r\nnum_train_epochs=3.0,\r\noptim=adamw_hf,\r\noptim_args=None,\r\noutput_dir=/tmp/tst-translation,\r\noverwrite_output_dir=True,\r\npast_index=-1,\r\nper_device_eval_batch_size=16,\r\nper_device_train_batch_size=16,\r\npoly_power=1.0,\r\nprediction_loss_only=False,\r\npush_to_hub=False,\r\npush_to_hub_model_id=None,\r\npush_to_hub_organization=None,\r\npush_to_hub_token=<PUSH_TO_HUB_TOKEN>,\r\nray_scope=last,\r\nremove_unused_columns=True,\r\nreport_to=['mlflow', 'tensorboard'],\r\nresume_from_checkpoint=None,\r\nrun_name=/tmp/tst-translation,\r\nsave_on_each_node=False,\r\nsave_safetensors=False,\r\nsave_steps=500,\r\nsave_strategy=steps,\r\nsave_total_limit=None,\r\nseed=42,\r\nsharded_ddp=[],\r\nskip_memory_metrics=True,\r\ntf32=None,\r\ntorch_compile=False,\r\ntorch_compile_backend=None,\r\ntorch_compile_mode=None,\r\ntorchdynamo=None,\r\ntpu_metrics_debug=False,\r\ntpu_name=None,\r\ntpu_num_cores=None,\r\ntpu_zone=None,\r\nuse_ipex=False,\r\nuse_legacy_prediction_loop=False,\r\nuse_mps_device=False,\r\nwarmup_ratio=0.0,\r\nwarmup_steps=0,\r\nweight_decay=0.0,\r\nxla=False,\r\nxpu_backend=None,\r\n)\r\n07/06/2023 17:59:35 - INFO - datasets.info - Loading Dataset Infos from /Users/gema/.cache/huggingface/modules/datasets_modules/datasets/opus100/256f3196b69901fb0c79810ef468e2c4ed84fbd563719920b1ff1fdc750f7704\r\n07/06/2023 17:59:35 - INFO - datasets.builder - Overwrite dataset info from restored 
data version if exists.\r\n07/06/2023 17:59:35 - INFO - datasets.info - Loading Dataset info from /Users/gema/.cache/huggingface/datasets/opus100/en-ro/0.0.0/256f3196b69901fb0c79810ef468e2c4ed84fbd563719920b1ff1fdc750f7704\r\n07/06/2023 17:59:35 - WARNING - datasets.builder - Found cached dataset opus100 (/Users/gema/.cache/huggingface/datasets/opus100/en-ro/0.0.0/256f3196b69901fb0c79810ef468e2c4ed84fbd563719920b1ff1fdc750f7704)\r\n07/06/2023 17:59:35 - INFO - datasets.info - Loading Dataset info from /Users/gema/.cache/huggingface/datasets/opus100/en-ro/0.0.0/256f3196b69901fb0c79810ef468e2c4ed84fbd563719920b1ff1fdc750f7704\r\n100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 33.24it/s]\r\nloading configuration file t5-small/config.json\r\nModel config T5Config {\r\n \"_name_or_path\": \"t5-small\",\r\n \"architectures\": [\r\n \"T5ForConditionalGeneration\"\r\n ],\r\n \"d_ff\": 2048,\r\n \"d_kv\": 64,\r\n \"d_model\": 512,\r\n \"decoder_start_token_id\": 0,\r\n \"dense_act_fn\": \"relu\",\r\n \"dropout_rate\": 0.1,\r\n \"eos_token_id\": 1,\r\n \"feed_forward_proj\": \"relu\",\r\n \"initializer_factor\": 1.0,\r\n \"is_encoder_decoder\": true,\r\n \"is_gated_act\": false,\r\n \"layer_norm_epsilon\": 1e-06,\r\n \"model_type\": \"t5\",\r\n \"n_positions\": 512,\r\n \"num_decoder_layers\": 6,\r\n \"num_heads\": 8,\r\n \"num_layers\": 6,\r\n \"output_past\": true,\r\n \"pad_token_id\": 0,\r\n \"relative_attention_max_distance\": 128,\r\n \"relative_attention_num_buckets\": 32,\r\n \"task_specific_params\": {\r\n \"summarization\": {\r\n \"early_stopping\": true,\r\n \"length_penalty\": 2.0,\r\n \"max_length\": 200,\r\n \"min_length\": 30,\r\n \"no_repeat_ngram_size\": 3,\r\n \"num_beams\": 4,\r\n \"prefix\": \"summarize: \"\r\n },\r\n \"translation_en_to_de\": {\r\n \"early_stopping\": true,\r\n \"max_length\": 300,\r\n \"num_beams\": 4,\r\n \"prefix\": \"translate English to German: \"\r\n },\r\n \"translation_en_to_fr\": {\r\n \"early_stopping\": true,\r\n \"max_length\": 300,\r\n \"num_beams\": 4,\r\n \"prefix\": \"translate English to French: \"\r\n },\r\n \"translation_en_to_pt\": {\r\n \"early_stopping\": true,\r\n \"max_length\": 300,\r\n \"num_beams\": 4,\r\n \"prefix\": \"translate English to Portuguese: \"\r\n },\r\n \"translation_en_to_ro\": {\r\n \"early_stopping\": true,\r\n \"max_length\": 300,\r\n \"num_beams\": 4,\r\n \"prefix\": \"translate English to Romanian: \"\r\n }\r\n },\r\n \"transformers_version\": \"4.31.0.dev0\",\r\n \"use_cache\": true,\r\n \"vocab_size\": 32128\r\n}\r\n\r\nloading file spiece.model\r\nloading file tokenizer.json\r\nloading file added_tokens.json\r\nloading file special_tokens_map.json\r\nloading file tokenizer_config.json\r\n07/06/2023 17:59:36 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /Users/gema/.cache/huggingface/datasets/opus100/en-ro/0.0.0/256f3196b69901fb0c79810ef468e2c4ed84fbd563719920b1ff1fdc750f7704/cache-107d5d31727344a2.arrow\r\nRunning tokenizer on validation dataset: 0%| | 0/2000 [00:00<?, ? 
examples/s]07/06/2023 17:59:36 - INFO - datasets.arrow_dataset - Caching processed dataset at /Users/gema/.cache/huggingface/datasets/opus100/en-ro/0.0.0/256f3196b69901fb0c79810ef468e2c4ed84fbd563719920b1ff1fdc750f7704/cache-e8cb6f4c7ff7ad3e.arrow\r\nTensorflow: setting up strategy \r\nloading weights file t5-small/model.safetensors\r\nGenerate config GenerationConfig {\r\n \"_from_model_config\": true,\r\n \"decoder_start_token_id\": 0,\r\n \"eos_token_id\": 1,\r\n \"pad_token_id\": 0,\r\n \"transformers_version\": \"4.31.0.dev0\"\r\n}\r\n\r\nLoaded 60,506,624 parameters in the TF 2.0 model.\r\nAll PyTorch model weights were used when initializing TFT5ForConditionalGeneration.\r\n\r\nAll the weights of TFT5ForConditionalGeneration were initialized from the PyTorch model.\r\nIf your task is similar to the task the model of the checkpoint was trained on, you can already use TFT5ForConditionalGeneration for predictions without further training.\r\nYou're using a T5TokenizerFast tokenizer. Please note that with a fast tokenizer, using the `__call__` method is faster than using a method to encode the text followed by a call to the `pad` method to get a padded encoding.\r\nNo loss specified in compile() - the model's internal loss computation will be used as the loss. Don't panic - this is a common way to train TensorFlow models in Transformers! To disable this behaviour please pass a loss argument, or explicitly pass `loss=None` if you do not want your model to compute a loss. You can also specify `loss='auto'` to get the internal loss without printing this info string.\r\n07/06/2023 17:59:38 - INFO - __main__ - ***** Running training *****\r\n07/06/2023 17:59:38 - INFO - __main__ - Num examples = 1000000\r\n07/06/2023 17:59:38 - INFO - __main__ - Num Epochs = 3.0\r\n07/06/2023 17:59:38 - INFO - __main__ - Instantaneous batch size per device = 16\r\n07/06/2023 17:59:38 - INFO - __main__ - Total train batch size = 16\r\n07/06/2023 17:59:38 - INFO - __main__ - Total optimization steps = 187500\r\n2023-07-06 17:59:38.328410: W tensorflow/core/platform/profile_utils/cpu_utils.cc:128] Failed to get CPU frequency: 0 Hz\r\n2023-07-06 17:59:38.353957: W tensorflow/core/framework/dataset.cc:769] Input of GeneratorDatasetOp::Dataset will not be optimized because the dataset does not implement the AsGraphDefInternal() method needed to apply optimizations.\r\nEpoch 1/3\r\n 18/62500 [..............................] 
- ETA: 21:26:35 - loss: 2.2246Traceback (most recent call last):\r\n File \"/Users/gema/Documents/The-Lord-of-The-Words-The-two-frameworks/src/models/train_model.py\", line 730, in <module>\r\n main()\r\n File \"/Users/gema/Documents/The-Lord-of-The-Words-The-two-frameworks/src/models/train_model.py\", line 683, in main\r\n history = model.fit(tf_train_dataset, epochs=int(training_args.num_train_epochs), callbacks=callbacks)\r\n File \"/Users/gema/miniforge3/lib/python3.9/site-packages/keras/utils/traceback_utils.py\", line 70, in error_handler\r\n raise e.with_traceback(filtered_tb) from None\r\n File \"/Users/gema/miniforge3/lib/python3.9/site-packages/tensorflow/python/eager/execute.py\", line 54, in quick_execute\r\n tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,\r\ntensorflow.python.framework.errors_impl.InvalidArgumentError: Graph execution error:\r\n\r\nShape of tensor args_0 [16,128] is not compatible with expected shape [16,64].\r\n [[{{node EnsureShape_1}}]]\r\n [[MultiDeviceIteratorGetNextFromShard]]\r\n [[RemoteCall]]\r\n [[IteratorGetNext]] [Op:__inference_train_function_17297]\r\n```\r\n\r\n\r\nFor the future, I will go with the tailored example for the [forum](https://discuss.huggingface.co/) and maybe shall be redirected there. Let me know if at some point this is a suitable issue for [datasets](https://github.com/huggingface/datasets) in this case. 🧭🗺️\r\nThanks for the time dedicated to this, I really appreciate it, and my apologies for the inconvenience.",
"@Rocketknight1 \r\n\r\nDo you know why\r\n\r\n```python\r\n if \"cols_to_retain\" in list(inspect.signature(dataset._get_output_signature).parameters.keys()):\r\n output_signature, _ = dataset._get_output_signature(\r\n dataset,\r\n batch_size=None,\r\n collate_fn=collate_fn,\r\n collate_fn_args=collate_fn_args,\r\n cols_to_retain=model_inputs,\r\n )\r\n```\r\ngives `output_signature`\r\n```\r\n{'input_ids': TensorSpec(shape=(None, None), dtype=tf.int64, name=None), 'attention_mask': TensorSpec(shape=(None, None), dtype=tf.int64, name=None), 'labels': TensorSpec(shape=(None, 64), dtype=tf.int64, name=None), 'decoder_input_ids': TensorSpec(shape=(None, 64), dtype=tf.int64, name=None)}\r\n```\r\nwhich has a fixed sequence length `64` in `labels` and `decoder_input_ids`?\r\n\r\nFYI: the sequences in `dataset` have different lengths in each element.",
"@ydshieh We actually generate those shapes empirically by grabbing several batches from the dataset, which is not ideal but usually works. Do almost all samples from the dataset have a post-padding decoder_input_ids length of 64, but some don't? That might trigger this issue. If that turns out to be the case, let me know - I've been wary of that code for a while, so this might be a good time to try a fix!",
"Hello there. Thanks again for keeping this issue open. 🙏\r\nManaged to solved the issue .\r\nIm putting it here before closing. Hopefully this can give some light to the question posted. \r\n\r\n#### 1. Script [train_model.py ](https://github.com/SoyGema/The-Lord-of-The-Words-The-two-frameworks/blob/main/src/models/train_model.py#L418)\r\nWhat I understand is that the `preprocess_function` , We call the [tokenizer](https://github.com/SoyGema/The-Lord-of-The-Words-The-two-frameworks/blob/de4a08eda2a2de2695f4e3ed12b571bdb3dc9a8f/src/models/train_model.py#L418), that is having the padding and the max length associated\r\n\r\n 1.a ) Initially what I did is set `max_source_length ` that fixes the length **after** tokenization to 64 . According to the docstring, larger sequences are _truncated_, and shorter are _padded_. IT TRAINS CORRECTLY . But then I thought that this could (please correct me if I'm wrong ) split the sequences when they are longer, therefore larger sentences could be cut, affecting to understanding context in translation in larger sentences. \r\n\r\n\r\n 2.b ) Then I discovered [`pad_to_max_length`](https://github.com/SoyGema/The-Lord-of-The-Words-The-two-frameworks/blob/de4a08eda2a2de2695f4e3ed12b571bdb3dc9a8f/src/models/train_model.py#L183) . What Im assuming here is that it pads taking into account the max sequence length, so I tried to set it to `True` and `max_target_length ` to `None` . IT SEEMS TO BE TRAINING CORRECTLY as well. What Im understanding here is that Im padding WRT the max length. \r\n\r\n\r\nCome what may, I gather to TRAIN the model with these two options. \r\nIf anyone wants to keep this conversation or clarify some wrong hypothesis I might have, please come by [#2](https://github.com/SoyGema/The-Lord-of-The-Words-The-two-frameworks/issues/2) 🙂 as I won´t consider proper to keep this issue here. 💗🤗\r\n\r\nThanks @ydshieh & @Rocketknight1 \r\n\r\n "
] | 1,688 | 1,689 | 1,689 |
CONTRIBUTOR
| null |
### System Info
`transformers==4.31.0.dev0`
`tensorflow-macos==2.10.0`
Hello there! 👋
Thanks for creating examples for the Translation task!
## Context
I'm going through the run_translation.py example, modified to use the [opus100](https://huggingface.co/datasets/opus100) dataset.
I'm launching the script with the flags listed below.
```
python train_model.py \
--model_name_or_path t5-small \
--do_train \
--do_eval \
--source_lang en \
--target_lang ro \
--source_prefix "translate English to Romanian: " \
--dataset_name opus100 \
--dataset_config_name en-ro \
--output_dir /tmp/tst-translation \
--per_device_train_batch_size=16 \
--per_device_eval_batch_size=16 \
--overwrite_output_dir
```
## Error
All the dataset feature engineering seems to work fine and training starts, but at some point there is a **tensor mismatch** error during training.
```
Shape of tensor args_0 [16,128] is not compatible with expected shape [16,64].
[[{{node EnsureShape_1}}]]
[[MultiDeviceIteratorGetNextFromShard]]
[[RemoteCall]]
[[IteratorGetNext]] [Op:__inference_train_function_17297]
```
Any hints on how I should reshape this? At some point I thought it was something in the preprocessing, but training does start, so I'm a little bit confused... I also explored [wmt16](https://huggingface.co/datasets/wmt16) (example tested and working) during #24579, and when I go to the Hub it seems to have the same structure and partitions as opus100.
Thanks for the time dedicated to this and for the help! 🙂
Looking forward to getting all this working and sharing it in the [PyCon Spain keynote](https://github.com/SoyGema/The-Lord-of-The-Words-The-two-frameworks#the-lord-of-the-words--the-two-frameworks) this year!
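For reference, a minimal sketch of the kind of fixed-length preprocessing that sidesteps a label-shape mismatch like the one in the error above (the column names match opus100's `translation` feature; the length values are illustrative and not taken from `train_model.py`):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("t5-small")

prefix = "translate English to Romanian: "
max_source_length = 128
max_target_length = 128  # illustrative; pick lengths that fit your data

def preprocess_function(examples):
    inputs = [prefix + ex["en"] for ex in examples["translation"]]
    targets = [ex["ro"] for ex in examples["translation"]]
    # Padding every example to the same fixed length keeps the labels /
    # decoder_input_ids shapes consistent across batches in tf.data.
    model_inputs = tokenizer(
        inputs, max_length=max_source_length, padding="max_length", truncation=True
    )
    labels = tokenizer(
        text_target=targets,
        max_length=max_target_length,
        padding="max_length",
        truncation=True,
    )
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs
```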
### Who can help?
@gante
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. Launch training with config
```
python train_model.py \
--model_name_or_path t5-small \
--do_train \
--do_eval \
--source_lang en \
--target_lang ro \
--source_prefix "translate English to Romanian: " \
--dataset_name opus100 \
--dataset_config_name en-ro \
--output_dir /tmp/tst-translation \
--per_device_train_batch_size=16 \
--per_device_eval_batch_size=16 \
--overwrite_output_dir
```
### Expected behavior
Training is not interrupted.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24693/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24693/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24692
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24692/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24692/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24692/events
|
https://github.com/huggingface/transformers/issues/24692
| 1,791,884,543 |
I_kwDOCUB6oc5qzfz_
| 24,692 |
Breaking change in upcoming PyTorch version for weight norm and loading pretrained models
|
{
"login": "MarktHart",
"id": 9414924,
"node_id": "MDQ6VXNlcjk0MTQ5MjQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/9414924?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MarktHart",
"html_url": "https://github.com/MarktHart",
"followers_url": "https://api.github.com/users/MarktHart/followers",
"following_url": "https://api.github.com/users/MarktHart/following{/other_user}",
"gists_url": "https://api.github.com/users/MarktHart/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MarktHart/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MarktHart/subscriptions",
"organizations_url": "https://api.github.com/users/MarktHart/orgs",
"repos_url": "https://api.github.com/users/MarktHart/repos",
"events_url": "https://api.github.com/users/MarktHart/events{/privacy}",
"received_events_url": "https://api.github.com/users/MarktHart/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi, what's your `transformers` version?\r\n\r\nIf you use a dev version with the commit of this PR #https://github.com/huggingface/transformers/pull/24030 included, it should be fine. Let me know if not, thanks."
] | 1,688 | 1,688 | 1,688 |
NONE
| null |
Probably to be fixed around here: https://github.com/huggingface/transformers/blob/bbf3090848cf0ceff98f9465691e9ecce63684a1/src/transformers/modeling_utils.py#L3016
See this issue on PyTorch:
https://github.com/pytorch/pytorch/issues/102999#issuecomment-1623975562
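For context, a small sketch of the state-dict key difference between the legacy and the parametrized weight-norm APIs (the parametrized variant only ships with newer PyTorch releases, and the exact key names below are what the linked PyTorch issue discusses, so they may differ slightly across versions):
```python
import torch.nn as nn
from torch.nn.utils import weight_norm
from torch.nn.utils.parametrizations import weight_norm as parametrized_weight_norm

legacy = weight_norm(nn.Conv1d(4, 4, 3), name="weight")
parametrized = parametrized_weight_norm(nn.Conv1d(4, 4, 3), name="weight")

# The legacy API stores weight_g / weight_v, while the parametrized API stores
# parametrizations.weight.original0 / original1 — so checkpoints saved with one
# naming scheme need key remapping when loaded with the other.
print(sorted(legacy.state_dict().keys()))
# ['bias', 'weight_g', 'weight_v']
print(sorted(parametrized.state_dict().keys()))
# ['bias', 'parametrizations.weight.original0', 'parametrizations.weight.original1']
```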
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24692/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24692/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24691
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24691/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24691/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24691/events
|
https://github.com/huggingface/transformers/pull/24691
| 1,791,867,195 |
PR_kwDOCUB6oc5U09Iy
| 24,691 |
Fix integration with Accelerate and failing test
|
{
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@muellerzr I'm not sure if it's this or the previous PR that is causing the issue, but it still seems to be hanging on moving the loss tensor to the cpu.",
"@winglian can you provide a reproducer?",
"I'll have to distill something down later, but I can confirm the issue happens on multi-gpu, but when using single gpu, it seems to work properly.",
"A repr will definitely be needed here, because so far at least all the official example scripts don't hang for me. (Though there was something with gradient accumulation fixed with https://github.com/huggingface/transformers/pull/24756). Ping me once you have that and I can take a look. (Though sooner is better so I can try and have it for the next release 😄 )"
] | 1,688 | 1,689 | 1,688 |
CONTRIBUTOR
| null |
# What does this PR do?
This PR brings back the logic for gathering and calculating metrics, which was broken by https://github.com/huggingface/transformers/pull/24028. As proof, the Trainer-related tests that were previously failing now pass.
Fixes # (issue)
Solves https://github.com/huggingface/transformers/issues/24391
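For background, a minimal sketch of the distributed-metrics gathering pattern this PR touches, using Accelerate's public `gather_for_metrics` API (the Trainer's internal wiring is more involved; the toy model and data here are placeholders):
```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from accelerate import Accelerator

accelerator = Accelerator()

model = torch.nn.Linear(4, 2)
dataloader = DataLoader(TensorDataset(torch.randn(10, 4)), batch_size=3)
model, dataloader = accelerator.prepare(model, dataloader)

all_preds = []
for (features,) in dataloader:
    with torch.no_grad():
        logits = model(features)
    # gather_for_metrics collects outputs from every process and drops the
    # samples that were duplicated to make the last batch evenly divisible.
    all_preds.append(accelerator.gather_for_metrics(logits).cpu())

preds = torch.cat(all_preds)
print(preds.shape)  # torch.Size([10, 2]) regardless of the number of processes
```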
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
As Sylvain is on vacation, cc @amyeroberts
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24691/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24691/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24691",
"html_url": "https://github.com/huggingface/transformers/pull/24691",
"diff_url": "https://github.com/huggingface/transformers/pull/24691.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24691.patch",
"merged_at": 1688667137000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24690
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24690/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24690/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24690/events
|
https://github.com/huggingface/transformers/pull/24690
| 1,791,627,676 |
PR_kwDOCUB6oc5U0JXE
| 24,690 |
[DO NOT MERGE] Test PR for studying #24622
|
{
"login": "Narsil",
"id": 204321,
"node_id": "MDQ6VXNlcjIwNDMyMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Narsil",
"html_url": "https://github.com/Narsil",
"followers_url": "https://api.github.com/users/Narsil/followers",
"following_url": "https://api.github.com/users/Narsil/following{/other_user}",
"gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Narsil/subscriptions",
"organizations_url": "https://api.github.com/users/Narsil/orgs",
"repos_url": "https://api.github.com/users/Narsil/repos",
"events_url": "https://api.github.com/users/Narsil/events{/privacy}",
"received_events_url": "https://api.github.com/users/Narsil/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24690). All of your documentation changes will be reflected on that endpoint.",
"Thanks 🤗 "
] | 1,688 | 1,688 | 1,688 |
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24690/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24690/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24690",
"html_url": "https://github.com/huggingface/transformers/pull/24690",
"diff_url": "https://github.com/huggingface/transformers/pull/24690.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24690.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/24689
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24689/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24689/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24689/events
|
https://github.com/huggingface/transformers/pull/24689
| 1,791,540,164 |
PR_kwDOCUB6oc5Uz2VJ
| 24,689 |
Avoid import `sentencepiece_model_pb2` in `utils.__init__.py`
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,688 | 1,688 | 1,688 |
COLLABORATOR
| null |
# What does this PR do?
Otherwise, trying to import anything from `utils` will fail if protobuf is not installed.
More details in the comment.
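For illustration, the general pattern for keeping a protobuf-backed module out of eager `__init__` imports is a guarded, local import; a minimal sketch (the helper name is made up and this is not the exact code of the PR):
```python
def _load_sentencepiece_model_proto():
    # Imported lazily so that `transformers.utils` can be imported even when
    # protobuf is not installed; only callers of this helper need it.
    try:
        from transformers.utils import sentencepiece_model_pb2 as model_pb2
    except ImportError as exc:
        raise ImportError(
            "This helper requires protobuf. Install it with `pip install protobuf`."
        ) from exc
    return model_pb2.ModelProto()
```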
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24689/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24689/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24689",
"html_url": "https://github.com/huggingface/transformers/pull/24689",
"diff_url": "https://github.com/huggingface/transformers/pull/24689.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24689.patch",
"merged_at": 1688653823000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24688
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24688/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24688/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24688/events
|
https://github.com/huggingface/transformers/issues/24688
| 1,791,534,620 |
I_kwDOCUB6oc5qyKYc
| 24,688 |
is there any plan to add falcon to instructblip?
|
{
"login": "Ashwath-Shetty",
"id": 64685993,
"node_id": "MDQ6VXNlcjY0Njg1OTkz",
"avatar_url": "https://avatars.githubusercontent.com/u/64685993?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Ashwath-Shetty",
"html_url": "https://github.com/Ashwath-Shetty",
"followers_url": "https://api.github.com/users/Ashwath-Shetty/followers",
"following_url": "https://api.github.com/users/Ashwath-Shetty/following{/other_user}",
"gists_url": "https://api.github.com/users/Ashwath-Shetty/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Ashwath-Shetty/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ashwath-Shetty/subscriptions",
"organizations_url": "https://api.github.com/users/Ashwath-Shetty/orgs",
"repos_url": "https://api.github.com/users/Ashwath-Shetty/repos",
"events_url": "https://api.github.com/users/Ashwath-Shetty/events{/privacy}",
"received_events_url": "https://api.github.com/users/Ashwath-Shetty/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false |
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
] |
[
"cc @NielsRogge ",
"However, falcon port is still in WIP\r\n\r\n\r\nhttps://github.com/huggingface/transformers/pull/24523",
"thanks for replying @ydshieh , any idea when it'll come to the instructblip pipeline?, any tentative timeline for the same? \r\nalso, is there a way to add the falcon locally by our side to the pipeline in place of vicuna/flant-5?\r\nplan is to plugin this falcon (https://huggingface.co/tiiuae/falcon-40b) to the instructblip pipeline somehow & then we can choose modeltype=\"falcon\"\r\nmodel, vis_processors, _ = load_model_and_preprocess(name=\"blip2_vicuna_instruct\", model_type=\"vicuna7b\", is_eval=True, device=device)\r\n\r\n",
"@ydshieh Falcon is merged, but I'm not sure exactly how to add it to the instructblip pipeline, I don't find any weights for the same and we do have this #25789 as an alternative. ",
"Hi, I am not sure what `add falcon to the instructblip pipeline`. Could you elaborate this in more details?",
"Okay so I wasn't very sure how to make this work, but InstructBlip does take in the language model like this, right now it takes in LLaMA by default - \r\nhttps://github.com/shauray8/transformers/blob/eb8489971ac1415f67b0abdd1584fde8b659ced9/src/transformers/models/instructblip/modeling_instructblip.py#L1272-L1275",
"OK, I understand better now. Will check",
"Hi @Ashwath-Shetty @shauray8 Sorry for being so long to this issue.\r\n\r\nI believe you want to load the pretrained model (`vision_model`, `qformer` for example) from the one existing InstuctBlip checkpoint, but with `language_model` being from a `falcon` checkpoint.\r\n\r\nThere is not direct support for this however. What we can do is to\r\n- load an existing ` InstuctBlip` checkpoint , say `instruct_model = ...`\r\n- load an existing `Falcon` checkpoint, say, `falcon_model = ...`\r\n- then set `instruct_model.language_model = falcon_model`\r\n- we have to set also `instruct_model.config.text_config = falcon_model.config`\r\n\r\nBut if we do so, we have to make sure the `hidden_size` of the loaded falcon model is the same as `instruct_model .config.text_config.hidden_size`, otherwise we have to replace that component too.\r\n\r\nDon't hesitate if you need further guides.\r\n\r\ncc @NielsRogge to see if he has any further comment.\r\n\r\n\r\n",
"Yes technically you can instantiate an InstructBLIP model with any `AutoModelForCausalLM`, like so:\r\n```\r\nfrom transformers import FalconConfig, InstructBlipConfig, InstructBlipForConditionalGeneration\r\n\r\ntext_config = FalconConfig.from_pretrained(\"tiiuae/falcon-7b\")\r\nconfig = InstructBlipConfig(text_config=text_config)\r\n\r\nmodel = InstructBlipForConditionalGeneration(config)\r\n```",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,688 | 1,702 | 1,702 |
NONE
| null |
### Model description
InstructBLIP seems really cool. Is there any possibility of adding Falcon to the pipeline in the future? Currently the options are Flan-T5 and Vicuna. The problem is that Vicuna cannot be used commercially and Flan-T5's performance is poor (Vicuna is not that great either), so adding Falcon to the pipeline would massively boost InstructBLIP's performance.
In case I want to add it myself, how can I do that? The code base seems really heavy.
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
https://huggingface.co/tiiuae
@NielsRogge @DanielHesslow @guipenedo @slippylolo @FalconLLM @mickbo32 @karnakar
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24688/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24688/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24687
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24687/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24687/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24687/events
|
https://github.com/huggingface/transformers/issues/24687
| 1,791,271,693 |
I_kwDOCUB6oc5qxKMN
| 24,687 |
OSError: Error no file named pytorch_model.bin
|
{
"login": "wangzff",
"id": 18679682,
"node_id": "MDQ6VXNlcjE4Njc5Njgy",
"avatar_url": "https://avatars.githubusercontent.com/u/18679682?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wangzff",
"html_url": "https://github.com/wangzff",
"followers_url": "https://api.github.com/users/wangzff/followers",
"following_url": "https://api.github.com/users/wangzff/following{/other_user}",
"gists_url": "https://api.github.com/users/wangzff/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wangzff/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wangzff/subscriptions",
"organizations_url": "https://api.github.com/users/wangzff/orgs",
"repos_url": "https://api.github.com/users/wangzff/repos",
"events_url": "https://api.github.com/users/wangzff/events{/privacy}",
"received_events_url": "https://api.github.com/users/wangzff/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @wangzff, thanks for raising this issue. \r\n\r\nIf you look at the contents of the checkpoint passed into the script, do you see the model weights? i.e. what is the output of `ls -al mnt/workspace/lawyer-llama/lawyer-llama-13b-beta1.0`?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,688 | 1,692 | 1,692 |
NONE
| null |
### System Info
transformers==4.30.2
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
(base) /mnt/workspace/lawyer-llama/demo> python demo_web.py --port 7863 --checkpoint /mnt/workspace/lawyer-llama/lawyer-llama-13b-beta1.0
Loading model...
Traceback (most recent call last):
  File "/mnt/workspace/lawyer-llama/demo/demo_web.py", line 52, in <module>
    model = LlamaForCausalLM.from_pretrained(checkpoint, device_map="auto", torch_dtype=torch.float16)
  File "/home/pai/lib/python3.9/site-packages/transformers/modeling_utils.py", line 2449, in from_pretrained
    raise EnvironmentError(
OSError: Error no file named pytorch_model.bin, tf_model.h5, model.ckpt.index or flax_model.msgpack found in directory /mnt/workspace/lawyer-llama/lawyer-llama-13b-beta1.0.
```
transformers==4.30.2
### Expected behavior
The model should load successfully from the given checkpoint directory instead of raising this error.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24687/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24687/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24686
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24686/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24686/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24686/events
|
https://github.com/huggingface/transformers/pull/24686
| 1,791,128,984 |
PR_kwDOCUB6oc5Uyc8k
| 24,686 |
🌐 [i18n-KO] Updated Korean `serialization.md`
|
{
"login": "wonhyeongseo",
"id": 29195190,
"node_id": "MDQ6VXNlcjI5MTk1MTkw",
"avatar_url": "https://avatars.githubusercontent.com/u/29195190?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wonhyeongseo",
"html_url": "https://github.com/wonhyeongseo",
"followers_url": "https://api.github.com/users/wonhyeongseo/followers",
"following_url": "https://api.github.com/users/wonhyeongseo/following{/other_user}",
"gists_url": "https://api.github.com/users/wonhyeongseo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wonhyeongseo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wonhyeongseo/subscriptions",
"organizations_url": "https://api.github.com/users/wonhyeongseo/orgs",
"repos_url": "https://api.github.com/users/wonhyeongseo/repos",
"events_url": "https://api.github.com/users/wonhyeongseo/events{/privacy}",
"received_events_url": "https://api.github.com/users/wonhyeongseo/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"지속적인 번역 수정 작업 매우 멋집니다! 덕분에 ONNX 문서를 꼼꼼히 읽을 수 있었습니다. 수정할 부분은 없어보입니다!",
"@sgugger, @ArthurZucker, @eunseojo May you please review this PR?\r\nThe difference in length is due to an overhaul in the English document. I will try to use the same PR steps as https://github.com/huggingface/transformers/issues/20179#issuecomment-1528191933 for easier review next time.\r\n\r\nThank you so much for your support. I hope you have a great weekend! ❤️ "
] | 1,688 | 1,689 | 1,689 |
CONTRIBUTOR
| null |
<!-- Please use "🌐 [i18n-KO] Translated `<your_file>.md` to Korean" as the PR title -->
# What does this PR do?
Updated the `serialization.md` file for the Korean documentation.
Thank you in advance for your review.
Part of https://github.com/huggingface/transformers/issues/20179
<!-- This leaves a record on the main issue! If you are practicing with the PseudoLab repo, please remove this line. :smile: -->
## Before reviewing
- [x] Check for missing / redundant translations (번역 누락/중복 검사)
- [x] Grammar Check (맞춤법 검사)
- [x] Review or Add new terms to glossary (용어 확인 및 추가)
- [x] Check Inline TOC (e.g. `[[lowercased-header]]`)
- [x] Check live-preview for gotchas (live-preview로 정상작동 확인)
## Who can review? (Initial)
<!-- 1. Only reveal the review request below to the PseudoLab team members once all the checks above are complete! -->
Team PseudoLab, may you please review this PR? @0525hhgus, @KIHOON71, @sim-so, @gabrielwithappy, @HanNayeoniee, @wonhyeongseo, @jungnerd
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review? (Final)
<!-- 2. Only reveal the review request below to the Hugging Face staff after the review with the PseudoLab team members is finished! -->
@sgugger, @ArthurZucker, @eunseojo May you please review this PR?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24686/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24686/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24686",
"html_url": "https://github.com/huggingface/transformers/pull/24686",
"diff_url": "https://github.com/huggingface/transformers/pull/24686.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24686.patch",
"merged_at": 1689981840000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24685
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24685/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24685/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24685/events
|
https://github.com/huggingface/transformers/issues/24685
| 1,791,096,828 |
I_kwDOCUB6oc5qwff8
| 24,685 |
How to get the last 4 Hidden states from the feature extraction pipeline
|
{
"login": "Luke-4",
"id": 138615931,
"node_id": "U_kgDOCEMcew",
"avatar_url": "https://avatars.githubusercontent.com/u/138615931?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Luke-4",
"html_url": "https://github.com/Luke-4",
"followers_url": "https://api.github.com/users/Luke-4/followers",
"following_url": "https://api.github.com/users/Luke-4/following{/other_user}",
"gists_url": "https://api.github.com/users/Luke-4/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Luke-4/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Luke-4/subscriptions",
"organizations_url": "https://api.github.com/users/Luke-4/orgs",
"repos_url": "https://api.github.com/users/Luke-4/repos",
"events_url": "https://api.github.com/users/Luke-4/events{/privacy}",
"received_events_url": "https://api.github.com/users/Luke-4/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false |
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
] |
[
"Hi, could you also provide the data `df2` (or another version of it if privacy is concerned).\r\n\r\nThanks.",
"> Hi, could you also provide the data `df2` (or another version of it if privacy is concerned).\r\n> \r\n> Thanks.\r\n\r\nSure, its just a text and the label\r\n\r\n\r\n\r\n\r\n\r\n",
"@Luke-4 \r\n\r\nNot as an image please. Make it something that can be used to run the code snippet directly 🙏 \r\n",
"> @Luke-4\r\n> \r\n> Not as an image please. Make it something that can be used to run the code snippet directly 🙏\r\n\r\nhere hope this works:\r\nhttps://drive.google.com/drive/folders/186rEP0ZMYc3tjR_EKBYNhYEx9sUtmSTj?usp=sharing\r\n\r\n\r\n\r\n\r\n",
"I haven't look in full details. However, your input to `mean_pooling` (i.e. each element of the output from `p`) seems to have `[batch_dim, seq_len, hideen_dim]`. The `batch_dim` here is just `1`\r\n\r\nWhen you do `last_hidden_states[-4:]` inside `mean_pooling`, it is actually the same element as `last_hidden_states`, as your are taking the last 4 elements along batch dimension (and not the different layers!). When you do `np.mean(..., axis=1)`, it's actually mean along the sequence dimension, and get a shape of `[batch_dim=1, hidden_dim]`.\r\n\r\nThis doesn't corresponds to what you describe that you want to get mean along the last 4 layers.\r\nHowever, I am not sure if the `feature extraction pipeline` allow to get all hidden states (from all layers) - probably yes.\r\n\r\nCould you verify first, please?\r\n\r\n\r\n\r\n```python\r\nr = p([\"I love dog\", \"I love cat too\", \"I love cat that meow meow a lot\"])\r\n\r\nimport numpy as np\r\n\r\nprint(len(a))\r\n\r\nprint(len(a[0]))\r\nprint(len(a[1]))\r\nprint(len(a[2]))\r\n\r\nprint(len(a[0][0]))\r\nprint(len(a[1][0]))\r\nprint(len(a[2][0]))\r\n\r\nprint(np.array(a[0][0]).shape)\r\nprint(np.array(a[1][0]).shape)\r\nprint(np.array(a[2][0]).shape)\r\n```\r\ngives\r\n```bash\r\n3\r\n1\r\n1\r\n1\r\n5\r\n5\r\n6\r\n(5, 1024)\r\n(5, 1024)\r\n(12, 1024)\r\n```",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,688 | 1,692 | 1,692 |
NONE
| null |
I have defined a pipeline for Feature extraction
```
# Create the pipeline
p = pipeline(
task="feature-extraction",
tokenizer="microsoft/biogpt",
model="microsoft/biogpt",
framework="pt",
device=0
)
bio_gpt = AutoModel.from_pretrained("microsoft/biogpt", output_hidden_states= True)
bio_gpt = bio_gpt.to(device)
```
I want to extract the embeddings of the last token of the last hidden state, and the average pooling of the last 4 layers. Using the pipeline approach, I am doing it like this:
_Last token of the last hidden state:_
```
def extract_last_token(last_hidden_states):
last_hidden_states = np.array(last_hidden_states)
return last_hidden_states[:,-1,:]
# Process the data using the pipeline
results = p([row["text"] for _, row in df2.iterrows()])
# Extract the last token of the last hidden state
embeddings = [extract_last_token(hidden_state) for hidden_state in results]
# Create a DataFrame to store the results
df2["embeddings2"] = embeddings
```
_Average pooling of the last 4 layers:_
```
def mean_pooling(last_hidden_states, ):
last_4_layers = last_hidden_states[-4:] # Consider the last 4 layers
return np.mean(last_4_layers, axis=1)
# Process the data using the pipeline
results = p([row["text"] for _, row in df2.iterrows()])
features = np.squeeze(results)
print(features.shape)
# Perform mean pooling on the last hidden states
embeddings = [mean_pooling(hidden_state) for hidden_state in results]
# Create a DataFrame to store the results
df2["embeddings4"] = embeddings
```
The issues are:
1. When I extract the embeddings of the last 4 layers or the last 12 layers, the embeddings are always the same.

2. The embeddings of the last token of the last hidden state are different from the same embeddings using the "manual" method

Weirdly, in the above picture two of the embeddings are the same but with opposite row ids; this points to another problem that I don't see. If you can spot it, I would appreciate it.
Here is the code for how I did the manual version:
```
output = bio_gpt(**model_inputs)
# Get the last state
last_state = output.last_hidden_state
cls_embeddings = last_state[:, -1, :]
# Print the last state
print(cls_embeddings)
# Assign cls_embeddings to "embeddings4" column in df2
df2["embeddings_manual"] = [cls_embeddings[i].cpu().detach().numpy() for i in range(len(df2))]
```
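For reference, a minimal sketch of one way to average the last four layers directly by calling the model with `output_hidden_states=True` (the token-level mean pooling at the end is just one possible choice):

```python
# Sketch: average the last 4 transformer layers, then mean-pool over tokens.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/biogpt")
model = AutoModel.from_pretrained("microsoft/biogpt", output_hidden_states=True)

inputs = tokenizer(["some example text"], return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# outputs.hidden_states is a tuple with one tensor per layer (plus the embeddings),
# each of shape [batch, seq_len, hidden_dim].
last_four = torch.stack(outputs.hidden_states[-4:])  # [4, batch, seq_len, hidden_dim]
layer_mean = last_four.mean(dim=0)                   # [batch, seq_len, hidden_dim]
sentence_embedding = layer_mean.mean(dim=1)          # [batch, hidden_dim]
```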
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24685/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24685/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24684
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24684/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24684/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24684/events
|
https://github.com/huggingface/transformers/pull/24684
| 1,791,021,544 |
PR_kwDOCUB6oc5UyGAy
| 24,684 |
[`T5`] Adding model_parallel = False to `T5ForQuestionAnswering` and `MT5ForQuestionAnswering`
|
{
"login": "sjrl",
"id": 10526848,
"node_id": "MDQ6VXNlcjEwNTI2ODQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/10526848?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sjrl",
"html_url": "https://github.com/sjrl",
"followers_url": "https://api.github.com/users/sjrl/followers",
"following_url": "https://api.github.com/users/sjrl/following{/other_user}",
"gists_url": "https://api.github.com/users/sjrl/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sjrl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sjrl/subscriptions",
"organizations_url": "https://api.github.com/users/sjrl/orgs",
"repos_url": "https://api.github.com/users/sjrl/repos",
"events_url": "https://api.github.com/users/sjrl/events{/privacy}",
"received_events_url": "https://api.github.com/users/sjrl/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Is there a good way to add a test for this? I wasn't sure where a test like this would be added. ",
"> Is there a good way to add a test for this? I wasn't sure where a test like this would be added.\r\n\r\nNo need to add a test for this. We have `test_model_parallelization` which tests model parallelization (the opposite).\r\nAs we are dealing with some deprecated thing, it doesn't worth much time on it.\r\n\r\n",
"Thanks! Would be nice if you can check this change works (i.e. fix the issue you opened in #24682) 🙏 ",
"_The documentation is not available anymore as the PR was closed or merged._",
"> Thanks! Would be nice if you can check this change works\r\n\r\nDefinitely! I just ran the code locally and it works. ",
"Yes. It seems a bit strange indeed. So far all `is_parallelizable` is set at the `XXXPreTrainedModel` level.\r\n\r\nI think it's fine as the only usage of `is_parallelizable` is here\r\n\r\n```python\r\n if hasattr(model, \"is_parallelizable\") and model.is_parallelizable and model.model_parallel:\r\n self.is_model_parallel = True\r\n else:\r\n self.is_model_parallel = False\r\n```\r\nThere shouldn't bee too much confusion between `is_parallelizable = True` and `always model_parallel = False` for just this single (and new) model class."
] | 1,688 | 1,690 | 1,688 |
CONTRIBUTOR
| null |
# What does this PR do?
<!-- Remove if not applicable -->
Fixes https://github.com/huggingface/transformers/issues/24682 by adding `self.model_parallel = False`
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@ArthurZucker and @younesbelkada
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24684/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24684/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24684",
"html_url": "https://github.com/huggingface/transformers/pull/24684",
"diff_url": "https://github.com/huggingface/transformers/pull/24684.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24684.patch",
"merged_at": 1688993408000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24683
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24683/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24683/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24683/events
|
https://github.com/huggingface/transformers/issues/24683
| 1,791,012,742 |
I_kwDOCUB6oc5qwK-G
| 24,683 |
Model checkpoint twice as large when saved with safetensors
|
{
"login": "lenbrocki",
"id": 45183581,
"node_id": "MDQ6VXNlcjQ1MTgzNTgx",
"avatar_url": "https://avatars.githubusercontent.com/u/45183581?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lenbrocki",
"html_url": "https://github.com/lenbrocki",
"followers_url": "https://api.github.com/users/lenbrocki/followers",
"following_url": "https://api.github.com/users/lenbrocki/following{/other_user}",
"gists_url": "https://api.github.com/users/lenbrocki/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lenbrocki/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lenbrocki/subscriptions",
"organizations_url": "https://api.github.com/users/lenbrocki/orgs",
"repos_url": "https://api.github.com/users/lenbrocki/repos",
"events_url": "https://api.github.com/users/lenbrocki/events{/privacy}",
"received_events_url": "https://api.github.com/users/lenbrocki/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @lenbrocki \r\n\r\nCould you make a self-complete code snippet in `Reproduction` section, please. Thank you.",
"I have updated the Reproduction section",
"Thanks @lenbrocki !\r\n\r\ncc @Narsil ",
"This model is saved in `float16`. `from_pretrained` will by default load it in `float32`.\r\n\r\n`from_pretrained(..., torch_dtype=torch.float16)`\r\n\r\nShould fix it.",
"Yes, that fixed it. Thanks!"
] | 1,688 | 1,688 | 1,688 |
NONE
| null |
### System Info
- `transformers` version: 4.30.2
- Platform: Linux-5.15.0-1037-gcp-x86_64-with-glibc2.31
- Python version: 3.9.16
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("facebook/opt-2.7b")
model.save_pretrained("opt_safetensor", safe_serialization=True)
```
The original `pytorch_model.bin` is 5.3GB and the new one is sharded:
- `model-00001-of-00002.safetensors`: 9.3GB
- `model-00002-of-00002.safetensors`: 601MB
### Expected behavior
Is that the expected behaviour? I would expect the weights to have roughly the same size when saved using safetensors.
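Following the explanation in the comments, a sketch of a size-preserving round trip: the checkpoint is stored in float16 while `from_pretrained` loads in float32 by default, so loading in the checkpoint's own dtype keeps the re-saved file at roughly the original size.

```python
# Sketch: load in the checkpoint's own dtype so the safetensors copy stays ~5.3GB.
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("facebook/opt-2.7b", torch_dtype=torch.float16)
model.save_pretrained("opt_safetensor", safe_serialization=True)
```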
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24683/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24683/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24682
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24682/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24682/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24682/events
|
https://github.com/huggingface/transformers/issues/24682
| 1,790,970,590 |
I_kwDOCUB6oc5qwAre
| 24,682 |
Unable to use Trainer with T5ForQuestionAnswering
|
{
"login": "sjrl",
"id": 10526848,
"node_id": "MDQ6VXNlcjEwNTI2ODQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/10526848?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sjrl",
"html_url": "https://github.com/sjrl",
"followers_url": "https://api.github.com/users/sjrl/followers",
"following_url": "https://api.github.com/users/sjrl/following{/other_user}",
"gists_url": "https://api.github.com/users/sjrl/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sjrl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sjrl/subscriptions",
"organizations_url": "https://api.github.com/users/sjrl/orgs",
"repos_url": "https://api.github.com/users/sjrl/repos",
"events_url": "https://api.github.com/users/sjrl/events{/privacy}",
"received_events_url": "https://api.github.com/users/sjrl/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @sjrl \r\n\r\nCould you add `model.model_parallel = False` before the line `trainer = Trainer(model=model)`?\r\n\r\n`T5ForQuestionAnswering` is recently added, and it doesn't have `parallelize` or `deparallelize` as other T5 model classes.\r\n\r\nOr you can follow the suggestion below to use `device_map`.\r\n\r\n```\r\n \"`T5ForConditionalGeneration.parallelize` is deprecated and will be removed in v5 of Transformers, you\"\r\n \" should load your model with `device_map='balanced'` in the call to `from_pretrained`. You can also\"\r\n \" provide your own `device_map` but it needs to be a dictionary module_name to device, so for instance\"\r\n \" {'encoder.block.0': 0, 'encoder.block.1': 1, ...}\",\r\n```",
"Hey @ydshieh, thanks for the feedback!\r\n\r\n> T5ForQuestionAnswering is recently added, and it doesn't have parallelize or deparallelize as other T5 model classes.\r\n\r\nHaha yes, maybe I should have specified that I was the one to recently add it. I opted to not add the `parallelize` or `deparallelize` classes since they would eventually be deprecated.\r\n\r\nAnd my apologies, I should have been more clear with my error. I'm not trying to use the `parallelzie` functionality at all, but since `T5ForQuestionAnswering` inherits from `T5PreTrainedModel` it automatically inherits the `is_parallelizable = True` class variable which is causing the error to be thrown. \r\n\r\n> Could you add model.model_parallel = False before the line trainer = Trainer(model=model)?\r\n\r\nDefinitely! I'll try this and get back to you. ",
"Yep, we should probably add `is_parallelizable = False` in the class",
"> Yep, we should probably add is_parallelizable = False in the class\r\n\r\nI can go ahead and open a PR for this. "
] | 1,688 | 1,688 | 1,688 |
CONTRIBUTOR
| null |
### System Info
- `transformers` version: 4.31.0.dev0
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.9.17
- Huggingface_hub version: 0.16.2
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
Tagging @sgugger since this is related to the Trainer and @ArthurZucker and @younesbelkada since it is also related to the text model T5ForQuestionAnswering.
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I first ran into this error when running the `examples/pytorch/question-answering/run_qa.py` script, but I am able to reproduce with the following minimal example:
```python
from transformers import AutoModelForQuestionAnswering, Trainer
model = AutoModelForQuestionAnswering.from_pretrained("sjrhuschlee/flan-t5-base-squad2")
trainer = Trainer(model=model)
```
This produces the error
```python
Traceback (most recent call last):
File "/Users/sebastianlee/miniconda3/envs/sjrl_transformers/lib/python3.9/site-packages/IPython/core/interactiveshell.py", line 3508, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-2-84527b5cd844>", line 3, in <module>
trainer = Trainer(model=model)
File "/Users/sebastianlee/Documents/code/sjrl_transformers/src/transformers/trainer.py", line 373, in __init__
if hasattr(model, "is_parallelizable") and model.is_parallelizable and model.model_parallel:
File "/Users/sebastianlee/miniconda3/envs/sjrl_transformers/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1614, in __getattr__
raise AttributeError("'{}' object has no attribute '{}'".format(
AttributeError: 'T5ForQuestionAnswering' object has no attribute 'model_parallel'
```
I believe this could be fixed by adding
```python
self.model_parallel = False
```
to the init method of T5ForQuestionAnswering. However, this model does not support parallelization so I wonder if it would be better to somehow update the Trainer or possibly remove the `is_parallelizable` attribute from T5ForQuestionAnswering.
### Expected behavior
For the Trainer to work with T5ForQuestionAnswering.
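Until a class-level fix lands, a sketch of the interim workaround suggested in the comments (setting the missing attribute by hand before building the Trainer):

```python
# Workaround sketch: define model_parallel explicitly before constructing the Trainer.
from transformers import AutoModelForQuestionAnswering, Trainer

model = AutoModelForQuestionAnswering.from_pretrained("sjrhuschlee/flan-t5-base-squad2")
model.model_parallel = False
trainer = Trainer(model=model)
```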
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24682/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24682/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24681
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24681/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24681/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24681/events
|
https://github.com/huggingface/transformers/pull/24681
| 1,790,906,888 |
PR_kwDOCUB6oc5Uxtjk
| 24,681 |
LlamaTokenizer should be picklable
|
{
"login": "icyblade",
"id": 3407450,
"node_id": "MDQ6VXNlcjM0MDc0NTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/3407450?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/icyblade",
"html_url": "https://github.com/icyblade",
"followers_url": "https://api.github.com/users/icyblade/followers",
"following_url": "https://api.github.com/users/icyblade/following{/other_user}",
"gists_url": "https://api.github.com/users/icyblade/gists{/gist_id}",
"starred_url": "https://api.github.com/users/icyblade/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/icyblade/subscriptions",
"organizations_url": "https://api.github.com/users/icyblade/orgs",
"repos_url": "https://api.github.com/users/icyblade/repos",
"events_url": "https://api.github.com/users/icyblade/events{/privacy}",
"received_events_url": "https://api.github.com/users/icyblade/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,688 | 1,688 | 1,688 |
CONTRIBUTOR
| null |
# What does this PR do?
Fixes `LlamaTokenizer` not being picklable, which causes an `OSError` when tokenizing with a Spark UDF.
Reference: #13577
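A minimal round-trip check of the expected behaviour (the checkpoint path below is a placeholder):

```python
# Sketch: pickling round trip; "path/to/llama-tokenizer" is a placeholder path.
import pickle

from transformers import LlamaTokenizer

tokenizer = LlamaTokenizer.from_pretrained("path/to/llama-tokenizer")
restored = pickle.loads(pickle.dumps(tokenizer))
assert restored.tokenize("Hello world") == tokenizer.tokenize("Hello world")
```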
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [X] Did you write any new necessary tests?
## Who can review?
@ArthurZucker
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24681/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24681/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24681",
"html_url": "https://github.com/huggingface/transformers/pull/24681",
"diff_url": "https://github.com/huggingface/transformers/pull/24681.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24681.patch",
"merged_at": 1688635288000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24680
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24680/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24680/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24680/events
|
https://github.com/huggingface/transformers/pull/24680
| 1,790,900,039 |
PR_kwDOCUB6oc5UxsHe
| 24,680 |
Add dropouts to GPT-NeoX
|
{
"login": "ZHAOTING",
"id": 5592709,
"node_id": "MDQ6VXNlcjU1OTI3MDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/5592709?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ZHAOTING",
"html_url": "https://github.com/ZHAOTING",
"followers_url": "https://api.github.com/users/ZHAOTING/followers",
"following_url": "https://api.github.com/users/ZHAOTING/following{/other_user}",
"gists_url": "https://api.github.com/users/ZHAOTING/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ZHAOTING/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ZHAOTING/subscriptions",
"organizations_url": "https://api.github.com/users/ZHAOTING/orgs",
"repos_url": "https://api.github.com/users/ZHAOTING/repos",
"events_url": "https://api.github.com/users/ZHAOTING/events{/privacy}",
"received_events_url": "https://api.github.com/users/ZHAOTING/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Glad to make my first PR to transformers!"
] | 1,688 | 1,688 | 1,688 |
CONTRIBUTOR
| null |
# What does this PR do?
The current GPT-NeoX modeling code does not contain dropouts as in [the original EleutherAI/gpt-neox code](https://github.com/EleutherAI/gpt-neox/blob/main/megatron/model), possibly because GPT-NeoX 20B, where the HF gpt-neox implementation was first applied, has all dropouts disabled.
However, EleutherAI/gpt-neox does provide dropouts at several places.
* post-word-embedding dropout, [reference](https://github.com/EleutherAI/gpt-neox/blob/2534e3d76e320aba095894e7dc2a4b416a1ac8df/megatron/model/word_embeddings.py#L156),
* attention score dropout, [reference](https://github.com/EleutherAI/gpt-neox/blob/2534e3d76e320aba095894e7dc2a4b416a1ac8df/megatron/model/transformer.py#L453C1-L453C1),
* post-attention dropout, [reference1](https://github.com/EleutherAI/gpt-neox/blob/2534e3d76e320aba095894e7dc2a4b416a1ac8df/megatron/model/transformer.py#L829-L834), [reference2](https://github.com/EleutherAI/gpt-neox/blob/2534e3d76e320aba095894e7dc2a4b416a1ac8df/megatron/model/transformer.py#L865-L870), [reference3](https://github.com/EleutherAI/gpt-neox/blob/2534e3d76e320aba095894e7dc2a4b416a1ac8df/megatron/model/transformer.py#L873-L880),
* post-mlp dropout, [reference1](https://github.com/EleutherAI/gpt-neox/blob/2534e3d76e320aba095894e7dc2a4b416a1ac8df/megatron/model/transformer.py#L839-L844), [reference2](https://github.com/EleutherAI/gpt-neox/blob/2534e3d76e320aba095894e7dc2a4b416a1ac8df/megatron/model/transformer.py#L893-L898).
These dropouts can be turned on and help produce better fine-tuning performance.
This PR adds corresponding dropouts to the HF gpt_neox implementation. Following the original EleutherAI code, dropout probabilities are controlled by two config arguments (see the short sketch after this list):
* attention_dropout, which controls the probability of the attention score dropout, [reference](https://github.com/EleutherAI/gpt-neox/blob/2534e3d76e320aba095894e7dc2a4b416a1ac8df/megatron/neox_arguments/neox_args.py#L911),
* hidden_dropout, which controls the probability of remaining dropouts, [reference](https://github.com/EleutherAI/gpt-neox/blob/2534e3d76e320aba095894e7dc2a4b416a1ac8df/megatron/neox_arguments/neox_args.py#L916).
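A short sketch of how the two arguments would be used, assuming they are exposed on `GPTNeoXConfig` under exactly the names above (the model sizes are arbitrary toy values):

```python
# Sketch: enable the new dropouts on a small toy config (values are arbitrary).
from transformers import GPTNeoXConfig, GPTNeoXForCausalLM

config = GPTNeoXConfig(
    hidden_size=256,
    num_hidden_layers=4,
    num_attention_heads=4,
    intermediate_size=1024,
    attention_dropout=0.1,  # dropout on attention scores
    hidden_dropout=0.1,     # post-embedding / post-attention / post-MLP dropout
)
model = GPTNeoXForCausalLM(config)
model.train()  # dropout is only active in training mode
```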
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Sorry that I am not sure whom to tag, so I am following the suggestion to tag text model people @ArthurZucker and @younesbelkada.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24680/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24680/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24680",
"html_url": "https://github.com/huggingface/transformers/pull/24680",
"diff_url": "https://github.com/huggingface/transformers/pull/24680.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24680.patch",
"merged_at": 1688635597000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24679
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24679/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24679/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24679/events
|
https://github.com/huggingface/transformers/issues/24679
| 1,790,868,694 |
I_kwDOCUB6oc5qvnzW
| 24,679 |
Custom vision encoder-decoder problem
|
{
"login": "kyle-bong",
"id": 42907231,
"node_id": "MDQ6VXNlcjQyOTA3MjMx",
"avatar_url": "https://avatars.githubusercontent.com/u/42907231?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kyle-bong",
"html_url": "https://github.com/kyle-bong",
"followers_url": "https://api.github.com/users/kyle-bong/followers",
"following_url": "https://api.github.com/users/kyle-bong/following{/other_user}",
"gists_url": "https://api.github.com/users/kyle-bong/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kyle-bong/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kyle-bong/subscriptions",
"organizations_url": "https://api.github.com/users/kyle-bong/orgs",
"repos_url": "https://api.github.com/users/kyle-bong/repos",
"events_url": "https://api.github.com/users/kyle-bong/events{/privacy}",
"received_events_url": "https://api.github.com/users/kyle-bong/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] |
closed
| false | null |
[] |
[
"Hi @kyle-bong\r\n\r\nThe `transformers` GitHub pages are reserved for issues or feature requests. The question here is not in the scope, and [[Hugging Face Forums](https://discuss.huggingface.co/)](https://discuss.huggingface.co/) is a better place.\r\n\r\n--------------------------------------------------------------------------------------------\r\n\r\nHowever. The decoder model in `transformers` inherit from `PreTrainedModel` which itself is a subclass of `class GenerationMixin` that's where `generate` being defined.\r\n\r\nYou can probably do `class CustomEncoderDecoderModel(PreTrainedModel):`, but there might something more to make it work."
] | 1,688 | 1,688 | 1,688 |
NONE
| null |
### Model description
I'm trying to make a custom vision encoder-decoder model.
I want to use a pre-trained encoder but a decoder trained from scratch, so I cannot use `VisionEncoderDecoderModel.from_pretrained()`.
Specifically, I want to use a pre-trained `deit` model as the encoder and a custom-trained `Electra` as the decoder.
I wrote the code below. In the training step, there is no problem.
But I get an error which says "model has no attribute 'generate'". How can I implement or import the `generate` function?
```
class CustomEncoderDecoderModel(nn.Module):
config_class = VisionEncoderDecoderConfig
def __init__(self, encoder_name, decoder_config,
config=None):
super(CustomEncoderDecoderModel, self).__init__()
self.encoder = AutoModel.from_pretrained(encoder_name)
self.decoder_config = decoder_config
self.decoder = AutoModelForCausalLM.from_config(self.decoder_config)
self.config = VisionEncoderDecoderConfig.from_encoder_decoder_configs(self.encoder.config, self.decoder.config)
self.criterion = nn.CrossEntropyLoss()
self.enc_to_dec_proj = nn.Linear(self.encoder.config.hidden_size, self.decoder.config.hidden_size)
def forward(self, pixel_values, labels, decoder_input_ids=None,
decoder_input_embeds=None,
decoder_attention_mask=None,
decoder_inputs_embeds=None,
past_key_values=None):
encoder_outputs = self.encoder(pixel_values,
output_attentions=True)
encoder_hidden_states = encoder_outputs[0]
encoder_attention_mask = None
if decoder_input_ids is None and decoder_input_embeds is None:
decoder_input_ids = shift_tokens_right(
labels, self.decoder.config.pad_token_id, decoder_start_token_id=2
)
if self.encoder.config.hidden_size != self.decoder.config.hidden_size:
encoder_hidden_states = self.enc_to_dec_proj(encoder_hidden_states)
decoder_outputs = self.decoder(
input_ids = decoder_input_ids,
attention_mask = decoder_attention_mask,
encoder_hidden_states=encoder_hidden_states,
encoder_attention_mask=encoder_attention_mask,
inputs_embeds=decoder_inputs_embeds,
output_attentions=True,
use_cache=True,
past_key_values=past_key_values,
)
logits = decoder_outputs[0]
loss = self.criterion(logits.reshape(-1, self.decoder.config.vocab_size), labels.reshape(-1))
return {'loss': loss, 'logits': logits,
'past_key_values': decoder_outputs.past_key_values,
'decoder_hidden_states': decoder_outputs.hidden_states,
'decoder_attentions': decoder_outputs.attentions,
'cross_attentions': decoder_outputs.cross_attentions,
'encoder_hidden_state': encoder_outputs.hidden_states,
'encoder_attentions': encoder_attention_mask,
'encoder_attentions': encoder_outputs.attentions,
}
```
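(For reference, a heavily hedged sketch of the direction suggested in the comments: `generate` is defined on `GenerationMixin`, which `PreTrainedModel` already inherits, so basing the class on `PreTrainedModel` exposes it, although extra hooks may still be needed for generation to work end to end.)

```python
# Sketch only: subclassing PreTrainedModel (a GenerationMixin subclass) makes
# `generate` available; more plumbing may be needed for it to run end to end.
from transformers import PreTrainedModel, VisionEncoderDecoderConfig

class CustomEncoderDecoderModel(PreTrainedModel):
    config_class = VisionEncoderDecoderConfig

    def __init__(self, config):
        super().__init__(config)
        # build self.encoder / self.decoder here, as in the snippet above

    def forward(self, pixel_values=None, labels=None, **kwargs):
        ...
```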
### Open source status
- [ ] The model implementation is available
- [ ] The model weights are available
### Provide useful links for the implementation
_No response_
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24679/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24679/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24678
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24678/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24678/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24678/events
|
https://github.com/huggingface/transformers/pull/24678
| 1,790,606,576 |
PR_kwDOCUB6oc5UwsOr
| 24,678 |
[`MT5`] Fix CONFIG_MAPPING issue leading it to load umt5 class
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Can confirm that: \r\n```python \r\nfrom transformers import AutoModelForSeq2SeqLM, AutoTokenizer\r\n\r\nmodel = AutoModelForSeq2SeqLM.from_pretrained('google/mt5-small')\r\nprint(type(model))\r\n<class 'transformers.models.mt5.modeling_mt5.MT5ForConditionalGeneration'>\r\n```\r\nis back to normal 😄 "
] | 1,688 | 1,688 | 1,688 |
COLLABORATOR
| null |
# What does this PR do?
Addresses #24662 and one of our CI tests.
The issue stems from the `CONFIG_MAPPING`'s values used as keys to index over the auto mapping.
There were two ways to fix this, either change our logic or just add a config.
For simplicity, I added a config.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24678/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24678/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24678",
"html_url": "https://github.com/huggingface/transformers/pull/24678",
"diff_url": "https://github.com/huggingface/transformers/pull/24678.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24678.patch",
"merged_at": 1688697235000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24677
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24677/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24677/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24677/events
|
https://github.com/huggingface/transformers/issues/24677
| 1,790,497,365 |
I_kwDOCUB6oc5quNJV
| 24,677 |
Gradient clipping is no longer recommended?
|
{
"login": "cloudygoose",
"id": 1544039,
"node_id": "MDQ6VXNlcjE1NDQwMzk=",
"avatar_url": "https://avatars.githubusercontent.com/u/1544039?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cloudygoose",
"html_url": "https://github.com/cloudygoose",
"followers_url": "https://api.github.com/users/cloudygoose/followers",
"following_url": "https://api.github.com/users/cloudygoose/following{/other_user}",
"gists_url": "https://api.github.com/users/cloudygoose/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cloudygoose/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cloudygoose/subscriptions",
"organizations_url": "https://api.github.com/users/cloudygoose/orgs",
"repos_url": "https://api.github.com/users/cloudygoose/repos",
"events_url": "https://api.github.com/users/cloudygoose/events{/privacy}",
"received_events_url": "https://api.github.com/users/cloudygoose/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"You can definitely experiment with gradient clipping.\r\n\r\nThe `transformers` GitHub pages are reserved for issues or feature requests. The question here is not in the scope, and [Hugging Face Forums](https://discuss.huggingface.co/) is a better place, if you have further question on this topic."
] | 1,688 | 1,688 | 1,688 |
NONE
| null |
### System Info
Hi,
I just found that in the current examples (e.g., https://github.com/huggingface/transformers/blob/main/examples/pytorch/language-modeling/run_clm_no_trainer.py), gradient clipping is no longer applied. Is there any particular reason? Is it okay if I add a line to do gradient clipping myself?
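For illustration, a hedged sketch of where such a line could go in an `accelerate`-based loop like `run_clm_no_trainer.py`; the `accelerator`, `model`, `optimizer`, and `lr_scheduler` objects are assumed from the example script, and `max_grad_norm=1.0` is only a placeholder value:
```python
def training_step(accelerator, model, batch, optimizer, lr_scheduler, max_grad_norm=1.0):
    # Sketch only, not the official script: one step with gradient clipping
    # inserted between the backward pass and the optimizer step.
    outputs = model(**batch)
    accelerator.backward(outputs.loss)
    if accelerator.sync_gradients:
        # Accelerate's helper also unscales gradients when mixed precision is enabled.
        accelerator.clip_grad_norm_(model.parameters(), max_norm=max_grad_norm)
    optimizer.step()
    lr_scheduler.step()
    optimizer.zero_grad()
```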
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
N/A
### Expected behavior
N/A
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24677/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24677/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24676
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24676/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24676/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24676/events
|
https://github.com/huggingface/transformers/issues/24676
| 1,790,480,925 |
I_kwDOCUB6oc5quJId
| 24,676 |
TrainingArguments not working in transformers v 4.30
|
{
"login": "VarshithaCVasireddy",
"id": 96924488,
"node_id": "U_kgDOBcbzSA",
"avatar_url": "https://avatars.githubusercontent.com/u/96924488?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/VarshithaCVasireddy",
"html_url": "https://github.com/VarshithaCVasireddy",
"followers_url": "https://api.github.com/users/VarshithaCVasireddy/followers",
"following_url": "https://api.github.com/users/VarshithaCVasireddy/following{/other_user}",
"gists_url": "https://api.github.com/users/VarshithaCVasireddy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/VarshithaCVasireddy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/VarshithaCVasireddy/subscriptions",
"organizations_url": "https://api.github.com/users/VarshithaCVasireddy/orgs",
"repos_url": "https://api.github.com/users/VarshithaCVasireddy/repos",
"events_url": "https://api.github.com/users/VarshithaCVasireddy/events{/privacy}",
"received_events_url": "https://api.github.com/users/VarshithaCVasireddy/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"From the discussion forum \"https://discuss.huggingface.co/t/trainingargument-does-not-work-on-colab/43372\" got the solution to use Transformers version 4.17 to make TrainingArguments work. Wanted to know why TrainingArguments not working in version 4.30?",
"After you `pip install accelerate -U`, did you restart the notebook? (It seems you are using colab notebook?)",
"Hi @ydshieh yes thanks that worked, yesterday I tried the same but not sure what changed but today it worked. Thanks for the comment. Yes, it is in colab notebook."
] | 1,688 | 1,688 | 1,688 |
NONE
| null |
### System Info
Hi @sgugger
I was trying to implement the same code that is present in the tutorial "https://github.com/huggingface/notebooks/blob/main/examples/language_modeling.ipynb", but when executing the TrainingArguments function I am getting the error "ImportError: Using the `Trainer` with `PyTorch` requires `accelerate>=0.20.1`: Please run `pip install transformers[torch]` or `pip install accelerate -U`". Even after installing what was suggested, I am still facing the same problem. My previous code, which worked well one month ago, also fails at exactly the **TrainingArguments** call.
Attaching the image below
<img width="1346" alt="image" src="https://github.com/huggingface/transformers/assets/96924488/01819fec-1bae-4d45-bea7-fa0ab60d63db">
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
training_args = TrainingArguments(
f"{model_checkpoint}-wikitext2",
evaluation_strategy = "epoch",
learning_rate=2e-5,
weight_decay=0.01,
push_to_hub=True
)
From Casual language modeling task
### Expected behavior
TrainingArguments should be working with no error.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24676/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24676/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24675
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24675/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24675/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24675/events
|
https://github.com/huggingface/transformers/pull/24675
| 1,790,355,846 |
PR_kwDOCUB6oc5Uv0iZ
| 24,675 |
Bump grpcio from 1.44.0 to 1.53.0 in /examples/research_projects/decision_transformer
|
{
"login": "dependabot[bot]",
"id": 49699333,
"node_id": "MDM6Qm90NDk2OTkzMzM=",
"avatar_url": "https://avatars.githubusercontent.com/in/29110?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dependabot%5Bbot%5D",
"html_url": "https://github.com/apps/dependabot",
"followers_url": "https://api.github.com/users/dependabot%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/dependabot%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/dependabot%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dependabot%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dependabot%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/dependabot%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/dependabot%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/dependabot%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/dependabot%5Bbot%5D/received_events",
"type": "Bot",
"site_admin": false
}
|
[
{
"id": 1905493434,
"node_id": "MDU6TGFiZWwxOTA1NDkzNDM0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/dependencies",
"name": "dependencies",
"color": "0366d6",
"default": false,
"description": "Pull requests that update a dependency file"
}
] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"OK, I won't notify you again about this release, but will get in touch when a new version is available. If you'd rather skip all updates until the next major or minor version, let me know by commenting `@dependabot ignore this major version` or `@dependabot ignore this minor version`.\n\nIf you change your mind, just re-open this PR and I'll resolve any conflicts on it."
] | 1,688 | 1,688 | 1,688 |
CONTRIBUTOR
| null |
Bumps [grpcio](https://github.com/grpc/grpc) from 1.44.0 to 1.53.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/grpc/grpc/releases">grpcio's releases</a>.</em></p>
<blockquote>
<h2>Release v1.53.0</h2>
<p>This is release 1.53.0 (<a href="https://github.com/grpc/grpc/blob/master/doc/g_stands_for.md">glockenspiel</a>) of gRPC Core.</p>
<p>For gRPC documentation, see <a href="https://grpc.io/">grpc.io</a>. For previous releases, see <a href="https://github.com/grpc/grpc/releases">Releases</a>.</p>
<p>This release contains refinements, improvements, and bug fixes, with highlights listed below.</p>
<h2>Core</h2>
<ul>
<li>xDS: fix crash when removing the last endpoint from the last locality in weighted_target. (<a href="https://redirect.github.com/grpc/grpc/pull/32592">#32592</a>)</li>
<li>filter stack: pass peer name up via recv_initial_metadata batch. (<a href="https://redirect.github.com/grpc/grpc/pull/31933">#31933</a>)</li>
<li>[EventEngine] Add advice against blocking work in callbacks. (<a href="https://redirect.github.com/grpc/grpc/pull/32397">#32397</a>)</li>
<li>[http2] Dont drop connections on metadata limit exceeded. (<a href="https://redirect.github.com/grpc/grpc/pull/32309">#32309</a>)</li>
<li>xDS: reject aggregate cluster with empty cluster list. (<a href="https://redirect.github.com/grpc/grpc/pull/32238">#32238</a>)</li>
<li>Fix Python epoll1 Fork Support. (<a href="https://redirect.github.com/grpc/grpc/pull/32196">#32196</a>)</li>
<li>server: introduce ServerMetricRecorder API and move per-call reporting from a C++ interceptor to a C-core filter. (<a href="https://redirect.github.com/grpc/grpc/pull/32106">#32106</a>)</li>
<li>[EventEngine] Add invalid handle types to the public API. (<a href="https://redirect.github.com/grpc/grpc/pull/32202">#32202</a>)</li>
<li>[EventEngine] Refactoring the EventEngine Test Suite: Part 1. (<a href="https://redirect.github.com/grpc/grpc/pull/32127">#32127</a>)</li>
<li>xDS: fix WeightedClusters total weight handling. (<a href="https://redirect.github.com/grpc/grpc/pull/32134">#32134</a>)</li>
</ul>
<h2>C++</h2>
<ul>
<li>Update minimum MSVC version to 2019. (<a href="https://redirect.github.com/grpc/grpc/pull/32615">#32615</a>)</li>
<li>Use CMake variables for paths in pkg-config files. (<a href="https://redirect.github.com/grpc/grpc/pull/31671">#31671</a>)</li>
</ul>
<h2>C#</h2>
<ul>
<li>Grpc.Tools: Use x86 protoc binaries on arm64 Windows. (<a href="https://redirect.github.com/grpc/grpc/pull/32017">#32017</a>)</li>
</ul>
<h2>Python</h2>
<ul>
<li>Support python 3.11 on aarch64. (<a href="https://redirect.github.com/grpc/grpc/pull/32270">#32270</a>)</li>
<li>Include .pyi file. (<a href="https://redirect.github.com/grpc/grpc/pull/32268">#32268</a>)</li>
<li>De-experimentalize wait-for-ready. (<a href="https://redirect.github.com/grpc/grpc/pull/32143">#32143</a>)</li>
<li>De-experimentalize compression. (<a href="https://redirect.github.com/grpc/grpc/pull/32138">#32138</a>)</li>
</ul>
<h2>Ruby</h2>
<ul>
<li>[ruby]: add pre-compiled binaries for ruby 3.2; drop them for ruby 2.6. (<a href="https://redirect.github.com/grpc/grpc/pull/32089">#32089</a>)</li>
</ul>
<h2>Release v1.53.0-pre2</h2>
<p>This is a prerelease of gRPC Core 1.53.0 (glockenspiel).</p>
<p>For gRPC documentation, see <a href="https://grpc.io/">grpc.io</a>. For previous releases, see <a href="https://github.com/grpc/grpc/releases">Releases</a>.</p>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a href="https://github.com/grpc/grpc/blob/master/doc/grpc_release_schedule.md">grpcio's changelog</a>.</em></p>
<blockquote>
<h1>gRPC Release Schedule</h1>
<p>Below is the release schedule for gRPC <a href="https://github.com/grpc/grpc-java/releases">Java</a>, <a href="https://github.com/grpc/grpc-go/releases">Go</a> and <a href="https://github.com/grpc/grpc/releases">Core</a> and its dependent languages C++, C#, Objective-C, PHP, Python and Ruby.</p>
<p>Releases are scheduled every six weeks on Tuesdays on a best effort basis. In some unavoidable situations a release may be delayed or released early or a language may skip a release altogether and do the next release to catch up with other languages. See the past releases in the links above. A six-week cycle gives us a good balance between delivering new features/fixes quickly and keeping the release overhead low.</p>
<p>The gRPC release support policy can be found <a href="https://grpc.io/docs/what-is-grpc/faq/#how-long-are-grpc-releases-supported-for">here</a>.</p>
<p>Releases are cut from release branches. For Core and Java repos, the release branch is cut two weeks before the scheduled release date. For Go, the branch is cut just before the release. An RC (release candidate) is published for Core and its dependent languages just after the branch cut. This RC is later promoted to release version if no further changes are made to the release branch. We do our best to keep head of master branch stable at all times regardless of release schedule. Daily build packages from master branch for C#, PHP, Python, Ruby and Protoc plugins are published on <a href="https://packages.grpc.io/">packages.grpc.io</a>. If you depend on gRPC in production we recommend to set up your CI system to test the RCs and, if possible, the daily builds.</p>
<p>Names of gRPC releases are <a href="https://github.com/grpc/grpc/blob/master/doc/g_stands_for.md">here</a>.</p>
<table>
<thead>
<tr>
<th>Release</th>
<th>Scheduled Branch Cut</th>
<th>Scheduled Release Date</th>
</tr>
</thead>
<tbody>
<tr>
<td>v1.17.0</td>
<td>Nov 19, 2018</td>
<td>Dec 4, 2018</td>
</tr>
<tr>
<td>v1.18.0</td>
<td>Jan 2, 2019</td>
<td>Jan 15, 2019</td>
</tr>
<tr>
<td>v1.19.0</td>
<td>Feb 12, 2019</td>
<td>Feb 26, 2019</td>
</tr>
<tr>
<td>v1.20.0</td>
<td>Mar 26, 2019</td>
<td>Apr 9, 2019</td>
</tr>
<tr>
<td>v1.21.0</td>
<td>May 7, 2019</td>
<td>May 21, 2019</td>
</tr>
<tr>
<td>v1.22.0</td>
<td>Jun 18, 2019</td>
<td>Jul 2, 2019</td>
</tr>
<tr>
<td>v1.23.0</td>
<td>Jul 30, 2019</td>
<td>Aug 13, 2019</td>
</tr>
<tr>
<td>v1.24.0</td>
<td>Sept 10, 2019</td>
<td>Sept 24, 2019</td>
</tr>
<tr>
<td>v1.25.0</td>
<td>Oct 22, 2019</td>
<td>Nov 5, 2019</td>
</tr>
<tr>
<td>v1.26.0</td>
<td>Dec 3, 2019</td>
<td>Dec 17, 2019</td>
</tr>
<tr>
<td>v1.27.0</td>
<td>Jan 14, 2020</td>
<td>Jan 28, 2020</td>
</tr>
<tr>
<td>v1.28.0</td>
<td>Feb 25, 2020</td>
<td>Mar 10, 2020</td>
</tr>
<tr>
<td>v1.29.0</td>
<td>Apr 7, 2020</td>
<td>Apr 21, 2020</td>
</tr>
<tr>
<td>v1.30.0</td>
<td>May 19, 2020</td>
<td>Jun 2, 2020</td>
</tr>
<tr>
<td>v1.31.0</td>
<td>Jul 14, 2020</td>
<td>Jul 28, 2020</td>
</tr>
<tr>
<td>v1.32.0</td>
<td>Aug 25, 2020</td>
<td>Sep 8, 2020</td>
</tr>
<tr>
<td>v1.33.0</td>
<td>Oct 6, 2020</td>
<td>Oct 20, 2020</td>
</tr>
<tr>
<td>v1.34.0</td>
<td>Nov 17, 2020</td>
<td>Dec 1, 2020</td>
</tr>
<tr>
<td>v1.35.0</td>
<td>Dec 29, 2020</td>
<td>Jan 12, 2021</td>
</tr>
<tr>
<td>v1.36.0</td>
<td>Feb 9, 2021</td>
<td>Feb 23, 2021</td>
</tr>
<tr>
<td>v1.37.0</td>
<td>Mar 23, 2021</td>
<td>Apr 6, 2021</td>
</tr>
<tr>
<td>v1.38.0</td>
<td>May 4, 2021</td>
<td>May 18, 2021</td>
</tr>
<tr>
<td>v1.39.0</td>
<td>Jun 15, 2021</td>
<td>Jun 29, 2021</td>
</tr>
<tr>
<td>v1.40.0</td>
<td>Jul 27, 2021</td>
<td>Aug 10, 2021</td>
</tr>
<tr>
<td>v1.41.0</td>
<td>Sep 7, 2021</td>
<td>Sep 21, 2021</td>
</tr>
<tr>
<td>v1.42.0</td>
<td>Oct 19, 2021</td>
<td>Nov 2, 2021</td>
</tr>
<tr>
<td>v1.43.0</td>
<td>Nov 30, 2021</td>
<td>Dec 14, 2021</td>
</tr>
</tbody>
</table>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/grpc/grpc/commit/358bfb581feeda5bf17dd3b96da1074d84a6ef8d"><code>358bfb5</code></a> Bump version to 1.53.0 (<a href="https://redirect.github.com/grpc/grpc/issues/32685">#32685</a>)</li>
<li><a href="https://github.com/grpc/grpc/commit/6e1ebe76d87a2e9b643c08b3e234d374edcd9e92"><code>6e1ebe7</code></a> Backport: Ensure compatibility with the new custom kokoro win2019 image (<a href="https://redirect.github.com/grpc/grpc/issues/326">#326</a>...</li>
<li><a href="https://github.com/grpc/grpc/commit/44a77f6e911b95e1bc2c909b348123b2da2c4375"><code>44a77f6</code></a> Backport 1.53: Update minimum MSVC version to 2019 (<a href="https://redirect.github.com/grpc/grpc/issues/32615">#32615</a>)</li>
<li><a href="https://github.com/grpc/grpc/commit/c11153cb4ef01ca5f83304b2e28edd0182b3c0d0"><code>c11153c</code></a> backport to 1.53: xDS: fix crash when removing the last endpoint from the las...</li>
<li><a href="https://github.com/grpc/grpc/commit/7c7712a6b08ebf1bdc18fc43dc871b47b3dffe97"><code>7c7712a</code></a> Bump version to 1.53.0-pre2. (<a href="https://redirect.github.com/grpc/grpc/issues/32545">#32545</a>)</li>
<li><a href="https://github.com/grpc/grpc/commit/a4017dc45e342064722a36181ed14e6d7b469d29"><code>a4017dc</code></a> backport to 1.53: [promises] Make Poll<T> its own type, not a variant<> (<a href="https://redirect.github.com/grpc/grpc/issues/32540">#32540</a>)</li>
<li><a href="https://github.com/grpc/grpc/commit/3f93c1667280e6f11a1eb35cccfb8c81c698bee5"><code>3f93c16</code></a> Fuzzer fix backport to v1.53 (<a href="https://redirect.github.com/grpc/grpc/issues/32511">#32511</a>)</li>
<li><a href="https://github.com/grpc/grpc/commit/5b244b25c2b87a85781ceeecd34ce0f8e8e7e840"><code>5b244b2</code></a> Bump release version to 1.53.0-pre1 (<a href="https://redirect.github.com/grpc/grpc/issues/32428">#32428</a>)</li>
<li><a href="https://github.com/grpc/grpc/commit/6589340efc39b87c94897d221eaf949213cdac87"><code>6589340</code></a> Bump core version 202302161703 (<a href="https://redirect.github.com/grpc/grpc/issues/32416">#32416</a>)</li>
<li><a href="https://github.com/grpc/grpc/commit/d49e1513063e6624e08eb6f59049596178a28783"><code>d49e151</code></a> [backoff] Add random early detection classifier (<a href="https://redirect.github.com/grpc/grpc/issues/32354">#32354</a>)</li>
<li>Additional commits viewable in <a href="https://github.com/grpc/grpc/compare/v1.44.0...v1.53.0">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts).
</details>
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24675/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24675/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24675",
"html_url": "https://github.com/huggingface/transformers/pull/24675",
"diff_url": "https://github.com/huggingface/transformers/pull/24675.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24675.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/24674
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24674/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24674/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24674/events
|
https://github.com/huggingface/transformers/pull/24674
| 1,790,351,393 |
PR_kwDOCUB6oc5Uvzh3
| 24,674 |
Fix non-deterministic Megatron-LM checkpoint name
|
{
"login": "janEbert",
"id": 12694897,
"node_id": "MDQ6VXNlcjEyNjk0ODk3",
"avatar_url": "https://avatars.githubusercontent.com/u/12694897?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/janEbert",
"html_url": "https://github.com/janEbert",
"followers_url": "https://api.github.com/users/janEbert/followers",
"following_url": "https://api.github.com/users/janEbert/following{/other_user}",
"gists_url": "https://api.github.com/users/janEbert/gists{/gist_id}",
"starred_url": "https://api.github.com/users/janEbert/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/janEbert/subscriptions",
"organizations_url": "https://api.github.com/users/janEbert/orgs",
"repos_url": "https://api.github.com/users/janEbert/repos",
"events_url": "https://api.github.com/users/janEbert/events{/privacy}",
"received_events_url": "https://api.github.com/users/janEbert/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks for the review! Actually that's only the case if `--use-distributed-optimizer` is not given! Otherwise an extra file called `distrib_optim.pt` is created on the most recent Megatron-LM commit. :)",
">Otherwise an extra file called distrib_optim.pt is created on the most recent Megatron-LM commit. :)\r\n\r\nCool, thank you for the info!"
] | 1,688 | 1,689 | 1,689 |
CONTRIBUTOR
| null |
# What does this PR do?
`os.listdir`'s order is not deterministic, which is a problem when querying the first listed file as in the code (`os.listdir(...)[0]`).
This can return a checkpoint name such as `distrib_optim.pt`, which does not include desired information such as the saved arguments originally given to Megatron-LM.
Instead, we try out different file names used by Megatron-LM (`model_rng.pt` was mentioned in other parts of the script; I'm assuming this is for backward-compatibility).
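A rough sketch of the idea (hedged: the helper and the exact candidate file names here are illustrative, not the literal diff of this PR):
```python
import os

def pick_megatron_checkpoint(rank_dir):
    # Prefer known Megatron-LM checkpoint names over whatever os.listdir()
    # happens to return first (its order is not deterministic).
    for name in ("model_optim_rng.pt", "model_rng.pt"):
        path = os.path.join(rank_dir, name)
        if os.path.isfile(path):
            return path
    raise FileNotFoundError(f"No Megatron-LM model checkpoint found in {rank_dir}")
```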
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@pacman100 wrote most of the code in there and made a Twitter post about this functionality, hope you're the right person to tag. :)
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24674/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24674/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24674",
"html_url": "https://github.com/huggingface/transformers/pull/24674",
"diff_url": "https://github.com/huggingface/transformers/pull/24674.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24674.patch",
"merged_at": 1689101705000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24673
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24673/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24673/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24673/events
|
https://github.com/huggingface/transformers/issues/24673
| 1,790,152,001 |
I_kwDOCUB6oc5qs41B
| 24,673 |
Language Modeling on Already Tokenized Data
|
{
"login": "jorisperrenet",
"id": 76250525,
"node_id": "MDQ6VXNlcjc2MjUwNTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/76250525?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jorisperrenet",
"html_url": "https://github.com/jorisperrenet",
"followers_url": "https://api.github.com/users/jorisperrenet/followers",
"following_url": "https://api.github.com/users/jorisperrenet/following{/other_user}",
"gists_url": "https://api.github.com/users/jorisperrenet/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jorisperrenet/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jorisperrenet/subscriptions",
"organizations_url": "https://api.github.com/users/jorisperrenet/orgs",
"repos_url": "https://api.github.com/users/jorisperrenet/repos",
"events_url": "https://api.github.com/users/jorisperrenet/events{/privacy}",
"received_events_url": "https://api.github.com/users/jorisperrenet/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The example scripts serve as examples 🤗 . If you need some custom modification(s), go for it.\r\n\r\nIn your case, you can probably skip the following block (and other similar places). You might need to assign your own tokenized dataset(s) to variables like `tokenized_datasets` however.\r\n\r\nhttps://github.com/huggingface/transformers/blob/9a5d468ba0562e2d5edf9da787881fa227132bca/examples/pytorch/language-modeling/run_clm.py#L456-L470"
] | 1,688 | 1,688 | 1,688 |
NONE
| null |
### System Info
When I try to execute [`run_clm.py`](https://github.com/huggingface/transformers/blob/main/examples/pytorch/language-modeling/run_clm.py) in the language modeling example, I am naturally asked to specify the `tokenizer_name`.
Yet my data is already tokenized, i.e. my train and validation files look (very crudely) like
```
0 3111 5100 2100 3100 6000
1000 4067 3031 3068 5141 3073
1000 3067 6031 3068 5141 3076
```
Thus, sequences on separate lines.
My question is: is there some kind of workaround such that I can train a model on this already tokenized data?
Any help would be great!
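Building on the suggestion in the comment above (skip the script's tokenization block and provide your own `tokenized_datasets`), here is a hedged sketch of how whitespace-separated token-id lines like these could be loaded; the file name and column layout are assumptions, not part of `run_clm.py`:
```python
from datasets import Dataset

def read_pretokenized(path):
    # Each line holds one sequence of whitespace-separated token ids.
    with open(path) as f:
        for line in f:
            ids = [int(tok) for tok in line.split()]
            if ids:
                yield {"input_ids": ids, "attention_mask": [1] * len(ids)}

tokenized_train = Dataset.from_generator(lambda: read_pretokenized("train.txt"))
```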
### Who can help?
@ArthurZucker
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Inside `transformers/examples/pytorch/language-modeling` create the folder `output` and the file `train.txt` (with some numbers in it, see above).
```
python run_clm.py --model_type gpt2 --output_dir output --do_train --train_file train.txt
```
It returns
```
ValueError: You are instantiating a new tokenizer from scratch. This is not
supported by this script.You can do it from another script, save it, and load
it from here, using --tokenizer_name.
```
### Expected behavior
I would expect/prefer that the output would be a warning specifying that a "passthrough" tokenizer will be used.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24673/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24673/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24672
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24672/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24672/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24672/events
|
https://github.com/huggingface/transformers/pull/24672
| 1,790,035,849 |
PR_kwDOCUB6oc5Uutdr
| 24,672 |
Remove WWT from README
|
{
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,688 | 1,689 | 1,689 |
MEMBER
| null |
Removes the line that presents Write With Transformer as the official demo for text generation as this hasn't been the case for a while.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24672/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24672/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24672",
"html_url": "https://github.com/huggingface/transformers/pull/24672",
"diff_url": "https://github.com/huggingface/transformers/pull/24672.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24672.patch",
"merged_at": 1689173888000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24671
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24671/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24671/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24671/events
|
https://github.com/huggingface/transformers/issues/24671
| 1,789,989,816 |
I_kwDOCUB6oc5qsRO4
| 24,671 |
Is there any plan to add kosmos-2 to the transformers.
|
{
"login": "BIGBALLON",
"id": 7837172,
"node_id": "MDQ6VXNlcjc4MzcxNzI=",
"avatar_url": "https://avatars.githubusercontent.com/u/7837172?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BIGBALLON",
"html_url": "https://github.com/BIGBALLON",
"followers_url": "https://api.github.com/users/BIGBALLON/followers",
"following_url": "https://api.github.com/users/BIGBALLON/following{/other_user}",
"gists_url": "https://api.github.com/users/BIGBALLON/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BIGBALLON/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BIGBALLON/subscriptions",
"organizations_url": "https://api.github.com/users/BIGBALLON/orgs",
"repos_url": "https://api.github.com/users/BIGBALLON/repos",
"events_url": "https://api.github.com/users/BIGBALLON/events{/privacy}",
"received_events_url": "https://api.github.com/users/BIGBALLON/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] |
open
| false |
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
] |
[
"Thank you for mentioning this :-). There is some early discussion within the team. I will come back to you once we have some decision.",
"This is tracked in PR #24709. (so far empty, but I will try to 🚀 )",
"@ydshieh I'm very excited to hear this news. I sincerely appreciate your efforts.",
"any updates?",
"Still on it (slowly) 🤗 ",
"Sure. Thank you. Appreciate those efforts.\r\n",
"I just want to say a big thank you for your effort @ydshieh! Looking forward to it.",
"@Rajmehta123 @yolandalalala @vanpelt \r\n\r\nThis [project](https://github.com/BIGBALLON/kosmos-2-gd) can be provided for everyone to try, I hope it can help everyone",
"Very nice! @BIGBALLON Thanks a lot!",
"@ydshieh Thank you again for your great contribution!",
"Amazing! @BIGBALLON Thanks a lot!",
"Just want to give a update: I am almost done the coding - just need to put everything together to finalize.\r\n\r\n(The model might ends up as a custom code on the Hub instead of directly available in `transformers` - I am not sure)",
"Hi, @ydshieh , is there any update? 😄 \r\n\r\n> Just want to give a update: I am almost done the coding - just need to put everything together to finalize.\r\n> \r\n> (The model might ends up as a custom code on the Hub instead of directly available in `transformers` - I am not sure)\r\n\r\n",
"Hi @BIGBALLON \r\n\r\nSorry for the delay due to some internal task! I will make it available on the Hub this week 🔥 .\r\n(And we will see if it would be directly in `transformers` later).\r\n",
"Hi @BIGBALLON @yolandalalala @Rajmehta123 \r\n\r\nAs promised, I put the code on this HuggingFace Hub repository [ydshieh/kosmos-2-patch14-224](https://huggingface.co/ydshieh/kosmos-2-patch14-224). You can use it like the code snippet at the end. It will give something like (when specifying `cleanup_and_extract=True` to `post_processor_generation`\r\n\r\n> <grounding> An image of<phrase> a snowman</phrase><object><patch_index_0044><patch_index_0863></object> warming himself by<phrase> a fire</phrase><object><patch_index_0005><patch_index_0911></object>.\r\n\r\nThis means:\r\n> A text description: An image of a snowman warming himself by a fire.\r\n\r\nand 2 objects\r\n\r\n> a snowman: position 44-863\r\n> a fire: position 5-911\r\n(position described as patch indices)\r\n\r\nThis information is given (with the default value `cleanup_and_extract=True` for `post_process_generation`) as:\r\n\r\n- `clean text`: An image of a snowman warming himself by a fire.\r\n- `entities`: [('a snowman', (12, 21), [(0.390625, 0.046875, 0.984375, 0.828125)]), ('a fire', (41, 47), [(0.171875, 0.015625, 0.484375, 0.890625)])]\r\n(the patch indices are converted to coordinates)\r\n\r\n~~I will provide a more complete post-processing function though~~ .\r\n\r\nPlease share your feedback on this (remote) model 🙏 ❤️ !\r\n\r\n**Note that if this model would be added into `transformers` codebase, there might be some changes which I could not guarantee it won't break the current behavior.**\r\n\r\n### Example\r\n\r\n```python\r\nimport requests\r\n\r\nfrom PIL import Image\r\nfrom transformers import AutoProcessor, AutoModelForVision2Seq\r\n\r\n\r\nmodel = AutoModelForVision2Seq.from_pretrained(\"ydshieh/kosmos-2-patch14-224\", trust_remote_code=True)\r\nprocessor = AutoProcessor.from_pretrained(\"ydshieh/kosmos-2-patch14-224\", trust_remote_code=True)\r\n\r\nprompt = \"<grounding>An image of\"\r\n\r\nurl = \"https://huggingface.co/ydshieh/kosmos-2-patch14-224/resolve/main/snowman.jpg\"\r\nimage = Image.open(requests.get(url, stream=True).raw)\r\n\r\n# The original Kosmos-2 demo saves the image first then reload it. 
For some images, this will give slightly different image input and change the generation outputs.\r\n# Uncomment the following 2 lines if you want to match the original demo's outputs.\r\n# (One example is the `two_dogs.jpg` from the demo)\r\n# image.save(\"new_image.jpg\")\r\n# image = Image.open(\"new_image.jpg\")\r\n\r\ninputs = processor(text=prompt, images=image, return_tensors=\"pt\")\r\n\r\ngenerated_ids = model.generate(\r\n pixel_values=inputs[\"pixel_values\"],\r\n input_ids=inputs[\"input_ids\"][:, :-1],\r\n attention_mask=inputs[\"attention_mask\"][:, :-1],\r\n img_features=None,\r\n img_attn_mask=inputs[\"img_attn_mask\"][:, :-1],\r\n use_cache=True,\r\n max_new_tokens=64,\r\n)\r\ngenerated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]\r\n\r\n# Specify `cleanup_and_extract=False` in order to see the raw model generation.\r\nprocessed_text = processor.post_process_generation(generated_text, cleanup_and_extract=False)\r\n\r\nprint(processed_text)\r\n# `<grounding> An image of<phrase> a snowman</phrase><object><patch_index_0044><patch_index_0863></object> warming himself by<phrase> a fire</phrase><object><patch_index_0005><patch_index_0911></object>.`\r\n\r\n# By default, the generated text is cleanup and the entities are extracted.\r\nprocessed_text, entities = processor.post_process_generation(generated_text)\r\n\r\nprint(processed_text)\r\n# `An image of a snowman warming himself by a fire.`\r\n\r\nprint(entities)\r\n# `[('a snowman', (12, 21), [(0.390625, 0.046875, 0.984375, 0.828125)]), ('a fire', (41, 47), [(0.171875, 0.015625, 0.484375, 0.890625)])]`\r\n\r\n```\r\n\r\n### Draw the bounding bboxes of the entities on the image \r\n\r\nOnce you have the entities, you can use [this helper function](https://huggingface.co/ydshieh/kosmos-2-patch14-224/blob/main/README.md#draw-the-bounding-bboxes-of-the-entities-on-the-image) to draw their bounding bboxes on the image.\r\n\r\n\r\n\r\n",
"Amazing work. Thank you.",
"Can this model be used for Q&A?",
"I am also trying to see what this model can do - in the paper, it can do more things than what the demo demonstrates",
"@ydshieh thanks again for your effort!!!\r\n\r\n@Rajmehta123 VQA is supported only to change the prompts.",
"@BIGBALLON \r\n\r\nYes, this model seems to be capable of doing quite different things, but it's challenging to showing this in a demo.\r\nI am still looking what I can add, but please share your ideas too if any 🙏 ",
"Hi, @ydshieh , \r\n\r\nthere are some suggestions: as for gradio app demo, we can provide two outputs, `Text` and `Image` \r\n\r\n- for Visual Grounding task: use `<grounding>` with your grounding question for prompt, and the output image -> `Image`\r\n- for VQA task: do not use <grounding>, and only descript your question, then the output text -> `Text`\r\n\r\nthe keypoint is prompt, check this for more detials : https://github.com/BIGBALLON/kosmos-2-gd\r\n\r\n",
"Hi everyone!\r\n\r\nI put a `Tasks` section in the README.md file\r\n\r\nhttps://huggingface.co/ydshieh/kosmos-2-patch14-224/blob/main/README.md#tasks\r\n\r\n@BIGBALLON For VQA, I also use `<grounding>`, just like the official demo for image captioning uses `<grounding>`. It works well however.",
"@ydshieh Any plan for supporting beam search in text-generaton?",
"The beam search is already supported in text-generaton for a long time. For this model, its default is beam size = 3.",
"> The beam search is already supported in text-generaton for a long time. For this model, its default is beam size = 3.\r\n\r\nI get this error while trying to use `num_beams = 3`. If set `use_cache=False`, the issue resolves, but the generation becomes 50x slower.\r\n\r\nNotImplementedError: Make sure that a `_reorder_cache` function is correctly implemented in transformers_modules.ydshieh.kosmos-2-patch14-224.48e3edebaeb02dc9fe105f40e85a43a3b440dc72.modeling_kosmos2 to enable beam search for <class 'transformers_modules.ydshieh.kosmos-2-patch14-224.48e3edebaeb02dc9fe105f40e85a43a3b440dc72.modeling_kosmos2.Kosmos2TextForCausalLM'>\r\n\r\nThe full args for generation below.\r\n```\r\nmodel_name == \"kosmos2\":\r\n generated_ids = model.generate(\r\n pixel_values=batch[\"pixel_values\"].to(\"cuda\"),\r\n input_ids=batch[\"input_ids\"][:, :-1].to(\"cuda\"),\r\n attention_mask=batch[\"attention_mask\"][:, :-1].to(\"cuda\"),\r\n img_features=None,\r\n img_attn_mask=batch[\"img_attn_mask\"][:, :-1].to(\"cuda\"),\r\n max_new_tokens=args.max_length,\r\n length_penalty=args.length_penalty,\r\n num_beams=args.num_beams,\r\n )\r\n```",
"I copy and paste this snippet from `llama` to `kosmos2`. Now working fine.\r\n\r\nhttps://github.com/huggingface/transformers/blob/62b20c9ecd6c9d2295265187a51ba0ea74ce046c/src/transformers/models/llama/modeling_llama.py#L901",
"Thank you for reporting. I will take a look, kinda strange here. Note there will be an official port in `transformers` soon, and my personal code on the Hub won't be the best place to use this model.",
"> Thank you for reporting. I will take a look, kinda strange here. Note there will be an official port in `transformers` soon, and my personal code on the Hub won't be the best place to use this model.\r\n\r\nThanks, I will wait for the official support then!",
"I'm getting an error on batch inference. \r\n\r\n```\r\n model = AutoModelForVision2Seq.from_pretrained(model_name_or_path, torch_dtype=autocast_dtype, device_map=\"auto\")\r\n processor = AutoProcessor.from_pretrained(model_name_or_path)\r\n processor.tokenizer.padding_side = \"left\"\r\n \r\n ----\r\n encoding = processor(\r\n images=processed_batch[\"image\"],\r\n text=processed_batch[\"prompted_question\"],\r\n padding=True,\r\n return_tensors=\"pt\",\r\n)\r\n```\r\n```\r\n File \"/path/site-packages/transformers/tokenization_utils_base.py\", line 720, in as_tensor\r\n return torch.tensor(value)\r\nValueError: expected sequence of length 90 at dim 1 (got 89)\r\n\r\nThe above exception was the direct cause of the following exception:\r\n-------\r\n raise ValueError(\r\nValueError: Unable to create tensor, you should probably activate truncation and/or padding with 'padding=True' 'truncation=True' to have batched tensors with the same length. Perhaps your features (`input_ids` in this case) have excessive nesting (inputs type `list` where type `int` is expected).\r\n```",
"Hi @rabiulcste Thank you for reporting.\r\n\r\n\r\nCould you provide a complete code snippet? There are missing variable definitions above and I can't run it directly. Thank you!"
] | 1,688 | 1,699 | null |
NONE
| null |
### Model description
Kosmos-2 is a grounded multimodal large language model, which integrates grounding and referring capabilities compared with Kosmos-1. The model can accept image regions selected by the user using bounding boxes as input, provide visual answers (i.e., bounding boxes), and ground the text output to the visual world.
**Is there any plan to add this model to transformers?**
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
Code: https://github.com/microsoft/unilm/tree/master/kosmos-2
Paper: https://arxiv.org/abs/2306.14824
Weight: the checkpoint can be downloaded from [here](https://conversationhub.blob.core.windows.net/beit-share-public/kosmos-2/kosmos-2.pt?sv=2021-10-04&st=2023-06-08T11%3A16%3A02Z&se=2033-06-09T11%3A16%3A00Z&sr=c&sp=r&sig=N4pfCVmSeq4L4tS8QbrFVsX6f6q844eft8xSuXdxU48%3D)
VQA demo: [here](https://github.com/BIGBALLON/kosmos-2-gd)
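As the comments above discuss, switching between grounded captioning and VQA is mainly a matter of the prompt; a hedged sketch (the exact tags and question wording are illustrative and may differ in the final `transformers` port):
```python
# Illustrative prompt strings only -- see the Hub repo referenced in the comments
# above for the full pre- and post-processing pipeline.
grounded_caption_prompt = "<grounding> An image of"
vqa_prompt = "<grounding> Question: What is the snowman doing? Answer:"
```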
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24671/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24671/timeline
| null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/24670
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24670/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24670/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24670/events
|
https://github.com/huggingface/transformers/issues/24670
| 1,789,893,561 |
I_kwDOCUB6oc5qr5u5
| 24,670 |
Unable to Get Decoded Output from Whisper
|
{
"login": "as1078",
"id": 97714332,
"node_id": "U_kgDOBdMAnA",
"avatar_url": "https://avatars.githubusercontent.com/u/97714332?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/as1078",
"html_url": "https://github.com/as1078",
"followers_url": "https://api.github.com/users/as1078/followers",
"following_url": "https://api.github.com/users/as1078/following{/other_user}",
"gists_url": "https://api.github.com/users/as1078/gists{/gist_id}",
"starred_url": "https://api.github.com/users/as1078/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/as1078/subscriptions",
"organizations_url": "https://api.github.com/users/as1078/orgs",
"repos_url": "https://api.github.com/users/as1078/repos",
"events_url": "https://api.github.com/users/as1078/events{/privacy}",
"received_events_url": "https://api.github.com/users/as1078/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hey @as1078 - did you pre-process your inputs according to the function `prepare_dataset`? See https://huggingface.co/blog/fine-tune-whisper#prepare-data\r\n\r\nI observe that your dataset has two columns present: `audio` and `sentence`. These are both columns corresponding to raw audio input data and raw target text data. As explained in the blog post / Colab, we need to pre-process the (audio, text) data to (log-mel spectrograms, token ids) respectively.\r\n\r\nYou should be able to run evaluation simply by pre-processing your dataset as-per the instructions provided and then passing the pre-processed dataset to the `trainer`.\r\n\r\nIf you want a more streamlined version of a Whisper evaluation script, I recommend you check out: https://github.com/huggingface/community-events/tree/main/whisper-fine-tuning-event#evaluation\r\n\r\nYou should just be able to specify your model id and dataset metadata and run evaluation directly",
"Yes, the data was preprocessed according to the `prepare_dataset` function. However, `trainer.evaluate()` still gave those errors. If I am running the script that you sent a link to, does the model checkpoint need to have been saved after training? Also, I have limited GPU access even with Google Colab Pro, so would saving checkpoints be a better way to save computational resources?",
"Hey @as1078 - could you provide an end-to-end reproducible code snippet to run your script? It would be helpful in checking that all the pre-processing steps have been applied correctly (the fact that we're seeing `audio` and `sentence` in your dataset means something has gone wrong!)\r\n\r\nThe script can either use a `model_id` for a checkpoint on the Hub (e.g. `\"openai/whisper-small\"` for the pre-trained small Whisper checkpoint, or the path to a locally saved checkpoint (e.g. if you set your save directory to `./my-model`, set `model_id=./my-model` in the training arguments)",
"Yes, I can. Here is the link to my colab file: https://colab.research.google.com/drive/10NaxWZtQgaYMN2fTnbRNNqV2baGkJVGG?usp=sharing. \r\nThe data directory is linked here: https://drive.google.com/drive/folders/1-3WqzbKH4ZFUm0r2rYwi2f7Wyw64bYa1?usp=sharing. Here is the link to the CSV file with the data files listed:\r\n[audio_new.csv](https://github.com/huggingface/transformers/files/12023463/audio_new.csv)\r\n",
"Hey @as1078 - thanks for sharing your script. It looks largely correct, but I can't run it since the data is saved in your Google Drive, so I can't link it to my Colab runtime without downloading it all. Could you perhaps load your dataset, and then push it to the Hub with:\r\n```python\r\ncommon_voice.push_to_hub(\"stuttering_asr\")\r\n```\r\n\r\nThis will create the dataset under your namespace, which will then allow me to run your script by streaming the data from the Hub.\r\n\r\nIf you're just interested in evaluation, there's a lightweight script [here](https://github.com/huggingface/community-events/tree/main/whisper-fine-tuning-event#evaluation) that you can use that will do all the pre-processing for you and not require the HF Trainer.",
"Yes, of course. The notebook has been updated with this: [https://colab.research.google.com/drive/10NaxWZtQgaYMN2fTnbRNNqV2baGkJVGG?usp=sharing](url). I can also try the evaluation script, but it would be great if you could take a look at the data and give me any feedback. Thanks so much for the help.",
"Hey @as1078 - I'm still not able to reproduce your script unfortunately. The dataset that you have pushed contains an `audio` column that is the absolute **path** to a local audio file, rather than an audio file itself. See the dataset viewer to inspect the first 100 examples: https://huggingface.co/datasets/amansingh203/stuttering_asr_dataset/viewer/amansingh203--stuttering_asr_dataset/train?row=0\r\n\r\nCould you first load your dataset as an audio dataset, and then push it to the Hub? This way, the audio files will be pushed, and subsequently I'll be able to load them locally. You can follow these steps for doing so: https://huggingface.co/docs/datasets/audio_load\r\n\r\nOnce you've done this, simply push to Hub:\r\n```python\r\ndataset.push_to_hub(\"stuttering_asr\")\r\n```",
"Hi @sanchit-gandhi . The data was loaded into my Hugging Face data account (I pushed it to the Hub), where the audio is now stored. Let me know if you have any issues accessing it. I was able to resolve the issues when I ran `trainer.evaluate()` (I was not using a `Sequence2SequenceTrainer`). However, when I generate transcripts, some of them are not in English, even though the tokenizer is set to transcribe English. Was wondering if this was an issue with the code, or if the model needs more epochs to run.",
"Hey @as1078 - nice work on figuring out the issue!\r\n\r\n> when I generate transcripts, some of them are not in English\r\n\r\nSince you're fine-tuning on an English-only dataset, it makes sense to use an English-only checkpoint as your starting point. See the table [here](https://huggingface.co/blog/fine-tune-whisper#introduction) for details. If doing this, ensure that you **do not** specify the language or task arguments to the tokenizer and processor - these are not required for English-only fine-tuning!\r\n\r\nIn short, you can swap `openai/whisper-small` for `openai/whisper-small.en` everywhere in your script, and remove all the `language` and `task` arguments",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Going to close this one as the original issue has been resolved, and a fix for the follow-up issue proposed in my last comment. Feel free to re-open this issue if you have a problem with the Trainer @as1078, or open a new issue if your transcriptions are not looking accurate (e.g. with the language)."
] | 1,688 | 1,693 | 1,693 |
NONE
| null |
### System Info
- `transformers` version: 4.31.0.dev0
- Platform: Linux-5.15.107+-x86_64-with-glibc2.31
- Python version: 3.10.12
- Huggingface_hub version: 0.16.2
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.1+cu118 (True)
- Tensorflow version (GPU?): 2.12.0 (True)
- Flax version (CPU?/GPU?/TPU?): 0.6.11 (gpu)
- Jax version: 0.4.10
- JaxLib version: 0.4.10
- Using GPU in script?: Yes (nvidia a100-sxm4-40gb)
- Using distributed or parallel set-up in script?: parallel
### Who can help?
speech model: @sanchit-gandhi
tokenizer: @ArthurZucker
trainer: @sgugger
PyTorch: @sgugger
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
All preprocessing steps for the data were the same as in the following notebook: https://colab.research.google.com/github/sanchit-gandhi/notebooks/blob/main/fine_tune_whisper.ipynb.
Training yielded results with proper WER metrics, but calling `trainer.evaluate()` led to an error because transcripts could not be generated.
```
dataset = dataset_dict['train']
dataset = dataset.train_test_split(test_size=0.25)
print(dataset)
DatasetDict({
train: Dataset({
features: ['audio', 'sentence'],
num_rows: 1750
})
test: Dataset({
features: ['audio', 'sentence'],
num_rows: 584
})
})
from transformers import Seq2SeqTrainingArguments
training_args = Seq2SeqTrainingArguments(
output_dir="./whisper-small-hi", # change to a repo name of your choice
per_device_train_batch_size=16,
gradient_accumulation_steps=1, # increase by 2x for every 2x decrease in batch size
learning_rate=1e-5,
warmup_steps=500,
max_steps=4000,
gradient_checkpointing=True,
fp16=True,
evaluation_strategy="steps",
per_device_eval_batch_size=8,
predict_with_generate=True,
generation_max_length=225,
save_steps=1000,
eval_steps=1000,
logging_steps=25,
report_to=["tensorboard"],
load_best_model_at_end=True,
metric_for_best_model="wer",
greater_is_better=False,
push_to_hub=True,
)
from transformers import Trainer
trainer = Trainer(
model=model,
args=training_args,
train_dataset=common_voice['train'],
eval_dataset=common_voice['test'],
data_collator=data_collator,
tokenizer=processor.feature_extractor,
compute_metrics=compute_metrics,
)
trainer.evaluate()
in <cell line: 1>:1 │
│ │
│ /usr/local/lib/python3.10/dist-packages/transformers/trainer.py:2945 in evaluate │
│ │
│ 2942 │ │ start_time = time.time() │
│ 2943 │ │ │
│ 2944 │ │ eval_loop = self.prediction_loop if self.args.use_legacy_prediction_loop else se │
│ ❱ 2945 │ │ output = eval_loop( │
│ 2946 │ │ │ eval_dataloader, │
│ 2947 │ │ │ description="Evaluation", │
│ 2948 │ │ │ # No point gathering the predictions if there are no metrics, otherwise we d │
│ │
│ /usr/local/lib/python3.10/dist-packages/transformers/trainer.py:3227 in evaluation_loop │
│ │
│ 3224 │ │ │ │ │ EvalPrediction(predictions=all_preds, label_ids=all_labels, inputs=a │
│ 3225 │ │ │ │ ) │
│ 3226 │ │ │ else: │
│ ❱ 3227 │ │ │ │ metrics = self.compute_metrics(EvalPrediction(predictions=all_preds, lab │
│ 3228 │ │ else: │
│ 3229 │ │ │ metrics = {} │
│ 3230 │
│ in compute_metrics:13 │
│ │
│ /usr/local/lib/python3.10/dist-packages/transformers/tokenization_utils_base.py:3490 in │
│ batch_decode │
│ │
│ 3487 │ │ Returns: │
│ 3488 │ │ │ `List[str]`: The list of decoded sentences. │
│ 3489 │ │ """ │
│ ❱ 3490 │ │ return [ │
│ 3491 │ │ │ self.decode( │
│ 3492 │ │ │ │ seq, │
│ 3493 │ │ │ │ skip_special_tokens=skip_special_tokens, │
│ │
│ /usr/local/lib/python3.10/dist-packages/transformers/tokenization_utils_base.py:3491 in │
│ <listcomp> │
│ │
│ 3488 │ │ │ `List[str]`: The list of decoded sentences. │
│ 3489 │ │ """ │
│ 3490 │ │ return [ │
│ ❱ 3491 │ │ │ self.decode( │
│ 3492 │ │ │ │ seq, │
│ 3493 │ │ │ │ skip_special_tokens=skip_special_tokens, │
│ 3494 │ │ │ │ clean_up_tokenization_spaces=clean_up_tokenization_spaces, │
│ │
│ /usr/local/lib/python3.10/dist-packages/transformers/models/whisper/tokenization_whisper.py:592 │
│ in decode │
│ │
│ 589 │ │ Returns: │
│ 590 │ │ │ `str`: The decoded sentence. │
│ 591 │ │ """ │
│ ❱ 592 │ │ text = super().decode( │
│ 593 │ │ │ token_ids, │
│ 594 │ │ │ skip_special_tokens=skip_special_tokens, │
│ 595 │ │ │ clean_up_tokenization_spaces=clean_up_tokenization_spaces, │
│ │
│ /usr/local/lib/python3.10/dist-packages/transformers/tokenization_utils_base.py:3530 in decode │
│ │
│ 3527 │ │ # Convert inputs to python lists │
│ 3528 │ │ token_ids = to_py_obj(token_ids) │
│ 3529 │ │ │
│ ❱ 3530 │ │ return self._decode( │
│ 3531 │ │ │ token_ids=token_ids, │
│ 3532 │ │ │ skip_special_tokens=skip_special_tokens, │
│ 3533 │ │ │ clean_up_tokenization_spaces=clean_up_tokenization_spaces, │
│ │
│ /usr/local/lib/python3.10/dist-packages/transformers/models/whisper/tokenization_whisper.py:619 │
│ in _decode │
│ │
│ 616 │ │ │ decoder_start_token_id = self.convert_tokens_to_ids("<|startoftranscript|>") │
│ 617 │ │ │ token_ids = self._strip_prompt(token_ids, prompt_token_id, decoder_start_tok │
│ 618 │ │ │
│ ❱ 619 │ │ filtered_tokens = self.convert_ids_to_tokens(token_ids, skip_special_tokens=skip │
│ 620 │ │ │
│ 621 │ │ # To avoid mixing byte-level and unicode for byte-level BPT │
│ 622 │ │ # we need to build string separately for added tokens and byte-level tokens │
│ │
│ /usr/local/lib/python3.10/dist-packages/transformers/tokenization_utils.py:906 in │
│ convert_ids_to_tokens │
│ │
│ 903 │ │ │ │ return self._convert_id_to_token(ids) │
│ 904 │ │ tokens = [] │
│ 905 │ │ for index in ids: │
│ ❱ 906 │ │ │ index = int(index) │
│ 907 │ │ │ if skip_special_tokens and index in self.all_special_ids: │
│ 908 │ │ │ │ continue │
│ 909 │ │ │ if index in self.added_tokens_decoder: │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
TypeError: int() argument must be a string, a bytes-like object or a real number, not 'list'
```
### Expected behavior
I would expect `trainer.evaluate()` to return proper metrics (validation loss and WER) along with generated transcripts for each of the samples fed into the Whisper model.
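A minimal sketch of the fix pointed to in the closing comment, assuming the `model`, `common_voice`, `data_collator`, `processor`, and `compute_metrics` objects defined in the reproduction above: `predict_with_generate=True` only takes effect when the `Seq2SeqTrainingArguments` are passed to a `Seq2SeqTrainer`, which runs generation during evaluation and hands token ids (rather than raw logits) to `compute_metrics`.
```python
from transformers import Seq2SeqTrainer

# Same Seq2SeqTrainingArguments as above, but passed to Seq2SeqTrainer instead of Trainer,
# so evaluation calls generate() and compute_metrics receives decodable token ids.
trainer = Seq2SeqTrainer(
    model=model,
    args=training_args,
    train_dataset=common_voice["train"],
    eval_dataset=common_voice["test"],
    data_collator=data_collator,
    tokenizer=processor.feature_extractor,
    compute_metrics=compute_metrics,
)

trainer.evaluate()
```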
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24670/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24670/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24669
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24669/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24669/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24669/events
|
https://github.com/huggingface/transformers/pull/24669
| 1,789,864,720 |
PR_kwDOCUB6oc5UuIHD
| 24,669 |
Add Nucleotide Transformer notebooks and restructure notebook list
|
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,688 | 1,688 | 1,688 |
MEMBER
| null |
As the name suggests, this adds links to the recent Nucleotide Transformer notebooks in the main `transformers` docs! It also restructures the notebooks list - right now the `Other` list is just full of bio models, so I moved them into their own section.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24669/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24669/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24669",
"html_url": "https://github.com/huggingface/transformers/pull/24669",
"diff_url": "https://github.com/huggingface/transformers/pull/24669.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24669.patch",
"merged_at": 1688578127000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24668
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24668/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24668/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24668/events
|
https://github.com/huggingface/transformers/pull/24668
| 1,789,569,001 |
PR_kwDOCUB6oc5UtHjH
| 24,668 |
updating _compute_mask_indices fn to work with torch compile
|
{
"login": "Kirandevraj",
"id": 10723538,
"node_id": "MDQ6VXNlcjEwNzIzNTM4",
"avatar_url": "https://avatars.githubusercontent.com/u/10723538?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Kirandevraj",
"html_url": "https://github.com/Kirandevraj",
"followers_url": "https://api.github.com/users/Kirandevraj/followers",
"following_url": "https://api.github.com/users/Kirandevraj/following{/other_user}",
"gists_url": "https://api.github.com/users/Kirandevraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Kirandevraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Kirandevraj/subscriptions",
"organizations_url": "https://api.github.com/users/Kirandevraj/orgs",
"repos_url": "https://api.github.com/users/Kirandevraj/repos",
"events_url": "https://api.github.com/users/Kirandevraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/Kirandevraj/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24668). All of your documentation changes will be reflected on that endpoint.",
"\r\n> And also add a new test to check that compiling the forward call works when we have spec aug activated\r\n\r\nWe already have a test case ```test_mask_time_prob_ctc``` that check the forward call with spec aug activated. \r\nDo you mean when using compile mode - we want to have a test case?",
"> Do you mean when using compile mode - we want to have a test case?\r\n\r\nYes please - one test to make sure this PR gives the expected behaviour would be grand!",
"I believe we just need an end-to-end test here and then we're good to go right @Kirandevraj?",
"Yes, \r\nAlso other test cases that expects ```np.ndarray``` output from the ```_compute_mask_indices``` have to updated to expect the new output ```torch.Tensor``` from ```_compute_mask_indices```",
"Awesome - this is super close to completion then! Would you like to see it home? Feel free to ping me with any other questions / queries, more than happy to help! ",
"Yes, I am working on the test case. let me share it soon.",
"When I use ```sampled_negative_indices``` in test cases. I am facing the following error from the last line.\r\n```AssertionError('Loop-carried variable _tmp1 has initial type <[1, 2048], int1> but is re-assigned to <[1, 2048], int8> in loop! Please make sure that the type stays consistent.')``` error by the 'inductor'. \r\nSample code: \r\n```\r\nimport torch\r\nfrom transformers import AutoFeatureExtractor, Wav2Vec2ForPreTraining\r\nfrom transformers.models.wav2vec2.modeling_wav2vec2 import _compute_mask_indices, _sample_negative_indices\r\nfrom datasets import load_dataset\r\n\r\nif torch.cuda.is_available():\r\n device = torch.device(\"cuda\")\r\n print(\"CUDA is available\")\r\nelse:\r\n device = torch.device(\"cpu\")\r\n print(\"CUDA is not available\")\r\n\r\n\r\nfeature_extractor = AutoFeatureExtractor.from_pretrained(\"facebook/wav2vec2-base\")\r\nmodel = Wav2Vec2ForPreTraining.from_pretrained(\"facebook/wav2vec2-base\")\r\nmodel.to(device)\r\n\r\nds = load_dataset(\"hf-internal-testing/librispeech_asr_dummy\", \"clean\", split=\"validation\")\r\ninput_values = feature_extractor(ds[0][\"audio\"][\"array\"], return_tensors=\"pt\").input_values.to(device) # Batch size 1\r\n\r\n# compute masked indices\r\nbatch_size, raw_sequence_length = input_values.shape\r\nsequence_length = model._get_feat_extract_output_lengths(raw_sequence_length).item()\r\nmask_time_indices = _compute_mask_indices(shape=(batch_size, sequence_length), mask_prob=0.2, mask_length=2)\r\n\r\nsampled_negative_indices = _sample_negative_indices(\r\nfeatures_shape=(batch_size, sequence_length),\r\nnum_negatives=model.config.num_negatives,\r\nmask_time_indices=mask_time_indices,\r\n)\r\n\r\nmask_time_indices = torch.tensor(data=mask_time_indices, device=input_values.device, dtype=torch.long)\r\nsampled_negative_indices = torch.tensor(data=sampled_negative_indices, device=input_values.device, dtype=torch.long)\r\n\r\ncompiled_model = torch.compile(model)\r\noutputs = compiled_model(input_values,mask_time_indices=mask_time_indices, sampled_negative_indices=sampled_negative_indices)\r\n```\r\nI believe this is from ```if sampled_negative_indices is not None:``` sub-block code snippet in forward function from wav2vec2. \r\nShould we go about fixing it to make it work with torch compile or can we create test case without using the ```sampled_negative_indices```\r\nCan you direct me where the assertion error is getting raised to inquire.",
"Do you have a full stack trace @Kirandevraj? This should give us a clue as to where the dtype is changing. IMO it could be worth going through the `sampled_negative_indices` function with a debugger and just checking that the input/output dtypes are consistent. The error message suggests we're doing an un-intentional upcast/downcast somewhere in the function",
"The above error was from ```neg_is_pos.any()```. I updated the code to ```neg_is_pos.sum() > 0``` to work with torch compile.\r\nI have added the test case ```test_torch_compiled_model_for_pretraining``` taking inspiration from ```test_model_for_pretraining```. I have updated few inplace operation with ```torch.where``` function. ",
"Awesome, sounds great! Let me know when you'd like me to take a second look at the code + tests! Getting close to merge here!",
"These are the four test errors that I am facing in my brach and the main branch in my system and all the other test cases are passing from the test_modeling_wav2vec2.py\r\n```\r\nFAILED tests/models/wav2vec2/test_modeling_wav2vec2.py::Wav2Vec2ModelTest::test_initialization - AssertionError: 0.5709530115127563 not found in [0.0, 1.0] : Parameter wav2vec2.encoder.pos_conv_embed.conv.parametrizations.weight.original0 of model <class 'transformers.models.wav2vec2.modeling_wav2vec2.Wa...\r\nFAILED tests/models/wav2vec2/test_modeling_wav2vec2.py::Wav2Vec2ModelTest::test_torch_fx - AssertionError: Couldn't trace module: symbolically traced variables cannot be used as inputs to control flow\r\nFAILED tests/models/wav2vec2/test_modeling_wav2vec2.py::Wav2Vec2ModelTest::test_torch_fx_output_loss - AssertionError: Couldn't trace module: symbolically traced variables cannot be used as inputs to control flow\r\nFAILED tests/models/wav2vec2/test_modeling_wav2vec2.py::Wav2Vec2RobustModelTest::test_initialization - AssertionError: 0.585796058177948 not found in [0.0, 1.0] : Parameter wav2vec2.encoder.pos_conv_embed.conv.parametrizations.weight.original0 of model <class 'transformers.models.wav2vec2.modeling_wav2vec2.Wav...\r\n```\r\nCan you take a look.",
"Hey @Kirandevraj! The first two errors look like model initialisation errors (the dynamic range of the weights isn't in the expected range). These should be quite easy to fix as a last step, but I don't think we've changed anything in the state dict or the initialisation process, so shouldn't expect them to change?\r\n\r\nThe other two failing tests suggest that the model is not working with torch fx tracing. Could you try running the model with trace enabled?\r\n\r\n```python\r\nfrom transformers import Wav2Vec2Model, Wav2Vec2FeatureExtractor\r\nfrom transformers.utils.fx import symbolic_trace\r\nimport numpy as np\r\n\r\nmodel = Wav2Vec2Model.from_pretrained(\"hf-internal-testing/tiny-random-wav2vec2\")\r\nfeature_extractor = Wav2Vec2FeatureExtractor.from_pretrained(\"hf-internal-testing/tiny-random-wav2vec2\")\r\n\r\nraw_speech = np.ones(16000)\r\n\r\ninputs = feature_extractor(raw_speech, sampling_rate=16000, return_tensors=\"pt\")\r\ninput_names = list(inputs.keys())\r\n\r\ntraced_model = symbolic_trace(model, input_names)\r\ntraced_output = traced_model(inputs)\r\n```",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,688 | 1,697 | 1,697 |
NONE
| null |
fixes #22849
The inplace operations are replaced with out-of-place ones to fix the torch compile computational graph breakage.
The PR converts the NumPy operations in the `_compute_mask_indices` function to torch operations.
`_compute_mask_indices` is used when SpecAugment is enabled during wav2vec2 training.
@sanchit-gandhi
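A minimal illustration of the pattern this PR describes, not the actual diff: an in-place masked assignment can break the `torch.compile` graph, while an out-of-place `torch.where` keeps it traceable (the function names below are illustrative).
```python
import torch

def mask_inplace(scores: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    scores[mask] = 0.0  # in-place, data-dependent indexing can cause graph breaks
    return scores

def mask_out_of_place(scores: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    # out-of-place equivalent that torch.compile can trace without breaking
    return torch.where(mask, torch.zeros_like(scores), scores)

compiled = torch.compile(mask_out_of_place)
print(compiled(torch.randn(2, 8), torch.rand(2, 8) > 0.5))
```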
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24668/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24668/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24668",
"html_url": "https://github.com/huggingface/transformers/pull/24668",
"diff_url": "https://github.com/huggingface/transformers/pull/24668.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24668.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/24667
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24667/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24667/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24667/events
|
https://github.com/huggingface/transformers/pull/24667
| 1,789,476,142 |
PR_kwDOCUB6oc5UszFH
| 24,667 |
Unpin `huggingface_hub`
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thank you @ydshieh !"
] | 1,688 | 1,688 | 1,688 |
COLLABORATOR
| null |
# What does this PR do?
- As the release `0.16` is out today.
- Also, use `--upgrade-strategy eager` in `pip install` which is required to respect [this comment](https://github.com/huggingface/transformers/pull/24424#pullrequestreview-1493647494).
The default `-U` (which is associated with `only-if-needed`) won't upgrade to all available new versions. See [the doc](https://pip.pypa.io/en/stable/development/architecture/upgrade-options/#controlling-what-gets-installed):
> packages are only upgraded if they are named in the pip command or a requirement file (i.e, they are direct requirements), or an upgraded parent needs a later version of the dependency than is currently installed.
- Since some packages are upgraded, let's change the cache version number so that the new versions can be included in the cache.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24667/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24667/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24667",
"html_url": "https://github.com/huggingface/transformers/pull/24667",
"diff_url": "https://github.com/huggingface/transformers/pull/24667.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24667.patch",
"merged_at": 1688568550000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24666
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24666/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24666/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24666/events
|
https://github.com/huggingface/transformers/pull/24666
| 1,789,290,796 |
PR_kwDOCUB6oc5UsKpe
| 24,666 |
Whisper: fix prompted max length
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@amyeroberts @sanchit-gandhi \r\n\r\nAfter the latest changes, a warning is emitted when we cross `config.max_position_embeddings` for the first time.\r\n\r\nFor instance, if you now run \r\n\r\n```py\r\nfrom transformers import AutoModelForCausalLM, AutoTokenizer\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(\"distilgpt2\")\r\nmodel = AutoModelForCausalLM.from_pretrained(\"distilgpt2\").to(\"cuda\")\r\n\r\ninputs = tokenizer([\"The quick brown\"], return_tensors=\"pt\").to(\"cuda\")\r\n# distilgpt2 has a maximum length of 1024\r\ngen_out = model.generate(**inputs, do_sample=True, eos_token_id=-1, max_length=1025)\r\n```\r\n\r\nYou'll see\r\n\r\n```\r\nThis is a friendly reminder - the current text generation call will exceed the model's predefined maximum length (1024). Depending on the model, you may observe exceptions, performance degradation, or nothing at all.\r\n```\r\n\r\n(And if you set `max_length=1026`, you'll see the warning right before the exceptions. This is because we can technically generate `config.max_position_embeddings + 1` tokens even with restrictive position embeddings, although we shouldn't!)"
] | 1,688 | 1,689 | 1,688 |
MEMBER
| null |
# What does this PR do?
Fixes #24600
#23724 Added the ability to guide generation with Whisper through `prompt_ids`. It was increasing the generation length by the length of the prompt -- these tokens were being hardcoded, and thus "not generated".
However, in the default case, we were already setting the generation length to the maximum allowed model length (see [model config](https://huggingface.co/openai/whisper-large-v2/blob/main/config.json#L42)). This increment was forcing us to go beyond the maximum length and, because the model uses an `nn.Embedding` for the position embedding, indexing exceptions started popping up on long audio inputs :D
This PR modifies the length extension to what I believe was the author's original goal: only increment the length if `max_new_tokens` is passed. By default, this argument is not set and should correspond to the "new" (=non-prompt) generated tokens.
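A rough sketch of the resulting behaviour, using a hypothetical helper (this is not the merged implementation): the prompt length only extends the budget when the caller explicitly asks for `max_new_tokens`; otherwise the default `max_length` (the model's configured maximum) is left untouched.
```python
def effective_max_length(max_length, max_new_tokens, prompt_len):
    # max_new_tokens counts only "new" (non-prompt) tokens, so the hardcoded
    # prompt tokens are added on top of it; the default max_length is not extended.
    if max_new_tokens is not None:
        return max_new_tokens + prompt_len
    return max_length

print(effective_max_length(max_length=448, max_new_tokens=None, prompt_len=10))  # 448
print(effective_max_length(max_length=448, max_new_tokens=100, prompt_len=10))   # 110
```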
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24666/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24666/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24666",
"html_url": "https://github.com/huggingface/transformers/pull/24666",
"diff_url": "https://github.com/huggingface/transformers/pull/24666.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24666.patch",
"merged_at": 1688749898000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24665
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24665/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24665/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24665/events
|
https://github.com/huggingface/transformers/issues/24665
| 1,789,206,843 |
I_kwDOCUB6oc5qpSE7
| 24,665 |
Add ELECTRA/DeBERTa v3 pretraining script (replaced token detection pretraining)
|
{
"login": "BramVanroy",
"id": 2779410,
"node_id": "MDQ6VXNlcjI3Nzk0MTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BramVanroy",
"html_url": "https://github.com/BramVanroy",
"followers_url": "https://api.github.com/users/BramVanroy/followers",
"following_url": "https://api.github.com/users/BramVanroy/following{/other_user}",
"gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions",
"organizations_url": "https://api.github.com/users/BramVanroy/orgs",
"repos_url": "https://api.github.com/users/BramVanroy/repos",
"events_url": "https://api.github.com/users/BramVanroy/events{/privacy}",
"received_events_url": "https://api.github.com/users/BramVanroy/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hey @BramVanroy really good idea!\r\n\r\nI think a good start would be the codebasis of latest CamemBERTa model. It uses own DeBERTa v3 pretraining code (modified from the ELECTRA implementation from NVIDIA). In general, DeBERTa v3 uses Gradient-Disentangled Embedding Sharing (GDES) in pretraining compared to v2, which is also implemented in CamemBERTa repository.\r\n\r\nRepo is here: https://github.com/WissamAntoun/CamemBERTa\r\n\r\n@WissamAntoun is the first author of CamemBERTa paper and also active here :hugs: ",
"Great find @stefan-it! I see that the code is modified from the [NVIDIA repo](https://github.com/NVIDIA/DeepLearningExamples/tree/master/TensorFlow2/LanguageModeling/ELECTRA). That's probably a great starting point. Personally, I'd also like to see a `torch` equivalent (which I might work on if no one else picks this up).",
"Hey @BramVanroy ,\r\n\r\nregarding to ELECTRA and PyTorch I recently discovered this repo:\r\n\r\nhttps://github.com/ficstamas/charmen-electra\r\n\r\nIt implements a kind of Charformer with ELECTRA, but ELECTRA pretraining is also supported. This could be also a good start and interesting for a PyTorch reference, I'm currently testing the ELECTRA Charformer approach :)\r\n\r\n(/cc @ficstamas who is maintainer of that repo :hugs: )",
"Hey,\r\n\r\n@stefan-it Thanks for the cc!\r\n\r\nThere is an [unofficial](https://github.com/richarddwang/electra_pytorch/tree/master) implementation of ELECTRA which can be a good starting point for you. I used this repository as a reference to make my own. \r\n\r\nAlso here is a more documented, stripped down version of [my implementation*](https://gist.github.com/ficstamas/263435c924abdd7f742d9925ab12b0d1) if you need it.\r\n\r\n*In this example, I initialized it from a checkpoint, but you can initialize it however you like.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Are there no plans to add official RTD/Deberta-V3 pretraining support? "
] | 1,688 | 1,706 | 1,691 |
COLLABORATOR
| null |
### Feature request
It would be welcome to add a pretraining script for the replaced token detection task that [ELECTRA](https://github.com/google-research/electra) and, later, [DeBERTa v3](https://github.com/microsoft/DeBERTa/tree/master/experiments/language_model#pre-training-with-replaced-token-detection-task) were trained on.
Note that DeBERTa v3 models are actually of type [DeBERTa v2](https://huggingface.co/docs/transformers/model_doc/deberta-v2) under the hood according to [the config file](https://huggingface.co/microsoft/deberta-v3-large/blob/main/config.json#L2).
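For context, a minimal sketch of the replaced-token-detection objective such a script would implement, assuming a discriminator that outputs one logit per token (all names below are illustrative):
```python
import torch
import torch.nn.functional as F

def rtd_loss(disc_logits, original_ids, corrupted_ids, attention_mask):
    # label 1 where the generator replaced a token, 0 where the original survived
    labels = (original_ids != corrupted_ids).float()
    per_token = F.binary_cross_entropy_with_logits(disc_logits, labels, reduction="none")
    mask = attention_mask.float()
    return (per_token * mask).sum() / mask.sum()
```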
### Motivation
While DeBERTa v3 and especially ELECTRA are quite "old" in terms of LLM life spans, for completeness' sake it could be worthwhile to add a training example that is fully compatible with all the recent developments in the HF ecosystem (accelerate, peft, datasets, evaluate, etc.).
### Your contribution
Depending on the interest and my own time I can either review or contribute to this as well.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24665/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 2,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24665/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24664
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24664/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24664/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24664/events
|
https://github.com/huggingface/transformers/pull/24664
| 1,789,077,470 |
PR_kwDOCUB6oc5UrciX
| 24,664 |
🌐 [i18n-KO] Fixed Korean and English `quicktour.md`
|
{
"login": "wonhyeongseo",
"id": 29195190,
"node_id": "MDQ6VXNlcjI5MTk1MTkw",
"avatar_url": "https://avatars.githubusercontent.com/u/29195190?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wonhyeongseo",
"html_url": "https://github.com/wonhyeongseo",
"followers_url": "https://api.github.com/users/wonhyeongseo/followers",
"following_url": "https://api.github.com/users/wonhyeongseo/following{/other_user}",
"gists_url": "https://api.github.com/users/wonhyeongseo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wonhyeongseo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wonhyeongseo/subscriptions",
"organizations_url": "https://api.github.com/users/wonhyeongseo/orgs",
"repos_url": "https://api.github.com/users/wonhyeongseo/repos",
"events_url": "https://api.github.com/users/wonhyeongseo/events{/privacy}",
"received_events_url": "https://api.github.com/users/wonhyeongseo/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@sgugger, @ArthurZucker, @eunseojo May you please review this PR?\r\nThank you so much for your support!"
] | 1,688 | 1,689 | 1,689 |
CONTRIBUTOR
| null |
<!-- Please title the PR "🌐 [i18n-KO] Translated `<your_file>.md` to Korean" -->
# What does this PR do?
Updated and fixed some issues on the `quicktour.md` file for the Korean and English documentation.
Thank you in advance for your review.
Part of https://github.com/huggingface/transformers/issues/20179
<!-- This gets recorded in the main issue! If you are practicing with the PseudoLab repo, we would appreciate it if you removed this! :smile: -->
## Before reviewing
- [x] Check for missing / redundant translations (missing/duplicate translation check)
- [x] Grammar Check (spell check)
- [x] Review or Add new terms to glossary (check and add terms)
- [x] Check Inline TOC (e.g. `[[lowercased-header]]`)
- [x] Check live-preview for gotchas (confirm correct rendering with live-preview)
## Who can review? (Initial)
<!-- 1. Only after all of the checks above are complete, please reveal the comment below asking the PseudoLab team members for a review! -->
Team PseudoLab, may you please review this PR? @0525hhgus, @KIHOON71, @sim-so, @gabrielwithappy, @HanNayeoniee, @wonhyeongseo, @jungnerd
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review? (Final)
<!-- 2. Only after the review with the PseudoLab team members is finished, please reveal the comment below asking Hugging Face staff for a review! -->
@sgugger, @ArthurZucker, @eunseojo May you please review this PR?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24664/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24664/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24664",
"html_url": "https://github.com/huggingface/transformers/pull/24664",
"diff_url": "https://github.com/huggingface/transformers/pull/24664.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24664.patch",
"merged_at": 1689941968000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24663
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24663/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24663/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24663/events
|
https://github.com/huggingface/transformers/pull/24663
| 1,789,024,098 |
PR_kwDOCUB6oc5UrRMG
| 24,663 |
Fix `EncodecModelTest::test_multi_gpu_data_parallel_forward`
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,688 | 1,688 | 1,688 |
COLLABORATOR
| null |
# What does this PR do?
`test_multi_gpu_data_parallel_forward` requires the batch size to be an even number if the batch dim is not at position 0 in the output shape.
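A minimal illustration of why, assuming two GPUs: `nn.DataParallel` splits along dim 0, so an odd batch yields unequal chunks, and gathering them back fails when the model's output does not keep the batch on dim 0.
```python
import torch

# a batch of 3 split across 2 replicas -> chunks of size 2 and 1
chunks = torch.arange(3 * 4).reshape(3, 4).chunk(2, dim=0)
print([c.shape for c in chunks])  # [torch.Size([2, 4]), torch.Size([1, 4])]
```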
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24663/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24663/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24663",
"html_url": "https://github.com/huggingface/transformers/pull/24663",
"diff_url": "https://github.com/huggingface/transformers/pull/24663.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24663.patch",
"merged_at": 1688549866000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24662
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24662/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24662/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24662/events
|
https://github.com/huggingface/transformers/issues/24662
| 1,789,023,930 |
I_kwDOCUB6oc5qola6
| 24,662 |
Loading mT5 checkpoint will load from UMT5 class
|
{
"login": "MattYoon",
"id": 57797966,
"node_id": "MDQ6VXNlcjU3Nzk3OTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/57797966?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MattYoon",
"html_url": "https://github.com/MattYoon",
"followers_url": "https://api.github.com/users/MattYoon/followers",
"following_url": "https://api.github.com/users/MattYoon/following{/other_user}",
"gists_url": "https://api.github.com/users/MattYoon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MattYoon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MattYoon/subscriptions",
"organizations_url": "https://api.github.com/users/MattYoon/orgs",
"repos_url": "https://api.github.com/users/MattYoon/repos",
"events_url": "https://api.github.com/users/MattYoon/events{/privacy}",
"received_events_url": "https://api.github.com/users/MattYoon/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @ArthurZucker ",
"Hey! Indeed one of our CI test is failing because of that. Looking into it now! ",
"Yep, the issue is that in the `CONFIG_MAPPING_NAMES` `umt5` maps to mt5 (since they have the same configuration file). This is messing with the overall mapping. A custom coming has to be create, or find a way to properly update! 😉 ",
"Hmm. The values in `CONFIG_MAPPING(_NAMES)` is used as keys when creating `MODEL_MAPPING`. We should remove the entries of `umt5` in `CONFIG_MAPPING_NAMES` and other mappings.\r\n\r\nThose models should be loaded in a non-auto way. \r\n\r\n",
"We can't just remove every mapping, some of our checks and doc require them. Let's just add a config for UMT5."
] | 1,688 | 1,689 | 1,689 |
CONTRIBUTOR
| null |
### System Info
- `transformers` version: 4.31.0.dev0
- Platform: Linux-5.15.0-41-generic-x86_64-with-glibc2.31
- Python version: 3.9.16
- Huggingface_hub version: 0.14.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.0+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
@ArthurZucker
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```Python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
model = AutoModelForSeq2SeqLM.from_pretrained('google/mt5-small')
print(type(model))
#transformers.models.umt5.modeling_umt5.UMT5ForConditionalGeneration
```
### Expected behavior
@ArthurZucker Thank you for the recent integration of umT5. However, on the latest branch of transformers, loading a normal mT5 checkpoint loads it into the UMT5 class. This does not happen with 4.30.2.
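A workaround sketch until the auto-mapping is fixed: load through the explicit class instead of `AutoModelForSeq2SeqLM`.
```Python
from transformers import MT5ForConditionalGeneration

model = MT5ForConditionalGeneration.from_pretrained("google/mt5-small")
print(type(model))
# transformers.models.mt5.modeling_mt5.MT5ForConditionalGeneration
```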
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24662/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24662/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24661
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24661/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24661/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24661/events
|
https://github.com/huggingface/transformers/pull/24661
| 1,788,925,128 |
PR_kwDOCUB6oc5Uq71g
| 24,661 |
Fix `VisionTextDualEncoderIntegrationTest`
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,688 | 1,688 | 1,688 |
COLLABORATOR
| null |
# What does this PR do?
The test files need a tiny update after PR #24585.
So far, CI gets errors like
```bash
RuntimeError: Only Tensors of floating point and complex dtype can require gradients
```
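For reference, a minimal way this error typically surfaces (illustrative, not the exact failing test): requesting gradients on an integer tensor.
```python
import torch

torch.ones(2, 3, dtype=torch.long, requires_grad=True)
# RuntimeError: Only Tensors of floating point and complex dtype can require gradients
```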
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24661/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24661/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24661",
"html_url": "https://github.com/huggingface/transformers/pull/24661",
"diff_url": "https://github.com/huggingface/transformers/pull/24661.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24661.patch",
"merged_at": 1688557471000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24660
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24660/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24660/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24660/events
|
https://github.com/huggingface/transformers/pull/24660
| 1,788,878,707 |
PR_kwDOCUB6oc5UqxwP
| 24,660 |
Add is_torch_mps_available function to utils
|
{
"login": "NripeshN",
"id": 86844847,
"node_id": "MDQ6VXNlcjg2ODQ0ODQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/86844847?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NripeshN",
"html_url": "https://github.com/NripeshN",
"followers_url": "https://api.github.com/users/NripeshN/followers",
"following_url": "https://api.github.com/users/NripeshN/following{/other_user}",
"gists_url": "https://api.github.com/users/NripeshN/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NripeshN/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NripeshN/subscriptions",
"organizations_url": "https://api.github.com/users/NripeshN/orgs",
"repos_url": "https://api.github.com/users/NripeshN/repos",
"events_url": "https://api.github.com/users/NripeshN/events{/privacy}",
"received_events_url": "https://api.github.com/users/NripeshN/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @NripeshN, \r\n\r\nThanks for the PR! Could you please fill out the PR description, including the motivation for adding this function? \r\n\r\nFor the style and quality checks, you'll need to run `make style` at the top level of the repo and push any changes. \r\n\r\ncc @ydshieh ",
"Hi @NripeshN \r\n\r\nWould this new `is_torch_mps_available` be used somewhere in `transformers`? Currently, this PR only adds the definition but not using it anywhere.",
"> Hi @NripeshN\r\n> \r\n> Would this new `is_torch_mps_available` be used somewhere in `transformers`? Currently, this PR only adds the definition but not using it anywhere.\r\n\r\nI was planning on creating a new pull request where I'd be using this function in transformers. This function would provide GPU acceleration for apple silicon Macs. \r\n",
"Hi @ydshieh,\r\nI have used is_torch_mps_available in the latest push",
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,688 | 1,688 | 1,688 |
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
Added an `is_torch_mps_available` utility for Apple silicon GPU (MPS) acceleration.
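A minimal sketch of what such a helper can look like (not necessarily the exact code in this PR):
```python
import torch

def is_torch_mps_available() -> bool:
    # the MPS backend exists on Apple silicon builds of torch >= 1.12
    return hasattr(torch.backends, "mps") and torch.backends.mps.is_available()
```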
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24660/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24660/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24660",
"html_url": "https://github.com/huggingface/transformers/pull/24660",
"diff_url": "https://github.com/huggingface/transformers/pull/24660.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24660.patch",
"merged_at": 1688565740000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24659
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24659/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24659/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24659/events
|
https://github.com/huggingface/transformers/issues/24659
| 1,788,869,955 |
I_kwDOCUB6oc5qn_1D
| 24,659 |
Add HyenaDNA model
|
{
"login": "tanaymeh",
"id": 26519539,
"node_id": "MDQ6VXNlcjI2NTE5NTM5",
"avatar_url": "https://avatars.githubusercontent.com/u/26519539?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tanaymeh",
"html_url": "https://github.com/tanaymeh",
"followers_url": "https://api.github.com/users/tanaymeh/followers",
"following_url": "https://api.github.com/users/tanaymeh/following{/other_user}",
"gists_url": "https://api.github.com/users/tanaymeh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tanaymeh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tanaymeh/subscriptions",
"organizations_url": "https://api.github.com/users/tanaymeh/orgs",
"repos_url": "https://api.github.com/users/tanaymeh/repos",
"events_url": "https://api.github.com/users/tanaymeh/events{/privacy}",
"received_events_url": "https://api.github.com/users/tanaymeh/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] |
open
| false | null |
[] |
[
"Hi @heytanay, thanks for opening this issue! \r\n\r\nThe easiest and recommended way to make a model available in `transformers` is to add the modeling code directly on the hub: https://huggingface.co/docs/transformers/custom_models\r\n\r\nThis means, once working, the model can be found and used immediately without having to go through the PR process. We find this is a lot quicker as the bar for adding code into the library is high due to the maintenance cost of every new model, and so reviews take quite a while.\r\n\r\nLet us know if you have any questions about how to add a model using this process. Looking forward to seeing this model in action! ",
"Thanks for this @amyeroberts! I will proceed with that!",
"Hi, @heytanay we are also working on adding hyena models to transformers, how far along are you ?",
"@djaym7 As Amy mentioned, I won't be implementing the model directly in transformers and instead will be adding it directly to the hub. If you are doing it / already have done it, please go ahead!"
] | 1,688 | 1,689 | null |
CONTRIBUTOR
| null |
### Model description
HyenaDNA is a long-range genomic foundation model pretrained on context lengths of up to 1 million tokens at single nucleotide resolution.
I would like to add this model to `transformers`.
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
Code: https://github.com/HazyResearch/hyena-dna
Weights: https://huggingface.co/LongSafari
Paper: https://arxiv.org/abs/2306.15794
cc @exnx
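A minimal loading sketch for the hub-hosted custom-code route discussed in the comments above; the checkpoint id below is a placeholder, not a released repo name.
```python
from transformers import AutoModel, AutoTokenizer

repo_id = "LongSafari/<hyenadna-checkpoint>"  # placeholder id, replace with a real one
tokenizer = AutoTokenizer.from_pretrained(repo_id, trust_remote_code=True)
model = AutoModel.from_pretrained(repo_id, trust_remote_code=True)
```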
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24659/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24659/timeline
| null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/24658
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24658/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24658/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24658/events
|
https://github.com/huggingface/transformers/issues/24658
| 1,788,850,030 |
I_kwDOCUB6oc5qn69u
| 24,658 |
CUDA error: out of memory with zero3 offload
|
{
"login": "pchiang5",
"id": 29170925,
"node_id": "MDQ6VXNlcjI5MTcwOTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/29170925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pchiang5",
"html_url": "https://github.com/pchiang5",
"followers_url": "https://api.github.com/users/pchiang5/followers",
"following_url": "https://api.github.com/users/pchiang5/following{/other_user}",
"gists_url": "https://api.github.com/users/pchiang5/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pchiang5/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pchiang5/subscriptions",
"organizations_url": "https://api.github.com/users/pchiang5/orgs",
"repos_url": "https://api.github.com/users/pchiang5/repos",
"events_url": "https://api.github.com/users/pchiang5/events{/privacy}",
"received_events_url": "https://api.github.com/users/pchiang5/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hello, are you running the notebook as is? or are you running it as a script with distributed launcher such as `deepspeed`/`torchrun`/`accelerate launch`?\r\n\r\nYou can't run DeepSpeed in a notebook. You need to convert the notebook to a script and run the script via a distributed launcher similar to the translation example that you are running",
"Worked!\r\nThank you @pacman100",
"Hi @pacman100\r\n\r\nI encountered another issue. The .py ran fine alone but with deepspeed it encountered `server socket has failed to bind to [::]:29500 (errno: 98 - Address already in use).` It could not be resolved by specifying a port `os.environ[\"MASTER_PORT\"] = \"9994\"`. Was it because ray assigns multiple ports at the same time and I only have 1 GPU? Thank you.\r\n\r\n```\r\n\r\nfrom ray.air import session\r\n\r\ndef train(config):\r\n # ...\r\n session.report({\"metric\": metric}, checkpoint=checkpoint)\r\n\r\nFor more information please see https://docs.ray.io/en/latest/tune/api/trainable.html\r\n\r\n warnings.warn(\r\n== Status ==\r\nCurrent time: 2023-07-05 16:13:18 (running for 00:00:00.64)\r\nUsing FIFO scheduling algorithm.\r\nLogical resource usage: 0/48 CPUs, 0/1 GPUs\r\nResult logdir: /root/ray_results/_objective_2023-07-05_16-13-18\r\nNumber of trials: 1/100 (1 PENDING)\r\n+---------------------+----------+-------+-----------------+---------------------+--------------------+------------------------+---------+----------------+----------------+\r\n| Trial name | status | loc | learning_rate | lr_scheduler_type | num_train_epochs | per_device_train_bat | seed | warmup_steps | weight_decay |\r\n| | | | | | | ch_size | | | |\r\n|---------------------+----------+-------+-----------------+---------------------+--------------------+------------------------+---------+----------------+----------------|\r\n| _objective_fd2e0c55 | PENDING | | 4.36572e-06 | polynomial | 1 |\r\n 12 | 59.5864 | 1832.57 | 0.0619684 |\r\n+---------------------+----------+-------+-----------------+---------------------+--------------------+------------------------+---------+----------------+----------------+\r\n\r\n\r\n(pid=1172302) [2023-07-05 16:13:25,088] [INFO] [real_accelerator.py:110:get_accelerator] Setting ds_accelerator to cuda (auto detect)\r\n(_objective pid=1172302) /home/pc/miniconda3/envs/Transformers/lib/python3.11/site-packages/loompy/bus_file.py:67: NumbaDeprecationWarning: The 'nopython' keyword argument was not supplied to the 'numba.jit' decorator. The implicit default value for this argument is currently False, but it will be changed to True in Numba 0.59.0. See https://numba.readthedocs.io/en/stable/reference/deprecation.html#deprecation-of-object-mode-fall-back-behaviour-when-using-jit for details.\r\n(_objective pid=1172302) @jit\r\n(_objective pid=1172302) /home/pc/miniconda3/envs/Transformers/lib/python3.11/site-packages/loompy/bus_file.py:84: NumbaDeprecationWarning: The 'nopython' keyword argument was not supplied to the 'numba.jit' decorator. The implicit default value for this argument is currently False, but it will be changed to True in Numba 0.59.0. See https://numba.readthedocs.io/en/stable/reference/deprecation.html#deprecation-of-object-mode-fall-back-behaviour-when-using-jit for details.\r\n(_objective pid=1172302) @jit\r\n(_objective pid=1172302) /home/pc/miniconda3/envs/Transformers/lib/python3.11/site-packages/loompy/bus_file.py:101: NumbaDeprecationWarning: The 'nopython' keyword argument was not supplied to the 'numba.jit' decorator. The implicit default value for this argument is currently False, but it will be changed to True in Numba 0.59.0. 
See https://numba.readthedocs.io/en/stable/reference/deprecation.html#deprecation-of-object-mode-fall-back-behaviour-when-using-jit for details.\r\n(_objective pid=1172302) @jit\r\n2023-07-05 16:13:34,406 ERROR tune_controller.py:873 -- Trial task failed for trial _objective_fd2e0c55\r\nTraceback (most recent call last):\r\n File \"/home/pc/miniconda3/envs/Transformers/lib/python3.11/site-packages/ray/air/execution/_internal/event_manager.py\", line 110, in resolve_future\r\n result = ray.get(future)\r\n ^^^^^^^^^^^^^^^\r\n File \"/home/pc/miniconda3/envs/Transformers/lib/python3.11/site-packages/ray/_private/auto_init_hook.py\", line 18, in auto_init_wrapper\r\n return fn(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^\r\n File \"/home/pc/miniconda3/envs/Transformers/lib/python3.11/site-packages/ray/_private/client_mode_hook.py\", line 103, in wrapper\r\n return func(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/pc/miniconda3/envs/Transformers/lib/python3.11/site-packages/ray/_private/worker.py\", line 2540, in get\r\n raise value.as_instanceof_cause()\r\nray.exceptions.RayTaskError(RuntimeError): ray::ImplicitFunc.train() (pid=1172302, ip=172.31.110.212, actor_id=ffa19b5f202ac72158b2946001000000, repr=_objective)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/pc/miniconda3/envs/Transformers/lib/python3.11/site-packages/ray/tune/trainable/trainable.py\", line 389, in train\r\n raise skipped from exception_cause(skipped)\r\n File \"/home/pc/miniconda3/envs/Transformers/lib/python3.11/site-packages/ray/tune/trainable/function_trainable.py\", line 336, in entrypoint\r\n return self._trainable_func(\r\n ^^^^^^^^^^^^^^^^^^^^^\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/pc/miniconda3/envs/Transformers/lib/python3.11/site-packages/ray/tune/trainable/function_trainable.py\", line 653, in _trainable_func\r\n output = fn()\r\n ^^^^\r\n File \"/home/pc/miniconda3/envs/Transformers/lib/python3.11/site-packages/transformers/integrations.py\", line 357, in dynamic_modules_import_trainable\r\n return trainable(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/pc/miniconda3/envs/Transformers/lib/python3.11/site-packages/ray/tune/trainable/util.py\", line 324, in inner\r\n return trainable(config, **fn_kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/pc/miniconda3/envs/Transformers/lib/python3.11/site-packages/transformers/integrations.py\", line 258, in _objective\r\n local_trainer.train(resume_from_checkpoint=checkpoint, trial=trial)\r\n File \"/home/pc/miniconda3/envs/Transformers/lib/python3.11/site-packages/transformers/trainer.py\", line 1614, in train self._hp_search_setup(trial)\r\n File \"/home/pc/miniconda3/envs/Transformers/lib/python3.11/site-packages/transformers/trainer.py\", line 1330, in _hp_search_setup\r\n self.create_accelerator_and_postprocess()\r\n File \"/home/pc/miniconda3/envs/Transformers/lib/python3.11/site-packages/transformers/trainer.py\", line 3968, in create_accelerator_and_postprocess\r\n self.accelerator = Accelerator(\r\n ^^^^^^^^^^^^\r\n File \"/home/pc/miniconda3/envs/Transformers/lib/python3.11/site-packages/accelerate/accelerator.py\", line 345, in __init__\r\n self.state = AcceleratorState(\r\n ^^^^^^^^^^^^^^^^^\r\n File \"/home/pc/miniconda3/envs/Transformers/lib/python3.11/site-packages/accelerate/state.py\", line 680, in __init__\r\n PartialState(cpu, **kwargs)\r\n File \"/home/pc/miniconda3/envs/Transformers/lib/python3.11/site-packages/accelerate/state.py\", line 191, in 
__init__\r\n torch.distributed.init_process_group(backend=self.backend, **kwargs)\r\n File \"/home/pc/miniconda3/envs/Transformers/lib/python3.11/site-packages/torch/distributed/distributed_c10d.py\", line 900, in init_process_group\r\n store, rank, world_size = next(rendezvous_iterator)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/pc/miniconda3/envs/Transformers/lib/python3.11/site-packages/torch/distributed/rendezvous.py\", line 245, in _env_rendezvous_handler\r\n store = _create_c10d_store(master_addr, master_port, rank, world_size, timeout)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/pc/miniconda3/envs/Transformers/lib/python3.11/site-packages/torch/distributed/rendezvous.py\", line 176, in _create_c10d_store\r\n return TCPStore(\r\n ^^^^^^^^^\r\nRuntimeError: The server socket has failed to listen on any local network address. The server socket has failed to bind to [::]:29500 (errno: 98 - Address already in use). The server socket has failed to bind to DESKTOP-6FHRRIO:29500 (errno: 98 - Address already in use).\r\nResult for _objective_fd2e0c55:\r\n date: 2023-07-05_16-13-25\r\n hostname: DESKTOP-6FHRRIO\r\n node_ip: 172.31.110.212\r\n pid: 1172302\r\n timestamp: 1688544805\r\n trial_id: fd2e0c55\r\n\r\n(_objective pid=1172302) [W socket.cpp:426] [c10d] The server socket has failed to bind to [::]:29500 (errno: 98 - Address already in use).\r\n(_objective pid=1172302) [W socket.cpp:426] [c10d] The server socket has failed to bind to DESKTOP-6FHRRIO:29500 (errno: 98 - Address already in use).\r\n(_objective pid=1172302) [E socket.cpp:462] [c10d] The server socket has failed to listen on any local network address.\r\n(pid=1172435) [2023-07-05 16:13:41,310] [INFO] [real_accelerator.py:110:get_accelerator] Setting ds_accelerator to cuda (auto detect)\r\n(_objective pid=1172435) /home/pc/miniconda3/envs/Transformers/lib/python3.11/site-packages/loompy/bus_file.py:67: NumbaDeprecationWarning: The 'nopython' keyword argument was not supplied to the 'numba.jit' decorator. The implicit default value for this argument is currently False, but it will be changed to True in Numba 0.59.0. See https://numba.readthedocs.io/en/stable/reference/deprecation.html#deprecation-of-object-mode-fall-back-behaviour-when-using-jit for details.\r\n(_objective pid=1172435) @jit\r\n(_objective pid=1172435) /home/pc/miniconda3/envs/Transformers/lib/python3.11/site-packages/loompy/bus_file.py:84: NumbaDeprecationWarning: The 'nopython' keyword argument was not supplied to the 'numba.jit' decorator. The implicit default value for this argument is currently False, but it will be changed to True in Numba 0.59.0. See https://numba.readthedocs.io/en/stable/reference/deprecation.html#deprecation-of-object-mode-fall-back-behaviour-when-using-jit for details.\r\n(_objective pid=1172435) @jit\r\n(_objective pid=1172435) /home/pc/miniconda3/envs/Transformers/lib/python3.11/site-packages/loompy/bus_file.py:101: NumbaDeprecationWarning: The 'nopython' keyword argument was not supplied to the 'numba.jit' decorator. The implicit default value for this argument is currently False, but it will be changed to True in Numba 0.59.0. See https://numba.readthedocs.io/en/stable/reference/deprecation.html#deprecation-of-object-mode-fall-back-behaviour-when-using-jit for details.\r\n(_objective pid=1172435) @jit\r\n^C[2023-07-05 16:13:47,161] [INFO] [launch.py:315:sigkill_handler] Killing subprocess 1168420\r\n*\r\n```",
"@pchiang5 Perhaps the process has not been fully shut down, please use `ps -aux` to find the remaining process, then use `kill -9 process_pid` to kill them totally.",
"@2033329616 Thank you for your feedback. Yes, it shall be due to an open process not shut down and could be resolved by randomly assigning a new port before the previous call. \r\n\r\nHowever, I found the incompatibility of ray tune + hyperopt with deepspeed launcher is the main issue: Without ray tune + hyperopt, it ran with successful CPU offload. With deepspeed and ray+hyperopt as below, the zero3 offload did not work because the amount of VRAM consumption was identical to that without deepspeed. \r\n\r\n```\r\n\r\n# create the trainer\r\ntrainer = Trainer(\r\n model_init=model_init,\r\n args=training_args_init,\r\n data_collator=DataCollatorForCellClassification(),\r\n train_dataset=organ_trainset,\r\n eval_dataset=organ_evalset,\r\n compute_metrics=compute_metrics,\r\n callbacks = [EarlyStoppingCallback(early_stopping_patience=3)]\r\n)\r\n\r\n# specify raytune hyperparameter search space\r\nray_config = {\r\n \"num_train_epochs\": tune.choice([epochs]),\r\n \"learning_rate\": tune.loguniform(1e-6, 1e-3),\r\n \"weight_decay\": tune.uniform(0.0, 0.3),\r\n \"lr_scheduler_type\": tune.choice([\"linear\",\"cosine\",\"polynomial\"]),\r\n \"warmup_steps\": tune.uniform(100, 2000),\r\n \"seed\": tune.uniform(0,100),\r\n \"per_device_train_batch_size\": tune.choice([geneformer_batch_size])\r\n}\r\n\r\nhyperopt_search = HyperOptSearch(\r\n metric=\"eval_macro_f1\", mode=\"max\")\r\n\r\nearly_stop = {\r\n \"training_iteration\": 10\r\n}\r\n\r\n# optimize hyperparameters\r\ntrainer.hyperparameter_search(\r\n direction=\"maximize\",\r\n backend=\"ray\",\r\n resources_per_trial={\"cpu\":18,\"gpu\":1},\r\n hp_space=lambda _: ray_config,\r\n stop=early_stop,\r\n search_alg=hyperopt_search,\r\n n_trials=100, # number of trials\r\n progress_reporter=tune.CLIReporter(max_report_frequency=600,\r\n sort_by_metric=True,\r\n max_progress_rows=100,\r\n mode=\"max\",\r\n metric=\"eval_macro_f1\",\r\n metric_columns=[\"loss\", \"eval_loss\", \"eval_accuracy\", \"eval_macro_f1\"])\r\n```",
"Hi @pacman100,\r\n\r\n> You can't run DeepSpeed in a Jupyter notebook. You need to convert the notebook to a script and run the script via a distributed launcher similar to the translation example that you are running.\r\n\r\nI am also having a CUDA OOM error running DeepSpeed in a notebook on a single node with a single GPU (training [Segformer](https://huggingface.co/docs/transformers/model_doc/segformer) on a GPU with 8 GB RAM). I expected it to work, given [the deployment excerpt](https://huggingface.co/docs/transformers/main_classes/deepspeed#deployment-in-notebooks) from the docs. Why would you say so?\r\n\r\nI injected the env variables as reported in the docs using stage 3 with CPU offloading, but still the error remains.\r\n\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Hi @pacman100. Any news?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Hello @DiTo97, please open a new issue with a minimal reproducer example (self-contained runnable example) for us to deep dive.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"> Hello @DiTo97, please open a new issue with a minimal reproducer example (self-contained runnable example) for us to deep dive.\r\n\r\nHi @pacman100,\r\n\r\nSorry for the late reply. I can give you a MRE of the notebook and deepspeed config, as well as the detailed specs of the machine, but cannot share the dataset, being private.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,688 | 1,698 | 1,698 |
NONE
| null |
### System Info
WSL2
- `transformers` version: 4.30.2
- Platform: Linux-5.15.90.1-microsoft-standard-WSL2-x86_64-with-glibc2.35
- Python version: 3.11.4
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help?
@pacman100
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. the example notebook `https://huggingface.co/ctheodoris/Geneformer/blob/main/examples/cell_classification.ipynb`
2. modify here `training_args_init = TrainingArguments(**training_args, deepspeed ='ds_config_zero3.json')`
3. the example dataset `https://huggingface.co/datasets/ctheodoris/Genecorpus-30M/tree/main/example_input_files/cell_classification/cell_type_annotation/cell_type_train_data.dataset`
ds_config_zero3.json:
This JSON worked well with the transformers DeepSpeed test: `deepspeed examples/pytorch/translation/run_translation.py --deepspeed tests/deepspeed/ds_config_zero3.json --model_name_or_path t5-small --output_dir output_dir --do_eval --max_eval_samples 50 --warmup_steps 50 --max_source_length 128 --val_max_target_length 128 --overwrite_output_dir --per_device_eval_batch_size 4 --predict_with_generate --dataset_config "ro-en" --fp16 --source_lang en --target_lang ro --dataset_name wmt16 --source_prefix "translate English to Romanian: "`
I also confirmed that CPU offload (the transfer from VRAM to CPU RAM) works with `https://github.com/huggingface/transformers-bloom-inference`.
```
{
"fp16": {
"enabled": "auto",
"loss_scale": 0,
"loss_scale_window": 1000,
"initial_scale_power": 16,
"hysteresis": 2,
"min_loss_scale": 1
},
"bf16": {
"enabled": "auto"
},
"optimizer": {
"type": "AdamW",
"params": {
"lr": "auto",
"betas": "auto",
"eps": "auto",
"weight_decay": "auto"
}
},
"scheduler": {
"type": "WarmupLR",
"params": {
"warmup_min_lr": "auto",
"warmup_max_lr": "auto",
"warmup_num_steps": "auto"
}
},
"zero_optimization": {
"stage": 3,
"offload_optimizer": {
"device": "cpu",
"pin_memory": true
},
"offload_param": {
"device": "cpu",
"pin_memory": true
},
"overlap_comm": true,
"contiguous_gradients": true,
"sub_group_size": 1e8,
"reduce_bucket_size": "auto",
"stage3_prefetch_bucket_size": "auto",
"stage3_param_persistence_threshold": "auto",
"stage3_max_live_parameters": 1e8,
"stage3_max_reuse_distance": 1e8,
"stage3_gather_16bit_weights_on_model_save": true
},
"gradient_accumulation_steps": "auto",
"gradient_clipping": "auto",
"steps_per_print": 2000,
"train_batch_size": "auto",
"train_micro_batch_size_per_gpu": "auto",
"wall_clock_breakdown": false
}
```
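Concretely, the change in step 2 amounts to something like the following (a sketch only — `training_args`, `model`, and the dataset objects are placeholders for the ones built earlier in the notebook):
```
from transformers import Trainer, TrainingArguments

# `training_args` is the dict of arguments built earlier in the notebook;
# the only change is adding the `deepspeed` entry pointing at the config above
training_args_init = TrainingArguments(**training_args, deepspeed="ds_config_zero3.json")

trainer = Trainer(
    model=model,                    # placeholder: the fine-tuning model from the notebook
    args=training_args_init,
    train_dataset=train_dataset,    # placeholder datasets
    eval_dataset=eval_dataset,
)
trainer.train()
```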
message and error:
```
DESKTOP-6FHRRIO:1110179:1110179 [0] NCCL INFO Bootstrap : Using eth0:172.31.110.212<0>
DESKTOP-6FHRRIO:1110179:1110179 [0] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so), using internal implementation
DESKTOP-6FHRRIO:1110179:1110179 [0] misc/cudawrap.cc:90 NCCL WARN Failed to find CUDA library in (null) (NCCL_CUDA_PATH=(null))
NCCL version 2.14.3+cuda11.7
DESKTOP-6FHRRIO:1110179:1110525 [0] NCCL INFO Failed to open libibverbs.so[.1]
DESKTOP-6FHRRIO:1110179:1110525 [0] NCCL INFO NET/Socket : Using [0]eth0:172.31.110.212<0>
DESKTOP-6FHRRIO:1110179:1110525 [0] NCCL INFO Using network Socket
DESKTOP-6FHRRIO:1110179:1110525 [0] NCCL INFO Channel 00/32 : 0
DESKTOP-6FHRRIO:1110179:1110525 [0] NCCL INFO Channel 01/32 : 0
DESKTOP-6FHRRIO:1110179:1110525 [0] NCCL INFO Channel 02/32 : 0
DESKTOP-6FHRRIO:1110179:1110525 [0] NCCL INFO Channel 03/32 : 0
DESKTOP-6FHRRIO:1110179:1110525 [0] NCCL INFO Channel 04/32 : 0
DESKTOP-6FHRRIO:1110179:1110525 [0] NCCL INFO Channel 05/32 : 0
DESKTOP-6FHRRIO:1110179:1110525 [0] NCCL INFO Channel 06/32 : 0
DESKTOP-6FHRRIO:1110179:1110525 [0] NCCL INFO Channel 07/32 : 0
DESKTOP-6FHRRIO:1110179:1110525 [0] NCCL INFO Channel 08/32 : 0
DESKTOP-6FHRRIO:1110179:1110525 [0] NCCL INFO Channel 09/32 : 0
DESKTOP-6FHRRIO:1110179:1110525 [0] NCCL INFO Channel 10/32 : 0
DESKTOP-6FHRRIO:1110179:1110525 [0] NCCL INFO Channel 11/32 : 0
DESKTOP-6FHRRIO:1110179:1110525 [0] NCCL INFO Channel 12/32 : 0
DESKTOP-6FHRRIO:1110179:1110525 [0] NCCL INFO Channel 13/32 : 0
DESKTOP-6FHRRIO:1110179:1110525 [0] NCCL INFO Channel 14/32 : 0
DESKTOP-6FHRRIO:1110179:1110525 [0] NCCL INFO Channel 15/32 : 0
DESKTOP-6FHRRIO:1110179:1110525 [0] NCCL INFO Channel 16/32 : 0
DESKTOP-6FHRRIO:1110179:1110525 [0] NCCL INFO Channel 17/32 : 0
DESKTOP-6FHRRIO:1110179:1110525 [0] NCCL INFO Channel 18/32 : 0
DESKTOP-6FHRRIO:1110179:1110525 [0] NCCL INFO Channel 19/32 : 0
DESKTOP-6FHRRIO:1110179:1110525 [0] NCCL INFO Channel 20/32 : 0
DESKTOP-6FHRRIO:1110179:1110525 [0] NCCL INFO Channel 21/32 : 0
DESKTOP-6FHRRIO:1110179:1110525 [0] NCCL INFO Channel 22/32 : 0
DESKTOP-6FHRRIO:1110179:1110525 [0] NCCL INFO Channel 23/32 : 0
DESKTOP-6FHRRIO:1110179:1110525 [0] NCCL INFO Channel 24/32 : 0
DESKTOP-6FHRRIO:1110179:1110525 [0] NCCL INFO Channel 25/32 : 0
DESKTOP-6FHRRIO:1110179:1110525 [0] NCCL INFO Channel 26/32 : 0
DESKTOP-6FHRRIO:1110179:1110525 [0] NCCL INFO Channel 27/32 : 0
DESKTOP-6FHRRIO:1110179:1110525 [0] NCCL INFO Channel 28/32 : 0
DESKTOP-6FHRRIO:1110179:1110525 [0] NCCL INFO Channel 29/32 : 0
DESKTOP-6FHRRIO:1110179:1110525 [0] NCCL INFO Channel 30/32 : 0
DESKTOP-6FHRRIO:1110179:1110525 [0] NCCL INFO Channel 31/32 : 0
DESKTOP-6FHRRIO:1110179:1110525 [0] NCCL INFO Trees [0] -1/-1/-1->0->-1 [1] -1/-1/-1->0->-1 [2] -1/-1/-1->0->-1 [3] -1/-1/-1->0->-1 [4] -1/-1/-1->0->-1 [5] -1/-1/-1->0->-1 [6] -1/-1/-1->0->-1 [7] -1/-1/-1->0->-1 [8] -1/-1/-1->0->-1 [9] -1/-1/-1->0->-1 [10] -1/-1/-1->0->-1 [11] -1/-1/-1->0->-1 [12] -1/-1/-1->0->-1 [13] -1/-1/-1->0->-1 [14] -1/-1/-1->0->-1 [15] -1/-1/-1->0->-1 [16] -1/-1/-1->0->-1 [17] -1/-1/-1->0->-1 [18] -1/-1/-1->0->-1 [19] -1/-1/-1->0->-1 [20] -1/-1/-1->0->-1 [21] -1/-1/-1->0->-1 [22] -1/-1/-1->0->-1 [23] -1/-1/-1->0->-1 [24] -1/-1/-1->0->-1 [25] -1/-1/-1->0->-1 [26] -1/-1/-1->0->-1 [27] -1/-1/-1->0->-1 [28] -1/-1/-1->0->-1 [29] -1/-1/-1->0->-1 [30] -1/-1/-1->0->-1 [31] -1/-1/-1->0->-1
DESKTOP-6FHRRIO:1110179:1110525 [0] NCCL INFO Connected all rings
DESKTOP-6FHRRIO:1110179:1110525 [0] NCCL INFO Connected all trees
DESKTOP-6FHRRIO:1110179:1110525 [0] NCCL INFO 32 coll channels, 32 p2p channels, 32 p2p channels per peer
DESKTOP-6FHRRIO:1110179:1110525 [0] NCCL INFO comm 0x560d4b0d62b0 rank 0 nranks 1 cudaDev 0 busId 3000 - Init COMPLETE
#################come on tensor([0., 0., 0., ..., 0., 0., 0.])
#################come on tensor([0., 0., 0., ..., 0., 0., 0.])
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
Cell In[9], line 1
----> 1 trainer.train()
File /home/pc/miniconda3/envs/Transformers/lib/python3.11/site-packages/transformers/trainer.py:1645, in Trainer.train(self, resume_from_checkpoint, trial, ignore_keys_for_eval, **kwargs)
1640 self.model_wrapped = self.model
1642 inner_training_loop = find_executable_batch_size(
1643 self._inner_training_loop, self._train_batch_size, args.auto_find_batch_size
1644 )
-> 1645 return inner_training_loop(
1646 args=args,
1647 resume_from_checkpoint=resume_from_checkpoint,
1648 trial=trial,
1649 ignore_keys_for_eval=ignore_keys_for_eval,
1650 )
File /home/pc/miniconda3/envs/Transformers/lib/python3.11/site-packages/transformers/trainer.py:1759, in Trainer._inner_training_loop(self, batch_size, args, resume_from_checkpoint, trial, ignore_keys_for_eval)
1756 model, self.optimizer = self.accelerator.prepare(self.model, self.optimizer)
1757 else:
1758 # to handle cases wherein we pass "DummyScheduler" such as when it is specified in DeepSpeed config.
-> 1759 model, self.optimizer, self.lr_scheduler = self.accelerator.prepare(
1760 self.model, self.optimizer, self.lr_scheduler
1761 )
1763 if self.is_fsdp_enabled:
1764 self.model = model
File /home/pc/miniconda3/envs/Transformers/lib/python3.11/site-packages/accelerate/accelerator.py:1178, in Accelerator.prepare(self, device_placement, *args)
1176 args = self._prepare_ipex(*args)
1177 if self.distributed_type == DistributedType.DEEPSPEED:
-> 1178 result = self._prepare_deepspeed(*args)
1179 elif self.distributed_type == DistributedType.MEGATRON_LM:
1180 result = self._prepare_megatron_lm(*args)
File /home/pc/miniconda3/envs/Transformers/lib/python3.11/site-packages/accelerate/accelerator.py:1505, in Accelerator._prepare_deepspeed(self, *args)
1502 if type(scheduler).__name__ in deepspeed.runtime.lr_schedules.VALID_LR_SCHEDULES:
1503 kwargs["lr_scheduler"] = scheduler
-> 1505 engine, optimizer, _, lr_scheduler = deepspeed.initialize(**kwargs)
1506 if optimizer is not None:
1507 optimizer = DeepSpeedOptimizerWrapper(optimizer)
File /home/pc/miniconda3/envs/Transformers/lib/python3.11/site-packages/deepspeed/__init__.py:165, in initialize(args, model, optimizer, model_parameters, training_data, lr_scheduler, mpu, dist_init_required, collate_fn, config, config_params)
153 engine = DeepSpeedHybridEngine(args=args,
154 model=model,
155 optimizer=optimizer,
(...)
162 config=config,
163 config_class=config_class)
164 else:
--> 165 engine = DeepSpeedEngine(args=args,
166 model=model,
167 optimizer=optimizer,
168 model_parameters=model_parameters,
169 training_data=training_data,
170 lr_scheduler=lr_scheduler,
171 mpu=mpu,
172 dist_init_required=dist_init_required,
173 collate_fn=collate_fn,
174 config=config,
175 config_class=config_class)
176 else:
177 assert mpu is None, "mpu must be None with pipeline parallelism"
File /home/pc/miniconda3/envs/Transformers/lib/python3.11/site-packages/deepspeed/runtime/engine.py:309, in DeepSpeedEngine.__init__(self, args, model, optimizer, model_parameters, training_data, lr_scheduler, mpu, dist_init_required, collate_fn, config, config_class, dont_change_device)
306 model_parameters = list(model_parameters)
308 if has_optimizer:
--> 309 self._configure_optimizer(optimizer, model_parameters)
310 self._configure_lr_scheduler(lr_scheduler)
311 self._report_progress(0)
File /home/pc/miniconda3/envs/Transformers/lib/python3.11/site-packages/deepspeed/runtime/engine.py:1184, in DeepSpeedEngine._configure_optimizer(self, client_optimizer, model_parameters)
1181 optimizer_wrapper = self._do_optimizer_sanity_check(basic_optimizer)
1183 if optimizer_wrapper == ZERO_OPTIMIZATION:
-> 1184 self.optimizer = self._configure_zero_optimizer(basic_optimizer)
1185 elif optimizer_wrapper == AMP:
1186 amp_params = self.amp_params()
File /home/pc/miniconda3/envs/Transformers/lib/python3.11/site-packages/deepspeed/runtime/engine.py:1474, in DeepSpeedEngine._configure_zero_optimizer(self, optimizer)
1472 log_dist(f'Creating {model_dtype} ZeRO stage {zero_stage} optimizer', ranks=[0])
1473 from deepspeed.runtime.zero.stage3 import DeepSpeedZeroOptimizer_Stage3
-> 1474 optimizer = DeepSpeedZeroOptimizer_Stage3(
1475 self.module,
1476 optimizer,
1477 timers=timers,
1478 ds_config=self.config,
1479 static_loss_scale=self.loss_scale(),
1480 dynamic_loss_scale=self.dynamic_loss_scale(),
1481 dynamic_loss_args=self.dynamic_loss_scale_args(),
1482 clip_grad=self.gradient_clipping(),
1483 contiguous_gradients=self.zero_contiguous_gradients(),
1484 reduce_bucket_size=self.zero_reduce_bucket_size(),
1485 prefetch_bucket_size=self.zero_prefetch_bucket_size(),
1486 max_reuse_distance=self.zero_max_reuse_distance(),
1487 max_live_parameters=self.zero_max_live_parameters(),
1488 param_persistence_threshold=self.zero_param_persistence_threshold(),
1489 model_persistence_threshold=self.zero_model_persistence_threshold(),
1490 dp_process_group=self.data_parallel_group,
1491 reduce_scatter=self.zero_reduce_scatter(),
1492 overlap_comm=self.zero_overlap_comm(),
1493 offload_optimizer_config=self.zero_offload_optimizer(),
1494 offload_param_config=self.zero_offload_param(),
1495 sub_group_size=self.zero_sub_group_size(),
1496 mpu=self.mpu,
1497 postscale_gradients=self.postscale_gradients(),
1498 gradient_predivide_factor=self.gradient_predivide_factor(),
1499 gradient_accumulation_steps=self.gradient_accumulation_steps(),
1500 aio_config=self.aio_config(),
1501 communication_data_type=self.communication_data_type)
1503 else:
1504 raise NotImplementedError("ZeRO stage {} not implemented".format(zero_stage))
File /home/pc/miniconda3/envs/Transformers/lib/python3.11/site-packages/deepspeed/runtime/zero/stage3.py:149, in DeepSpeedZeroOptimizer_Stage3.__init__(self, module, init_optimizer, timers, ds_config, static_loss_scale, dynamic_loss_scale, dynamic_loss_args, verbose, contiguous_gradients, reduce_bucket_size, prefetch_bucket_size, max_reuse_distance, max_live_parameters, param_persistence_threshold, model_persistence_threshold, dp_process_group, reduce_scatter, overlap_comm, offload_optimizer_config, offload_param_config, sub_group_size, mpu, clip_grad, communication_data_type, postscale_gradients, gradient_predivide_factor, gradient_accumulation_steps, elastic_checkpoint, aio_config)
146 self.params_in_nvme_and_cpu = False
147 self.max_params_in_cpu = 0
--> 149 self.parameter_offload = self.initialize_ds_offload(module=module,
150 timers=timers,
151 ds_config=ds_config,
152 overlap_comm=overlap_comm,
153 prefetch_bucket_size=prefetch_bucket_size,
154 max_reuse_distance=max_reuse_distance,
155 max_live_parameters=max_live_parameters,
156 param_persistence_threshold=param_persistence_threshold,
157 model_persistence_threshold=model_persistence_threshold,
158 offload_param_config=offload_param_config,
159 mpu=mpu)
161 self.persistent_parameters = self.parameter_offload.persistent_parameters
162 self._configure_offloading(offload_optimizer_config, offload_param_config)
File /home/pc/miniconda3/envs/Transformers/lib/python3.11/site-packages/deepspeed/runtime/zero/stage3.py:352, in DeepSpeedZeroOptimizer_Stage3.initialize_ds_offload(self, module, timers, ds_config, overlap_comm, prefetch_bucket_size, max_reuse_distance, max_live_parameters, param_persistence_threshold, model_persistence_threshold, offload_param_config, mpu)
338 def initialize_ds_offload(
339 self,
340 module,
(...)
350 mpu,
351 ):
--> 352 return DeepSpeedZeRoOffload(module=module,
353 timers=timers,
354 ds_config=ds_config,
355 overlap_comm=overlap_comm,
356 prefetch_bucket_size=prefetch_bucket_size,
357 max_reuse_distance=max_reuse_distance,
358 max_live_parameters=max_live_parameters,
359 param_persistence_threshold=param_persistence_threshold,
360 model_persistence_threshold=model_persistence_threshold,
361 offload_param_config=offload_param_config,
362 mpu=mpu)
File /home/pc/miniconda3/envs/Transformers/lib/python3.11/site-packages/deepspeed/runtime/zero/parameter_offload.py:229, in DeepSpeedZeRoOffload.__init__(self, module, timers, ds_config, overlap_comm, prefetch_bucket_size, max_reuse_distance, max_live_parameters, param_persistence_threshold, model_persistence_threshold, offload_param_config, mpu)
226 self.offload_device = offload_param_config.device
227 self.offload_param_pin_memory = offload_param_config.pin_memory
--> 229 self._convert_to_zero_parameters(ds_config, module, mpu)
231 for m in module.modules():
232 _init_external_params(m)
File /home/pc/miniconda3/envs/Transformers/lib/python3.11/site-packages/deepspeed/runtime/zero/parameter_offload.py:297, in DeepSpeedZeRoOffload._convert_to_zero_parameters(self, ds_config, module, mpu)
294 if mpu:
295 group = mpu.get_data_parallel_group()
--> 297 Init(module=module,
298 data_parallel_group=group,
299 dtype=self.dtype,
300 config_dict_or_path=ds_config,
301 remote_device=self.offload_device,
302 pin_memory=self.offload_param_pin_memory,
303 mpu=mpu)
File /home/pc/miniconda3/envs/Transformers/lib/python3.11/site-packages/deepspeed/runtime/zero/partition_parameters.py:782, in Init.__init__(self, module, data_parallel_group, mem_efficient_linear, remote_device, pin_memory, config_dict_or_path, config, enabled, dtype, mpu)
780 if module is not None:
781 assert isinstance(module, torch.nn.Module)
--> 782 self._convert_to_zero_parameters(module.parameters(recurse=True))
784 self.use_all_gather_into_tensor = dist.has_all_gather_into_tensor()
785 if not self.use_all_gather_into_tensor:
File /home/pc/miniconda3/envs/Transformers/lib/python3.11/site-packages/deepspeed/runtime/zero/partition_parameters.py:798, in Init._convert_to_zero_parameters(self, param_list)
796 continue
797 self._convert_to_deepspeed_param(param)
--> 798 param.partition()
File /home/pc/miniconda3/envs/Transformers/lib/python3.11/site-packages/deepspeed/runtime/zero/partition_parameters.py:966, in Init._convert_to_deepspeed_param.<locals>.partition(param_list, hierarchy, has_been_updated)
964 if param_list is None:
965 param_list = [cls]
--> 966 self._partition(param_list, has_been_updated=has_been_updated)
File /home/pc/miniconda3/envs/Transformers/lib/python3.11/site-packages/deepspeed/runtime/zero/partition_parameters.py:1104, in Init._partition(self, param_list, force, has_been_updated)
1100 def _partition(self, param_list, force=False, has_been_updated=False):
1101 for param in param_list:
1102 #print_rank_0(f"Before Partitioning Param {param.ds_id}")
1103 # self._param_status(param)
-> 1104 self._partition_param(param, has_been_updated=has_been_updated)
1105 param.ds_status = ZeroParamStatus.NOT_AVAILABLE
File /home/pc/miniconda3/envs/Transformers/lib/python3.11/site-packages/deepspeed/utils/nvtx.py:15, in instrument_w_nvtx.<locals>.wrapped_fn(*args, **kwargs)
13 def wrapped_fn(*args, **kwargs):
14 get_accelerator().range_push(func.__qualname__)
---> 15 ret_val = func(*args, **kwargs)
16 get_accelerator().range_pop()
17 return ret_val
File /home/pc/miniconda3/envs/Transformers/lib/python3.11/site-packages/deepspeed/runtime/zero/partition_parameters.py:1186, in Init._partition_param(self, param, buffer, has_been_updated)
1183 if start < param.ds_numel and end <= param.ds_numel:
1184 src_tensor = one_dim_param.narrow(0, start, partition_size)
-> 1186 param.ds_tensor.copy_(src_tensor)
1187 #partitioned_tensor = src_tensor.clone().detach().to(self.remote_device)
1188
1189 else:
1190 # partitioned_tensor = torch.zeros(partition_size,
1191 # dtype=param.dtype,
1192 # device=self.remote_device )
1194 if start < param.ds_numel:
RuntimeError: CUDA error: out of memory
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
```
### Expected behavior
I expect CPU offload to work so I can use a larger batch size (currently batch size 2 works without `deepspeed='ds_config_zero3.json'`). However, with DeepSpeed enabled, even batch size 1 fails with the same out-of-memory error.
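For reference, the single-GPU notebook deployment pattern from the DeepSpeed integration docs looks roughly like this (a sketch; the exact port value is arbitrary as long as it is free):
```
import os

# emulate the distributed launcher inside the notebook for a single GPU
os.environ["MASTER_ADDR"] = "localhost"
os.environ["MASTER_PORT"] = "9994"   # any free port
os.environ["RANK"] = "0"
os.environ["LOCAL_RANK"] = "0"
os.environ["WORLD_SIZE"] = "1"
```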
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24658/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24658/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24657
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24657/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24657/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24657/events
|
https://github.com/huggingface/transformers/issues/24657
| 1,788,785,439 |
I_kwDOCUB6oc5qnrMf
| 24,657 |
At least one model's inference seems to have broken from transformers 4.29.2 -> 4.30.*
|
{
"login": "Disastorm",
"id": 1088694,
"node_id": "MDQ6VXNlcjEwODg2OTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/1088694?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Disastorm",
"html_url": "https://github.com/Disastorm",
"followers_url": "https://api.github.com/users/Disastorm/followers",
"following_url": "https://api.github.com/users/Disastorm/following{/other_user}",
"gists_url": "https://api.github.com/users/Disastorm/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Disastorm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Disastorm/subscriptions",
"organizations_url": "https://api.github.com/users/Disastorm/orgs",
"repos_url": "https://api.github.com/users/Disastorm/repos",
"events_url": "https://api.github.com/users/Disastorm/events{/privacy}",
"received_events_url": "https://api.github.com/users/Disastorm/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false |
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
] |
[
"Thanks for reporting. I confirm the issue could be reproduced.",
"cc @Narsil @ArthurZucker ",
"I went forward to check which commit causing issue. It turns out to be\r\n\r\n096f2cf12664bb7da41f89897d3a22966baee9b4\r\n\r\nTied weights load (#24310)\r\n\r\nWe will have to wait @sgugger to take a look.\r\n\r\n(I can probably did what he has done for another model)",
"The fix will involve pushing a new model file to the Hub repo.\r\n\r\nIf you need to use this model/pipeline and can't stay with version `4.29`, I can help creating the new model file.\r\n",
"cool, technically i can stay on 4.29 but its also nice to be able to do 4-bit inference by updating to 4.30.* . Maybe I can post on the model owners twitter to see if he wants to update his models.",
"The necessary step (required for some model after #24310) to update (some) model weights is only known by a few team member. I am not sure the repo. owner knows how to do it. I can open a Hub repo. PR however.",
"Ok I posted on his twitter and linked this thread. No idea if hes going to respond though.\r\n\r\nHe seems to have the most popular japanese/english translation models on huggingface, ja-en and en-ja looks like they got 9k-10k downloads in the past month, so I guess it would be good if they can be updated for the newest transformers. \r\n\r\nSo this is not a temporary issue? Basically any models affected will need to update, otherwise all future versions of transformers won't work with them? Can you not make the tie weights thing a parameter or something, or does that actually break other stuff?",
"I created branch for a temporary fix. You can use it as \r\n\r\n```bash\r\npip uninstall transformers\r\npip install git+https://github.com/huggingface/transformers@temp_fix_marian#egg=transformers\r\n```\r\n\r\nwith a slightly modified script\r\n\r\n```python\r\ndef preprocess_state_dict_fn(state_dict):\r\n state_dict[\"lm_head.weight\"] = state_dict[\"model.encoder.embed_tokens.weight\"]\r\n return state_dict\r\n\r\nmodel_kwargs = {\"preprocess_state_dict_fn\": preprocess_state_dict_fn}\r\n\r\nimport pysbd\r\nseg_en = pysbd.Segmenter(language=\"en\", clean=False)\r\n\r\nfrom transformers import pipeline\r\nfugu_translator = pipeline('translation', model='staka/fugumt-en-ja', model_kwargs=model_kwargs)\r\ntxt = 'This is a cat. It is very cute.'\r\nresult = fugu_translator(seg_en.segment(txt))\r\nprint(result)\r\nfinal = ''\r\nfor s in result:\r\n final += s['translation_text']\r\nwith open('./tmp.txt', \"w\", encoding=\"utf-8\") as f:\r\n f.write(final)\r\n```",
"Please note this is not through a discussion with the team, and it's not clear yet how we will deal with this issue officially.\r\n\r\nLet me know if the (temp) fix works.",
"ok I see. Thanks, the temp fix worked.",
"I was pinged, but I'm not sure why. Is this because this is related to weights tying ?\r\n\r\nAnything I can do to help ?",
"@Narsil I don't know the technical details but the situation is that I found some models that were broken by presumably the weight tying, apparently this was also known as being related to marian or something like that.\r\nydshieh provided me with a workaround patch in huggingface that fixes the issue, but he doesn't know if thats going to make it into official releases.\r\nThe other alternative is that the affected model owners need to update their model weights.\r\n\r\n\r\n\r\nAlso, The creator of the model responded on twitter saying he wanted to fix the model so I think he might come to this issue. I'm not sure to what extent he knows english though.",
"@Narsil It's because this issue is shown with `pipeline` (that's why Amy pin you in the first place), but the root cause is the tie weights in `from_pretrained`.\r\n\r\nNo, you don't need to be involved :-)",
"The creator responded on twitter and said he'll try to fix the model: https://twitter.com/voleneko/status/1677545104037539841\r\n\r\nby the way, as for the general issue of incompatibilities between versions, do you guys know if this is also the reason why tortoise-tts doesn't seem to work after 4.30.* also, or is that a separate issue?",
"@Disastorm \r\n\r\nCould you provide a link to `tortoise-tts` (It is a HF hub repo. right?)\r\n\r\nSo far we only see this issue on (a few ) marian model (checkpoints). But it might affect a few other model classes. ",
"I've used tortoise's own library, but inside the library they reference the huggingface repo https://huggingface.co/jbetker/tortoise-tts-v2 .\r\n\r\nSo I don't know if its an issue with their library or not, but it does break starting from 4.30 but works on 4.29.2 also.\r\n\r\nHere is an issue from their github: https://github.com/neonbjb/tortoise-tts/issues/472\r\nsome kind of state dictionary errors related to gpt2 or something.\r\n\r\nHere is their issue where they commit the solution ( forcing transformers==4.29.2 ): https://github.com/neonbjb/tortoise-tts/pull/508\r\n",
"Hi @Disastorm \r\n\r\nIt would be super great if you can take a look of what is the model used being not working anymore in 4.30 🙏 ",
"I really dont know that much about this stuff, but from what I can tell, the tortoise library uses the .pth models here ( I'm not really sure what .pth models represent ): https://huggingface.co/jbetker/tortoise-tts-v2/tree/main/.models\r\n\r\nThe specific file that has the above error is the autoregressive.pth.\r\nThe .pth file is being loaded by a custom torch.nn.module called UnifiedVoice in the tortoise repo.\r\nThis module inside of it has a huggingface GPT2Model inside of it that is initialized here https://github.com/neonbjb/tortoise-tts/blob/82724cca5427ddf1570256e616d56b0ebb93e668/tortoise/models/autoregressive.py#L231C45-L231C45\r\n\r\nI don't know how the torch.nn.modules work but I believe in the end what may be happening is that this UnifiedVoice module is using the GPT2Model to \"load_state_dict\" on the autoregressive.pth file and thats where the difference between transformers 4.29.2 and 4.30.* is.",
"Tortoise is broken for 4.31.0 as well.\r\nhttps://github.com/rsxdalv/tts-generation-webui/issues/106\r\nhttps://github.com/neonbjb/tortoise-tts/issues/480",
"@rsxdalv \r\n\r\nIf you can **translate** the issue in `tortoise` to a code snippet that only involves `transformers` stuff, we are more than happy to take a look and help. We don't really know how `tortoise` things work, like `UnifiedVoice `, `autoregressive.pth` and what's the checkpoint being used.",
"> @rsxdalv\r\n> \r\n> If you can **translate** the issue in `tortoise` to a code snippet that only involves `transformers` stuff, we are more than happy to take a look and help. We don't really know how `tortoise` things work, like `UnifiedVoice `, `autoregressive.pth` and what's the checkpoint being used.\r\n\r\n@sanchit-gandhi Just wanted to ask if perhaps you know the answer to this before I dig into it.",
"Not off the top of my head - would need a reproducible code snippet that only uses `transformers` as @ydshieh has requested! Shall we open a new issue for this since it's different from the original model in question? (just to help track any issues/solutions)",
"> Not off the top of my head - would need a reproducible code snippet that only uses `transformers` as @ydshieh has requested! Shall we open a new issue for this since it's different from the original model in question? (just to help track any issues/solutions)\r\n\r\nThanks for tuning in, sure!",
"> I really dont know that much about this stuff, but from what I can tell, the tortoise library uses the .pth models here ( I'm not really sure what .pth models represent ): https://huggingface.co/jbetker/tortoise-tts-v2/tree/main/.models\r\n> \r\n> The specific file that has the above error is the autoregressive.pth. The .pth file is being loaded by a custom torch.nn.module called UnifiedVoice in the tortoise repo. This module inside of it has a huggingface GPT2Model inside of it that is initialized here https://github.com/neonbjb/tortoise-tts/blob/82724cca5427ddf1570256e616d56b0ebb93e668/tortoise/models/autoregressive.py#L231C45-L231C45\r\n> \r\n> I don't know how the torch.nn.modules work but I believe in the end what may be happening is that this UnifiedVoice module is using the GPT2Model to \"load_state_dict\" on the autoregressive.pth file and thats where the difference between transformers 4.29.2 and 4.30.* is.\r\n\r\nYes, that's it. You can read the issue I made for some more details on it, but basically GPT2Model removed certain parameters (which tortoise didn't use): h.0.attn.bias and h.0.attn.masked_bias\r\nBut it seems that your model was more sensitive to them disappearing.",
"Read the issue - thanks for documenting! Yes it's quite likely this was the culprit -> these keys were un-used and were thus removed from the state dict. Whilst this didn't break the HF load / save methods, it might have changed loading a state dict using `torch.load`\r\n\r\nHere's a quick fix you can try in your model code to keep using the latest `transformers` version:\r\n\r\n```python\r\nfrom transformers import GPT2Config, GPT2PreTrainedModel, LogitsProcessorList\r\n...\r\n\r\nclass GPT2InferenceModel(GPT2PreTrainedModel):\r\n _keys_to_ignore_on_load_unexpected = [r\"h\\.\\d+\\.attn\\.bias\", r\"h\\.\\d+\\.attn\\.masked_bias\"]\r\n _keys_to_ignore_on_load_missing = [r\"attn.masked_bias\", r\"h\\.\\d+\\.attn\\.masked_bias\", r\"h\\.\\d+\\.attn\\.bias\"]\r\n...\r\n```\r\n=> this should restore the behaviour you had previously when saving / loading torch state dicts",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"just in case anyone is wondering, the models mentioned in the original post have been fixed by their creator as of a month ago (at least thats what his commit says, havn't tried it myself)\r\nhttps://huggingface.co/staka/fugumt-en-ja\r\nhttps://huggingface.co/staka/fugumt-ja-en",
"So actually his fix did in fact work in 4.30.2, but it seems since then it has broken again. Is there yet another Marian issue that appeared after 4.30.* again? I guess maybe these? https://github.com/huggingface/transformers/issues/26216 https://github.com/huggingface/transformers/issues/26271",
"Do you have a code snippet to demonstrate that this is a different error to those in the aforementioned issues? Happy to take a look into this @Disastorm!",
"From the 2 linked issue, it looks like Arthur is aware of this and need some Hub changes being merged."
] | 1,688 | 1,703 | 1,694 |
NONE
| null |
### System Info
- `transformers` version: 4.30.2
- Platform: Windows-10-10.0.19045-SP0
- Python version: 3.10.9
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 1.13.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: default setting ( I think it uses GPU )
- Using distributed or parallel set-up in script?: not sure what this is, but I think it's N/A
### Who can help?
Inference of the model [staka/fugumt-en-ja](https://huggingface.co/staka/fugumt-en-ja) using the "translation" pipeline is broken in 4.30.0 and above.
I don't know if this is expected, or if there are some new parameters I need to use, but using the default script from the readme no longer works. It results in gibberish. I have also confirmed that it works fine in 4.29.2.
I don't know what other models are affected.
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I have a slightly modified script that writes the output to a txt file, since my Windows command line doesn't support Japanese, but I don't think that is relevant. Otherwise, the code is the same as in the official readme of the model. Here is the code itself:
```
import pysbd
seg_en = pysbd.Segmenter(language="en", clean=False)
from transformers import pipeline
fugu_translator = pipeline('translation', model='staka/fugumt-en-ja')
txt = 'This is a cat. It is very cute.'
result = fugu_translator(seg_en.segment(txt))
print(result)
final = ''
for s in result:
final += s['translation_text']
with open('./tmp.txt', "w", encoding="utf-8") as f:
f.write(final)
```
in transformers 4.29.2 result is correct:
`これは猫です。とても可愛いです。`
in transformers 4.30.0 and above, result is gibberish:
`が必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必
要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となりますが必要となります伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる伝わる`
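As a purely diagnostic sketch (my own assumption, not part of the original script), one way to narrow down whether the two versions load this checkpoint differently is to check whether the LM head ends up sharing storage with the input embeddings:
```
from transformers import AutoModelForSeq2SeqLM

# load the same checkpoint and check whether lm_head shares storage with the
# input embeddings; if this differs between 4.29.2 and 4.30.*, the regression
# is likely in how the weights are loaded/tied
model = AutoModelForSeq2SeqLM.from_pretrained("staka/fugumt-en-ja")
tied = model.lm_head.weight.data_ptr() == model.get_input_embeddings().weight.data_ptr()
print("lm_head tied to embeddings:", tied)
```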
### Expected behavior
in transformers 4.29.2 result is correct:
`"これは猫です。とても可愛いです。"`
I expect the same behavior in 4.30.* and above.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24657/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24657/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24656
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24656/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24656/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24656/events
|
https://github.com/huggingface/transformers/issues/24656
| 1,788,699,817 |
I_kwDOCUB6oc5qnWSp
| 24,656 |
discontinuity learning rate while resume from checkpoint
|
{
"login": "jiangix-paper",
"id": 62198809,
"node_id": "MDQ6VXNlcjYyMTk4ODA5",
"avatar_url": "https://avatars.githubusercontent.com/u/62198809?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jiangix-paper",
"html_url": "https://github.com/jiangix-paper",
"followers_url": "https://api.github.com/users/jiangix-paper/followers",
"following_url": "https://api.github.com/users/jiangix-paper/following{/other_user}",
"gists_url": "https://api.github.com/users/jiangix-paper/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jiangix-paper/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jiangix-paper/subscriptions",
"organizations_url": "https://api.github.com/users/jiangix-paper/orgs",
"repos_url": "https://api.github.com/users/jiangix-paper/repos",
"events_url": "https://api.github.com/users/jiangix-paper/events{/privacy}",
"received_events_url": "https://api.github.com/users/jiangix-paper/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @jiangix-paper, thanks for raising this issue. \r\n\r\nWithout a code snippet that we can use to reproduce the issue on our end, more information about the running environment e.g. deepspeed version, hardward (run `transformers-cli env` in the terminal and copy-paste the output) and more details about what's observed (specific numbers / outputs) it's not possible for us to help you. ",
"@amyeroberts Sorry for incomplete details. My deepspeed config file is as follows:\r\n\r\nThe deepspeed version is 0.9.0\r\nRun \"transformers-cli env\", the output are as follows:\r\n\r\n\r\nMy training arguments are as follows:\r\n\r\n\r\nFirst, I run the following code to get a deepspeed saved model:\r\n\r\nThe saved model files are as follows:\r\n\r\n\r\n\r\nThe loss are as follows:\r\n\r\n\r\nBut when i resume from the saved checkpoint using trainer.train(resume_from_checkpoint=\"xxx\"), I expected the learning rate continue from the step 10 (1.4999e-05) and the loss should continue from that(10.4141). But I found the learning rate is from scratch.\r\n\r\n\r\n\r\nFinally, I load the \"zero_pp_rank_0_mp_rank_00_model_states.pt\" in checkpoint 10. I found the lr_scheduler is None. Although I do not define the lr_scheduler in deepspeed config file, I define it in training arguments. Why is lr_scheduler not be saved?\r\n\r\nThanks a lot. If it lack the other details, please contact me.",
"Can you help me please. Thanks a lot. @ydshieh ",
"@jiangix-paper I am not familiar with deepspeed. But I can tag someone in the team.\r\n\r\nHowever, please don't upload screenshot as code snippet. Use text format (and in a good formatting too) so we can copy paste.\r\nOtherwise, consider using a cola notebook.",
"Sorry for that. I will paste my code in text format.\r\nMy deepspeed config is :\r\n```\r\n{\r\n \"bf16\": {\r\n \"enabled\": \"auto\"\r\n },\r\n \"zero_optimization\": {\r\n \"stage\": 3,\r\n \"overlap_comm\": true,\r\n \"contiguous_gradients\": true,\r\n \"sub_group_size\": 1e9,\r\n \"reduce_bucket_size\": \"auto\",\r\n \"stage3_prefetch_bucket_size\": \"auto\",\r\n \"stage3_param_persistence_threshold\": \"auto\",\r\n \"stage3_max_live_parameters\": 1e9,\r\n \"stage3_max_reuse_distance\": 1e9,\r\n \"stage3_gather_16bit_weights_on_model_save\": true\r\n },\r\n \"gradient_accumulation_steps\": \"auto\",\r\n \"gradient_clipping\": \"auto\",\r\n \"steps_per_print\": 1,\r\n \"train_batch_size\": \"auto\",\r\n \"train_micro_batch_size_per_gpu\": \"auto\",\r\n \"wall_clock_breakdown\": false\r\n}\r\n```\r\n\r\nThe training args are:\r\n```\r\nrun_cmd=\"torchrun --master_addr localhost --nnodes 1 --nproc_per_node 8 --master_port 9001 \\\r\n pretrain.py \\\r\n --deepspeed ${deepspeed_config_file} \\\r\n --config_name ${llama_path} \\\r\n --tokenizer_name_or_path ${llama_path} \\\r\n --validation_split_percentage 0.000001 \\\r\n --per_device_train_batch_size 4 \\\r\n --per_device_eval_batch_size 4 \\\r\n --do_train \\\r\n --seed 2023 \\\r\n --num_train_epochs 1 \\\r\n --lr_scheduler_type cosine \\\r\n --learning_rate 0.00015 \\\r\n --max_grad_norm 1.0 \\\r\n --weight_decay 0.1 \\\r\n --warmup_ratio 0.01 \\\r\n --logging_strategy steps \\\r\n --logging_steps 1 \\\r\n --save_strategy steps \\\r\n --save_total_limit 100 \\\r\n --save_steps 1000 \\\r\n --bf16 True \\\r\n --tf32 True \\\r\n --optim adamw_apex_fused \\\r\n --adam_beta1 0.9 \\\r\n --adam_beta2 0.95 \\\r\n --report_to tensorboard \\\r\n --evaluation_strategy no \\\r\n --gradient_accumulation_steps 1 \\\r\n --preprocessing_num_workers 100 \\\r\n --block_size 2048 \\\r\n --output_dir ${output_dir} \\\r\n --overwrite_output_dir \\\r\n --ddp_timeout 360000 \\\r\n --logging_first_step True \\\r\n --torch_dtype bfloat16 \\\r\n --gradient_checkpointing True \\\r\n --ddp_find_unused_parameters False\"\r\n```\r\n\r\nThe pretrain.py code is:\r\n```\r\ntrainer = Trainer(\r\n model=model,\r\n args=training_args,\r\n train_dataset=train_dataset if training_args.do_train else None,\r\n eval_dataset=eval_dataset if training_args.do_eval else None,\r\n tokenizer=tokenizer,\r\n data_collator=fault_tolerance_data_collator,\r\n compute_metrics=compute_metrics if training_args.do_eval and not is_torch_tpu_available() else None,\r\n preprocess_logits_for_metrics=preprocess_logits_for_metrics\r\n if training_args.do_eval and not is_torch_tpu_available()\r\n else None,\r\n )\r\n rank0_print('Start Training')\r\n if training_args.do_train:\r\n checkpoint = None\r\n if training_args.resume_from_checkpoint is not None:\r\n checkpoint = training_args.resume_from_checkpoint\r\n elif last_checkpoint is not None:\r\n checkpoint = last_checkpoint\r\n train_result = trainer.train(resume_from_checkpoint=checkpoint)\r\n trainer.save_model()\r\n trainer.save_state()\r\n```\r\nCan you help me to tag someone in your team? @ydshieh Thanks a lot",
"@jiangix-paper Thank you for updating.\r\n\r\n- `pretrain.py` is not self-complete. Please including the necessary import statements and all the variable definitions that are used\r\n- `${llama_path}` is missing: please specify it.\r\n- datasets seem to be missing",
"But looking at\r\n\r\n```\r\n if training_args.resume_from_checkpoint is not None:\r\n checkpoint = training_args.resume_from_checkpoint\r\n elif last_checkpoint is not None:\r\n checkpoint = last_checkpoint\r\n train_result = trainer.train(resume_from_checkpoint=checkpoint)\r\n```\r\nHave you verified that `checkpoint` passed to `trainer.train` has the desired value?",
"> But looking at\r\n> \r\n> ```\r\n> if training_args.resume_from_checkpoint is not None:\r\n> checkpoint = training_args.resume_from_checkpoint\r\n> elif last_checkpoint is not None:\r\n> checkpoint = last_checkpoint\r\n> train_result = trainer.train(resume_from_checkpoint=checkpoint)\r\n> ```\r\n> \r\n> Have you verified that `checkpoint` passed to `trainer.train` has the desired value?\r\n\r\nI have checked the checkpoint, and I find the lr_scheduler in checkpoint is None. But I specified lr_scheduler_type in the parameter settings as 'cosine'。I do not know why it is not saved.",
"Nice! Would you like to fill more missing info. so we can take a look 🙏 .\r\nProbably this issue is not even with DeepSpeed (?)",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"This is a bug of huggingface. When using deepspeed, we can use hf to create lr_scheluler by passing `lr_scheduler_type` to training_args or specify `scheduler` in ds_config. Under the first condion, when resuming from the checkpoint, hf will skip the hf lr_schelduler pipeplie but call deepspeed to restore lr_scheluler. However, the lr_scheluler is not saved into the weights as deepspeed does not know where the hf lr_scheluler is. \r\nThe newest version of huggingface now fixed it.\r\n\r\ncheck this: \r\nload scheduler from resuming checkpoint:\r\nhttps://github.com/huggingface/transformers/blob/5936c8c57ccb2bda3b3f28856a7ef992c5c9f451/src/transformers/trainer.py#L1750\r\nthen:\r\nhttps://github.com/huggingface/transformers/blob/5936c8c57ccb2bda3b3f28856a7ef992c5c9f451/src/transformers/trainer.py#L2503-L2514\r\n\r\nIn the old version (4.32.1):\r\n\r\nThe loading is skipped...\r\n\r\n\r\n",
"Sadly, up to now, the latest version 4.33.2 breaks it again. See my issue raised here: https://github.com/huggingface/transformers/issues/26384 "
] | 1,688 | 1,695 | 1,691 |
NONE
| null |
### System Info
transformers 4.30.2
pytorch 2.0.1
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I use DeepSpeed stage 3 and the Hugging Face Trainer to resume from a past checkpoint (which had finished running step 1000). My warmup steps are set to 2000 and the total number of training epochs is 1. However, when I resume from that checkpoint, the learning rate schedule starts from scratch. I expect it to continue from the learning rate it had reached at step 1000.
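
For reference, a minimal sketch of how the resume call is wired up on my side (the model name, paths, and `my_dataset` below are placeholders, not my actual setup):

```python
from transformers import AutoModelForCausalLM, Trainer, TrainingArguments

# Placeholder setup: a tiny model and a stand-in dataset illustrate only how
# resume_from_checkpoint is passed together with a DeepSpeed config.
model = AutoModelForCausalLM.from_pretrained("sshleifer/tiny-gpt2")
args = TrainingArguments(
    output_dir="out",
    deepspeed="ds_config_zero3.json",  # assumed path to the stage-3 config
    lr_scheduler_type="cosine",
    warmup_steps=2000,
    save_steps=1000,
)
trainer = Trainer(model=model, args=args, train_dataset=my_dataset)  # my_dataset: placeholder

# Expected: the scheduler state stored in out/checkpoint-1000 is restored, so
# the learning rate continues from step 1000 instead of restarting from zero.
trainer.train(resume_from_checkpoint="out/checkpoint-1000")
```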
### Expected behavior
Thanks
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24656/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24656/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24655
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24655/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24655/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24655/events
|
https://github.com/huggingface/transformers/issues/24655
| 1,788,683,084 |
I_kwDOCUB6oc5qnSNM
| 24,655 |
Add a mechanism to transform the forward pass on Flax models
|
{
"login": "davisyoshida",
"id": 1377776,
"node_id": "MDQ6VXNlcjEzNzc3NzY=",
"avatar_url": "https://avatars.githubusercontent.com/u/1377776?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/davisyoshida",
"html_url": "https://github.com/davisyoshida",
"followers_url": "https://api.github.com/users/davisyoshida/followers",
"following_url": "https://api.github.com/users/davisyoshida/following{/other_user}",
"gists_url": "https://api.github.com/users/davisyoshida/gists{/gist_id}",
"starred_url": "https://api.github.com/users/davisyoshida/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/davisyoshida/subscriptions",
"organizations_url": "https://api.github.com/users/davisyoshida/orgs",
"repos_url": "https://api.github.com/users/davisyoshida/repos",
"events_url": "https://api.github.com/users/davisyoshida/events{/privacy}",
"received_events_url": "https://api.github.com/users/davisyoshida/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @gante @sanchit-gandhi ",
"Hey @davisyoshida - that's a great point, would you want to add a composable transformation on top of the `transformers` Flax model (a standard Python class object), or the Flax nn.Module?\r\n\r\nIf it's the latter (which I believe is the more typical use case), you can first extract the Flax module from the Flax model:\r\n```python\r\nmodel = FlaxGPT2ForCausalLM.from_pretrained(\"gpt2\") # standard python object\r\nmodule = model.module # flax nn module\r\n```\r\n\r\nAnd then apply any composable transformations to this module (it behaves in the same way as a pure Flax module).\r\n\r\nNote that the signature of the module is not the same as the model - this is something that will be addressed by #22499 / https://github.com/huggingface/transformers/pull/22866\r\n\r\nYou can read more about the `transformers` Flax design philosophy here: https://github.com/huggingface/transformers/tree/main/examples/research_projects/jax-projects#flax-models-in-transformers",
"The issue I'm getting at is that extracting the module like that makes you lose access to all the utilities on the model class. Here's an example:\r\n\r\n```python\r\nimport jax \r\nimport jax.numpy as jnp \r\nfrom transformers import AutoTokenizer, FlaxAutoModelForCausalLM \r\n \r\ndef wrap_model(model): \r\n module = model.module \r\n def inner(*args, **kwargs): \r\n return module.apply(*args, **kwargs) \r\n model.module = inner \r\n \r\nmodel_name = 'gpt2' \r\nmodel, params = FlaxAutoModelForCausalLM.from_pretrained(model_name, _do_init=False) \r\ntokenizer = AutoTokenizer.from_pretrained(model_name) \r\n \r\nparams = jax.device_put(params, jax.devices('gpu')[0]) \r\n \r\ninputs = jnp.asarray(tokenizer.encode('Hello there.'))[None] \r\noutputs = model.generate(inputs, params=params) \r\nprint(tokenizer.decode(outputs.sequences[0])) \r\n \r\n# Already not very JAX-y since wrap_model mutates `model` \r\nwrap_model(model)\r\n# Crashes with error: AttributeError: can't set attribute 'module'\r\n\r\noutputs = model.generate(inputs, params=params) \r\nprint(tokenizer.decode(outputs.sequences[0])) \r\n \r\n# Ideal pure API: \r\n# my_wrapped_model = wrap_the_model(model) \r\n# my_wrapped_model.generate(inputs, params=params)\r\n```\r\n\r\nIf you just extract the module, is there still some way to use generate?\r\n",
"Sorry! `model.module` is a `property`, which just returns `model._module`:\r\nhttps://github.com/huggingface/transformers/blob/495729427045c7a58e040fa9bf6df81c16f54208/src/transformers/modeling_flax_utils.py#L255-L257\r\n\r\nYou should be able modify `model._module` to access the module class!\r\n\r\n> If you just extract the module, is there still some way to use generate?\r\n\r\nThe generate method is tied to the `FlaxPreTrainedModel`, i.e. the Python class `model`: https://github.com/huggingface/transformers/blob/495729427045c7a58e040fa9bf6df81c16f54208/src/transformers/modeling_flax_utils.py#L158\r\n\r\nSo I don't think there's any way to generate with just the module that you extract from the model. What you could try is changing the module itself to **also** inherit from `FlaxGenerationMixin`, such that we can call `module.generate`. Note that we'll also have to implement methods like `prepare_inputs_for_generation`:\r\nhttps://github.com/huggingface/transformers/blob/495729427045c7a58e040fa9bf6df81c16f54208/src/transformers/models/gpt2/modeling_flax_gpt2.py#L745\r\nAnd `update_inputs_for_generation` for this to work:\r\nhttps://github.com/huggingface/transformers/blob/495729427045c7a58e040fa9bf6df81c16f54208/src/transformers/models/gpt2/modeling_flax_gpt2.py#L766\r\nAlthough I'm not sure whether this is possible since Flax modules are just data classes, so you'll have to experiment and see.\r\n\r\nI think the easiest would be to define your `wrap_model` function such that it extracts the `_module`, then applies the composition as required, and finally sets the `model._module` attribute again (despite being not super JAX-y, I think this is the easiest way)",
"> and finally sets the model._module attribute again \r\n\r\nAh right I remember running into this when I tried to make generation from quantized models work. The problem is that it's expected that `module` be a proper Flax module, not just a function. Assigning to `_module` in my code above leads to this:\r\n\r\n```python\r\ntransformers/models/gpt2/modeling_flax_gpt2.py\", line 451, in init_cache\r\n init_variables = self.module.init(\r\nAttributeError: 'function' object has no attribute 'init'\r\n```\r\n\r\nYou might think to try wrapping `_module`'s `__call__` method like this:\r\n\r\n```python\r\ndef wrap_model(model): \r\n call_fn = model.module.__call__ \r\n def inner(*args, **kwargs): \r\n return call_fn(*args, **kwargs) \r\n \r\n model._module.__call__ = inner\r\n```\r\n\r\nBut using this (again in the original code I posted), gives the following:\r\n```python\r\n File \"/home/davis/data/venvs/jax/lib/python3.10/site-packages/transformers/models/gpt2/modeling_flax_gpt2.py\", line 749, in prepare_inputs_for_generation \r\n past_key_values = self.init_cache(batch_size, max_length) \r\n File \"/home/davis/data/venvs/jax/lib/python3.10/site-packages/transformers/models/gpt2/modeling_flax_gpt2.py\", line 451, in init_cache \r\n init_variables = self.module.init( \r\nline 8, in inner \r\n return call_fn(*args, **kwargs) \r\n File \"/home/davis/data/venvs/jax/lib/python3.10/site-packages/transformers/models/gpt2/modeling_flax_gpt2.py\", line 703, in __call__ \r\n outputs = self.transformer( \r\nAttributeError: \"FlaxGPT2LMHeadModule\" object has no attribute \"transformer\". If \"transformer\" is defined in '.setup()', remember these fields are only accessible from inside 'init' or 'apply'. \r\n```\r\n\r\nSo none of these methods seem to work. I think this should definitely be possible without needing to re-implement methods like `prepare_inputs_for_generation` which are already implemented on the model class.",
"Okay I think I figured out the right thing to do, you have to wrap the module's `apply()` method. I think that requires enough indirection that maybe providing a utility for it would be helpful, and it shouldn't be too hard to do.",
"Would you like to contribute such a utility @davisyoshida? Or update the docstrings with a note on how this could be done? Would be a nice addition to make it easier to build on top of `transformers` for JAX/Flax models ",
"Is overwriting `apply()` on the module actually what you guys would like to do as the recommended solution? I think something a bit cleaner might be adding indirection in between the module and places where the model calls it. That way custom behavior could be inserted without needing to modify the Flax object. I'm not sure exactly what the best way to do that would be though.",
"What was the kind of utility you had in mind? Not sure I fully follow from your previous comment how this would look other than wrapping the `apply`? https://github.com/huggingface/transformers/issues/24655#issuecomment-1626189281\r\n\r\nPerhaps we could go through one or two proposed solutions and discuss them here before proceeding with a PR? Would be great to discuss a bit how this would look before jumping into new code",
"Yeah so the simplest option is just something like:\r\n\r\n```python\r\ndef wrap_apply(model, wrapper):\r\n model.module.apply = wrapper(model.module.apply)\r\n```\r\n\r\nThe downside is that it has side effects (although this is probably hard to avoid with the non-functional API HF went with), but more importantly you can't get the original behavior back (maybe you just want to apply the transformation for evaluation then get back to training).\r\n\r\nAnother option would be something like:\r\n```python\r\n# On the model:\r\ndef set_apply_wrapper(self, wrapper=None):\r\n self._apply_wrapper = wrapper\r\n \r\n@property\r\ndef module(self):\r\n # this proxy should wrap self._module and but call\r\n # wrapper(self._module.apply) whenever \r\n return some_proxy_object \r\n```\r\nThis way if you want to restore the model to its original state you can just set the wrapper to `None`.\r\n\r\nI think a more ambitious option which is (IMO) more in line with JAX's philosophy, would be to factor the generation utilities out into pure functions which accept whatever arguments they need (e.g. a callable which maps (params, *args, past_cache) -> logits, and one which initializes the cache), then relegate the mixins to just calling those external functions appropriately.\r\n\r\nThis would let people use the generation utilities much more flexibly.\r\n",
"Thanks for the clear explanation - happy to proceed with a PR for design 2 if that works for you short-term? We can then assess how much additional benefit we'd get from a full JAX generate re-factor, since this would be a rather large undertaking as you've outlined.",
"Sounds good, I'm willing to put something together. It might be a month or two since I'm pretty slammed atm.",
"Perfect! We can also run it by the Flax authors since they're interested in having Transformers' Flax models work more seamlessly with the JAX/Flax libraries ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Are you still interested in working on this @davisyoshida? Will leave closed for now, but feel free to re-open and/or open a PR if you're keen!",
"@sanchit-gandhi It's on my list but not sure when I can get to it.",
"Awesome thanks for the update - I think this is the first request we've had for this feature, so we can leave this thread here until you get back to it and see if anyone in the community is interested and would like to take a stab in that timeframe."
] | 1,688 | 1,694 | 1,693 |
NONE
| null |
### Feature request
There should be some way to apply function transformations to Flax models, while not losing the ability to use things like generation utilities.
### Motivation
JAX's main idea is "composable transformations", but currently there's no good way to apply transformations to Flax models. Currently, to apply `my_cool_transformation` to a model, one needs to do something like:
```python
@my_cool_transformation
def wrapper(params, *args, **kwargs):
return model(*args, params=params, **kwargs)
```
This works fine for training loops and so on, but there doesn't seem to be a way to do this and still be able to use `.generate()`. The reason this would be beneficial is that one can implement things like quantization and LoRA as function transformations, so it would be cool to not lose generation support when doing so.
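
To make the gap concrete, here is a minimal sketch of the pattern that works today (plain `jax.jit` stands in for a more interesting transformation such as quantization or LoRA), and which `.generate()` cannot take advantage of:

```python
import jax
import jax.numpy as jnp
from transformers import AutoTokenizer, FlaxAutoModelForCausalLM

model, params = FlaxAutoModelForCausalLM.from_pretrained("gpt2", _do_init=False)
tokenizer = AutoTokenizer.from_pretrained("gpt2")

# The transformation is applied to a thin wrapper around the forward pass,
# not to the model object itself.
@jax.jit
def forward(params, input_ids):
    return model(input_ids, params=params).logits

input_ids = jnp.asarray(tokenizer.encode("Hello there."))[None]
logits = forward(params, input_ids)

# model.generate(...) calls the model's own forward pass internally, bypassing
# `forward`, so the transformation is lost there; that is exactly the gap this
# feature request is about.
```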
### Your contribution
I'd be willing to make a PR, but I think this would probably require some modification to the HuggingFace base classes for Flax models.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24655/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24655/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24654
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24654/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24654/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24654/events
|
https://github.com/huggingface/transformers/pull/24654
| 1,788,610,696 |
PR_kwDOCUB6oc5Up4R5
| 24,654 |
add CFG for .generate()
|
{
"login": "Vermeille",
"id": 1108219,
"node_id": "MDQ6VXNlcjExMDgyMTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/1108219?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Vermeille",
"html_url": "https://github.com/Vermeille",
"followers_url": "https://api.github.com/users/Vermeille/followers",
"following_url": "https://api.github.com/users/Vermeille/following{/other_user}",
"gists_url": "https://api.github.com/users/Vermeille/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Vermeille/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Vermeille/subscriptions",
"organizations_url": "https://api.github.com/users/Vermeille/orgs",
"repos_url": "https://api.github.com/users/Vermeille/repos",
"events_url": "https://api.github.com/users/Vermeille/events{/privacy}",
"received_events_url": "https://api.github.com/users/Vermeille/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"@Vermeille -- @sanchit-gandhi raises good points about the attention mask and taking the first item of the batch in the unconditional logits. As it stands, it will only work with batch size = 1, and our logits processors should be flexible wrt batch size :) ",
"good catch with the batch size! As for the attention mask, could you guide me to a working solution with that? I'm quite unfamiliar with huggingface tbh.",
"Tests are on the way.",
"All right. we only need to address use_cache / attention_mask.\r\n\r\n* use_cache: currently, the forward passes take care of automatically appending to the negative prompt. I don't think such a thing happens with use_cache=False so I gotta do the concat myself. probably meaning I have to make two branches based on the value of use_cache?\r\n* attention_mask: Does it even make sense then to read out.logits[:, -1]? is -1 a valid index if that position has an attention_mask of 0 due to padding? If so, then I will concat a valid id to padding and the attention_mask will be something like [1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1], won't that screw up positional encoding with the \"empty\" slots?\r\n\r\nBasically, I think .generate() had to answer the same questions so you guys will be able to answer them quite easily. Also I will need your guidance as for the API design to integrate this seamlessly.",
"@gante I think we're good. The failure looks totally unrelated.",
"Indeed. no more failed test.",
"@Vermeille \r\n\r\n> * use_cache: currently, the forward passes take care of automatically appending to the negative prompt. I don't think such a thing happens with use_cache=False so I gotta do the concat myself. probably meaning I have to make two branches based on the value of use_cache?\r\n\r\nAs I've replied in the dedicated thread, don't worry about the uncached case :) Make sure an exception is thrown, though!\r\nEDIT: I see that you've handled the uncached case. In that case, since you've already written the code, you can leave it be :)\r\n\r\n> * attention_mask: Does it even make sense then to read out.logits[:, -1]? is -1 a valid index if that position has an attention_mask of 0 due to padding? If so, then I will concat a valid id to padding and the attention_mask will be something like [1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1], won't that screw up positional encoding with the \"empty\" slots?\r\n\r\nThat's a non-issue: `.generate()` must always be used with left-padding, so you won't run into the case of picking a padded token with `-1` indexing 🙌 \r\n",
"_The documentation is not available anymore as the PR was closed or merged._",
"Following @MPKonst remarks:\r\n- the CFG Rescale \"technique\" has been removed as it is just a different parameterization of the guidance scale\r\n- The final log_softmax has been removed, which corresponds to how the calculations were performed for the benchmarks anyway. It was introduced as later stage for the GPT4All experiments, as part of the CFG Rescale integration, and it seems it was not a good idea. ",
"That's weird, the tests have been written for a while but did not show up in the PR. They do now.\r\nI also addressed your latest comments.",
"I answered some comments but will finish the PR next week. I'm unavailable until then. Sorry for the delay.",
"@gante I need you to answer about the `model_kwargs` validation before I can submit a new version of the PR",
"That would be an amazing feature. Thanks for working on this @Vermeille \r\nFingers crossed it will get reviewed and accepted soon",
"+1 for this PR. I hope that it can be merged soon.",
"@Vermeille answered in the thread! \r\n\r\nLMK if there is any other decision I can help with -- and tag me when you think the PR is in a finalized state, for a quick check and approval ✅ ",
"@gante looks like we're good now :)",
"(@sgugger this one possibly did not get through your notifications, gently pinging :) )",
"@Vermeille would you be able to retouch the tests? We can merge right after that change :)",
"I'm currently in vacations. What's the problem with the tests?",
"@Vermeille there are a few patterns in the tests that we usually avoid in our codebase (like thin wrappers). However, it's a minor issue, and this feature is being requested by the community, so I'm favoring merging it now.\r\n\r\nI understand the review process is long and somewhat tedious on your end. We err on the strict side, as we bear the cost of future maintenance. Thank you for collaborating with us, and looking forward to future contributions 🤗\r\n\r\nNext steps: we will be communicating the feature on our end. Amplification and/or communication on your end will help bring awareness to the feature! 🔥 ",
"Thanks for your contribution @Vermeille and congrats on the PR!"
] | 1,688 | 1,691 | 1,691 |
CONTRIBUTOR
| null |
This commit [implements CFG](https://github.com/huggingface/transformers/issues/24536)
Fixes #24536 (I did not touch MusicGen)
Hope you enjoy it!
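
A quick usage sketch (the model choice and the exact scale value are illustrative; the feature is driven by a `guidance_scale` argument passed to `.generate()`, with values above 1 strengthening adherence to the prompt):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("Today, a dragon flew over Paris, France,", return_tensors="pt")

# guidance_scale > 1 enables CFG; 1 keeps the usual (unguided) sampling.
out = model.generate(**inputs, max_new_tokens=40, guidance_scale=1.5)
print(tokenizer.batch_decode(out, skip_special_tokens=True)[0])
```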
@sanchit-gandhi
@gante
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24654/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24654/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24654",
"html_url": "https://github.com/huggingface/transformers/pull/24654",
"diff_url": "https://github.com/huggingface/transformers/pull/24654.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24654.patch",
"merged_at": 1691349325000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24653
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24653/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24653/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24653/events
|
https://github.com/huggingface/transformers/pull/24653
| 1,788,278,122 |
PR_kwDOCUB6oc5Uov5b
| 24,653 |
Llama/GPTNeoX: add RoPE scaling
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"(Of course, tests are missing. Proper validation of whether the feature is working as expected is also missing. I'll add them if we decide to move forward with this feature!)",
"_The documentation is not available anymore as the PR was closed or merged._",
"Having this in transformers would be excellent!\r\n\r\n I've uploaded a bunch of fp16 and GPTQ repos to HF using @jquesnelle 's [trust_remote_code Llama modelling patch](https://huggingface.co/emozilla/open_llama_7b-scaled/blob/main/modelling_llama.py) that implements RoPE using @kaiokendev's method, and I know there are quite a number of people using those already, and I've had a few requests to put out more. And even more are using RoPE outside of transformers via the ExLlama GPTQ implementation.\r\n\r\nSo there's a great deal of appetite for this feature amongst users, understandably.",
"Could this also be applied to [GPT-J models](https://github.com/huggingface/transformers/blob/main/src/transformers/models/gptj/modeling_gptj.py#L76)? ",
"@versae yes, it can :) The code needs to be modified there as well, but the concept can be applied to any model with rotary position embeddings",
"Thank you for your work! Just letting you know that I've improved the NTK-aware method in this PR. https://github.com/jquesnelle/scaled-rope/pull/1 It decreases non-finetuned PPL even further (preliminary testing shows 4.9 -> 4.6 PPL at 8192 context size) and theoretically will significantly improve a finetune's convergence/stability compared to previous NTK-aware method. \r\n\r\nAlso because the alpha hyperparameter was difficult to use when predicting effective context size (alpha=4 was something close to ~6400 context size instead of 8192), that problem was fixed and it is now changed to a \"scale\" factor, which can be used the same way to the \"scale\" in linear RoPE scaling. (eg. for LLaMA scale=2 is 4096 and scale=4 is 8192)\r\n\r\nI hope this improved method might be also considered one day as it is one more step towards extending context size for all LLMs! 🚀",
"Hey @bloc97 @jquesnelle 👋 \r\n\r\nLooking at your recent PR ([this one](https://github.com/jquesnelle/scaled-rope/pull/1)) -- am I right in saying that\r\n1. There is no way to parameterize the new class such that it is equivalent to the original NTK-aware scaling?\r\n2. @bloc97's PR and @jquesnelle's dynamic implementation are slightly different, in the sense that @bloc97's targets a specific length (but can extrapolate) and @jquesnelle's dynamically adjusts to the maximum observed length? \r\n3. Because @jquesnelle's implementation `base` may suddenly change due to a longer sequence, it is less friendly to fine-tune?\r\n\r\nI'm trying to determine how to integrate and document the goodies, while keeping the diff size manageable 🤗 ",
"The technique also seems to work out-of-the-box with GPTNeoX models 🔥 With the latest [commit](https://github.com/huggingface/transformers/pull/24653/commits/d7e763628dc0b4189402059bea2dd71b828ac18e), running the script below \r\n\r\n```py\r\nfrom transformers import AutoModelForCausalLM, AutoTokenizer\r\nimport torch\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(\"EleutherAI/pythia-1.4b-deduped\")\r\nmodel = AutoModelForCausalLM.from_pretrained(\r\n \"EleutherAI/pythia-1.4b-deduped\",\r\n torch_dtype=torch.bfloat16,\r\n device_map=\"auto\",\r\n rope_scaling={\"type\": \"dynamic\", \"factor\": 2.0},\r\n)\r\n\r\nprompt = ... # see PR header for the prompt, >5k tokens\r\nquestion = \"Question: What is the paper about?\"\r\n\r\ninputs = tokenizer(prompt + question, return_tensors=\"pt\").to(\"cuda\")\r\n\r\nprint(inputs.input_ids.shape)\r\ngen_out = model.generate(**inputs, max_new_tokens=50)\r\nprint(tokenizer.batch_decode(gen_out)[0])\r\n```\r\n\r\ngets us\r\n\r\n```\r\nQuestion: What is the paper about?\r\n\r\n3.6 CONCLUSION\r\n\r\nWe have shown that Position Interpolation can extend the context window of pre-trained models to substantially longer context windows. We have\r\ndemonstrated that the models can be effectively extended to longer context windows, and\r\n```\r\n\r\nWithout the `rope_scaling` argument, we get\r\n\r\n```\r\nQuestion: What is the paper about? The. The.\r\n. The.\r\n.\r\n.\r\n.\r\n.\r\n4.\r\n. The. The. The. The. The. The. The.al.s. The. The.... a. The.\r\n```\r\n\r\nThis is particularly amazing, since we're talking about a 1.4B model 👀 \r\n\r\n(cc @bloc97 @jquesnelle, you may be interested in this finding)",
"@amyeroberts @sgugger I'd like to request the review of you two, since this is an unusual post hoc modeling change PR.\r\n\r\nA few key points:\r\n1. The technique also works well with `gptneox`, see [this comment](https://github.com/huggingface/transformers/pull/24653/#issuecomment-1632954413) for a cool example on a 1.4B model\r\n2. Adding the functionality to `gptneox` implied a minor modeling change -- the causal mask was limited to the original maximum sequence size, but there is no reason for that limitation. It's just a triangular matrix with ones.\r\n3. Decided NOT to implement on `gptneox-japanese` and `esm`, the two other models with rotary embeddings. I'm not sure if their usage justifies the implementation cost (it takes some time to validate everything is working correctly, as there are variations in the expected usage), so I'd suggest letting demand speak for itself :)\r\n4. RoPE scaling is parameterized by a `dict`, and not a `dataclass`. A `dataclass` would be better, as @sgugger suggested, but it complicates (de)serialization, needing extra code. I'd like to first work on the config file base class I've mentioned on slack, if you're okay with it -- it would make the new `dataclass` a ~50 line change, as opposed to a >200 one!\r\n5. There are new scaling strategies in the works, as mentioned in the comments above, so we can quickly add them in follow up PRs if their results are superior. As it stands, we can already hack `llama` and `gptneox` beyond their original maximum length without fine-tuning 🔥 ",
"(For brevity, I'll refer to the new NTK-By-Parts method as NTKv2)\r\n\r\nNTKv2 is an improved version of NTK. We found that NTK did not perform well when fine-tuned; the reason for this was that the resulting embedding matrix still contained some extrapolated out-of-population values that model had not seen during training. Dynamic NTK hid this by continually scaling `base` so that you never actually got to this part of the embedding values. \r\n\r\nNTKv2 is parameterized by `scale`, which has the same meaning as linear interpolation, e.g. you set it to `4` to target `8K` context length. We've found that this method, when fine-tuned, beats fine-tuned linear interpolation, which is to say it gives even better results than the recent [Meta](https://arxiv.org/abs//2306.15595) paper.\r\n\r\nIn the repository there is also now a Dynamic NTKv2, which is the same idea as the previous dynamic method, i.e. scale the hyperparamter relative to the ratio between the current context length and the model's original trained context length, while using the original embedding values when under the native pre-trained length. This also beats Dynamic NTK in the no-fine-tuning scenario.\r\n\r\n\r\n\r\nIn the above graph, [LLongMA](https://huggingface.co/conceptofmind) are the fine-tuned OpenLLaMA models we've released, trained on 1B extra tokens (v2 still in the process of training)\r\n\r\n> 1. There is no way to parameterize the new class such that it is equivalent to the original NTK-aware scaling?\r\n\r\nUnfortunately no. I understand these different methods can get unwieldly quickly, but NTKv2 appears to be strictly better than original NTK -- I would potentially just advocate replacing the original NTK with this, but that could also be done in a follow-up PR too; the results that this gives you is already Very Good (TM).\r\n\r\nFWIW the LLongMA models use the exact modeling code here to maintain compatibility without needing `trust_remote_code` if/when this PR gets merged 🙂 ",
"> Hey @bloc97 @jquesnelle 👋\r\n> \r\n> Looking at your recent PR ([this one](https://github.com/jquesnelle/scaled-rope/pull/1)) -- am I right in saying that\r\n> \r\n> 1. There is no way to parameterize the new class such that it is equivalent to the original NTK-aware scaling?\r\n> 2. @bloc97's PR and @jquesnelle's dynamic implementation are slightly different, in the sense that @bloc97's targets a specific length (but can extrapolate) and @jquesnelle's dynamically adjusts to the maximum observed length?\r\n> 3. Because @jquesnelle's implementation `base` may suddenly change due to a longer sequence, it is less friendly to fine-tune?\r\n> \r\n> I'm trying to determine how to integrate and document the goodies, while keeping the diff size manageable 🤗\r\n\r\n 1. Unfortunately \"NTK v1\" was just not good for fine-tuning unless alpha is set correctly, so I think going forward people should strictly use \"v2\" for fine-tuning, and consider v1 to be only for inference. However it is possible for me to parameterize the \"v2\" class so that you can make it equivalent to original NTK scaling, but it will take additional effort that is probably best used elsewhere. There are only few \"NTK v1\" finetunes are out there.\r\n 2. For points 2 and 3, finetuning with Dynamic method will need additional consideration in the code on the training side, because training happens on all the tokens at once, dynamic implemented as is (for inference) will probably not be applied correctly. We are still working on the theoretical side of potentially training a dynamic model.",
"@bloc97 @jquesnelle thank you for your input -- and excited to hear about the performance of NTK-By-Parts!\r\n\r\nBased on your comments, I will:\r\n1 - Delete the `ntk` approach, as NTK-By-Parts is superior;\r\n2 - Merge what I have now -- we are going to have a release early next week, so this would already be included in `v4.31`;\r\n3 - Open a follow-up PR with NTK-By-Parts 🤗 Or, if you're interested in contributing with the technique, we'd highly appreciate it! Just let me know over the next days.\r\n\r\n⚠️ Note -- the latest commits have changed the structure of the modeling code from overloading the existing RoPE class to inheriting from the original implementation, so we don't risk ending up with a Frankenstein class as we add more strategies. The parameterization stayed nearly the same, so you probably only need to make minor adjustments to the model config files to load without `trust_remote_code`! (changed from `{\"name\": scaling type, \"factor\": scaling factor}` to {\"type\": scaling type, \"factor\": scaling factor}, as `name` is often attributed to an instance name in `transformers`)",
"Hi, I'm very glad to see that transformers supports RoPE scaling! Experiments show low ppl on long input sequences.\r\n\r\nBut in the current implementation, would there be a mismatch in generation? Here are my thoughts.\r\n\r\nSince the `seq_len` increases during the generation, the base is scaled in every generation step with different scaling factor. Since the history key_states are store in the kv_cache , they are not scaled with the new base. The scaling only affects the state of the current token.\r\n\r\nFor example, if the input sequence is of length 2048, after generating the first token, the new input length is 2049, and we scale the base with `seq_len=2049`. After generating the second token, the new input length is 2050, and we scale the base with `seq_len=2050`. But during the generation, the kv_cache is used and thus the key_states before position 2049 are not scaled according to the new length.\r\n\r\nShould all the key_states be scaled with the same base? Would it be a problem?\r\n\r\n",
"> Since the `seq_len` increases during the generation, the base is scaled in every generation step with different scaling factor. Since the history key_states are store in the kv_cache , they are not scaled with the new base. The scaling only affects the state of the current token.\r\n\r\nNote that this only happens in the dynamic method, not static scaling.\r\nThe RoPE embeddings are merged with the q_proj and k_proj (only k_proj is cached after the merge to be reused later), but interestingly, even if the k_proj is cached (thus not using the dynamic scaled RoPE embeddings correctly) the model works without problems. We are currently investigating the reason behind this, but the obvious main implication is that the q_proj is more important for RoPE than k_proj.\r\nBut yes, the correct way would be to cache k_proj before applying the RoPE embeddings, so the dynamic embeddings can be applied correctly each time the scale changes.\r\n",
"\r\n> Note that this only happens in the dynamic method, not static scaling. The RoPE embeddings are merged with the q_proj and k_proj (only k_proj is cached after the merge to be reused later), but interestingly, even if the k_proj is cached (thus not using the dynamic scaled RoPE embeddings correctly) the model works without problems. We are currently investigating the reason behind this, but the obvious main implication is that the q_proj is more important for RoPE than k_proj. But yes, the correct way would be to cache k_proj before applying the RoPE embeddings, so the dynamic embeddings can be applied correctly each time the scale changes.\r\n\r\nThank you for your comment.\r\nWe have also observed that there is no significant difference in whether key_states are stored before or after applying RoPE. However, I think more experiments is necessary to test this.\r\n\r\nI implement storing KV_cache before apply RoPE. Anyone interest in the implementation can refer to this [code](https://github.com/ymcui/Chinese-LLaMA-Alpaca/pull/743).\r\n",
"> Hi, I'm very glad to see that transformers supports RoPE scaling! Experiments show low ppl on long input sequences.\r\n> \r\n> But in the current implementation, would there be a mismatch in generation? Here are my thoughts.\r\n> \r\n> Since the `seq_len` increases during the generation, the base is scaled in every generation step with different scaling factor. Since the history key_states are store in the kv_cache , they are not scaled with the new base. The scaling only affects the state of the current token.\r\n> \r\n> For example, if the input sequence is of length 2048, after generating the first token, the new input length is 2049, and we scale the base with `seq_len=2049`. After generating the second token, the new input length is 2050, and we scale the base with `seq_len=2050`. But during the generation, the kv_cache is used and thus the key_states before position 2049 are not scaled according to the new length.\r\n> \r\n> Should all the key_states be scaled with the same base? Would it be a problem?\r\n\r\nI have question similar to this. The graph showing dynamic scaling in this [reddit post](https://www.reddit.com/r/LocalLLaMA/comments/14mrgpr/dynamically_scaled_rope_further_increases/) showing that the perplexity of the model with dynamic scaling are same with model without scaling until 2048 tokens length (Of course this must be because the base value did not change before 2048 tokens).\r\n\r\nThis got me thinking, If I first generate with long context (say 4096 tokens), the base value would change accordingly (which is around 35000). Then, if I next generate with short context like 1024 context, the `sin_cache` and `cos_cache` will not be reverted back when the base value still 10000 hence the perplexity is raised. Should there be changed to `forward` call especially for dynamic scaled embeddings?",
"> This got me thinking, If I first generate with long context (say 4096 tokens), the base value would change accordingly (which is around 35000). Then, if I next generate with short context like 1024 context, the `sin_cache` and `cos_cache` will not be reverted back when the base value still 10000 hence the perplexity is raised. Should there be changed to `forward` call especially for dynamic scaled embeddings?\r\n\r\nI have the same concern. In the dynamic scaling, the sin and os may should not be cached ",
"Hi\r\nI try to test ntk effect on my trained neox model. Using dynamic ntk(https://www.reddit.com/r/LocalLLaMA/comments/14mrgpr/dynamically_scaled_rope_further_increases/). However, it is found that the ppl will oscillate. What is the reason for this?\r\n<img width=\"1571\" alt=\"image\" src=\"https://github.com/huggingface/transformers/assets/21999339/88fdacc6-e986-4a66-8709-dd00742724ab\">\r\n\r\nhere is my test code. I modified from https://huggingface.co/docs/transformers/perplexity .\r\n```\r\nimport json\r\nfrom transformers import AutoModel, AutoTokenizer, AutoConfig\r\nimport torch\r\nfrom tqdm import tqdm\r\nimport traceback\r\n\r\ndevice = \"cpu\"\r\nif torch.cuda.is_available():\r\n device = \"cuda\"\r\n\r\nconfig = AutoConfig.from_pretrained(model_dir, trust_remote_code=True)\r\nconfig.rope_scaling = {\r\n \"type\": \"dynamic\",\r\n \"factor\": 2,\r\n}\r\n\r\nmodel = AutoModel.from_pretrained(model_dir, config=config, trust_remote_code=True, torch_dtype=torch.float16)\r\nmodel.eval()\r\nmodel = model.to(device)\r\ntokenizer = AutoTokenizer.from_pretrained(model_dir, trust_remote_code=True)\r\n\r\nwith torch.inference_mode():\r\n kv = {}\r\n try:\r\n for value in tqdm(range(32, 12000, 32)):\r\n max_length = stride = value\r\n\r\n with open(\"gov_report_test.json\") as f:\r\n data = json.load(f)\r\n\r\n ppls = []\r\n for idx, line in enumerate(data):\r\n if idx >= 1:\r\n break\r\n encodings = tokenizer(line, return_tensors=\"pt\")\r\n seq_len = encodings.input_ids.size(1)\r\n\r\n nlls = []\r\n prev_end_loc = 0\r\n for begin_loc in range(0, seq_len, stride):\r\n end_loc = min(begin_loc + max_length, seq_len)\r\n trg_len = end_loc - prev_end_loc # may be different from stride on last loop\r\n input_ids = encodings.input_ids[:, begin_loc:end_loc].to(device)\r\n target_ids = input_ids.clone()\r\n target_ids[:, :-trg_len] = -100\r\n\r\n outputs = model(input_ids, labels=target_ids)\r\n\r\n # loss is calculated using CrossEntropyLoss which averages over valid labels\r\n # N.B. the model only calculates loss over trg_len - 1 labels, because it internally shifts the labels\r\n # to the left by 1.\r\n neg_log_likelihood = outputs.loss\r\n\r\n nlls.append(neg_log_likelihood)\r\n\r\n prev_end_loc = end_loc\r\n if end_loc == seq_len:\r\n break\r\n\r\n ppl = torch.exp(torch.stack(nlls).mean())\r\n ppls.append(ppl)\r\n\r\n total_ppl = torch.stack(ppls).mean()\r\n kv[value] = total_ppl.item()\r\n print(value, total_ppl.item())\r\n except Exception as e:\r\n print(e)\r\n print(value, seq_len)\r\n print(traceback.format_exc())\r\n```",
"@guozhiyao Nothing immediately comes to mind, it could be even a model \"feature\" (looking at the plot for the original model, which also has the periodicity). \r\n\r\nWould you be able to a) run the same script for LLaMA and b) repeat your experiment using the script @jquesnelle used ([this one](https://github.com/jquesnelle/scaled-rope/blob/master/eval/perplexity.py))? a) should rule out model-specific issues and b) should rule out code-specific issues.\r\n",
"> @guozhiyao Nothing immediately comes to mind, it could be even a model \"feature\" (looking at the plot for the original model, which also has the periodicity).\r\n> \r\n> Would you be able to a) run the same script for LLaMA and b) repeat your experiment using the script @jquesnelle used ([this one](https://github.com/jquesnelle/scaled-rope/blob/master/eval/perplexity.py))? a) should rule out model-specific issues and b) should rule out code-specific issues.\r\n\r\n@gante Thanks a lot. It is solved by using the code.",
"> > This got me thinking, If I first generate with long context (say 4096 tokens), the base value would change accordingly (which is around 35000). Then, if I next generate with short context like 1024 context, the `sin_cache` and `cos_cache` will not be reverted back when the base value still 10000 hence the perplexity is raised. Should there be changed to `forward` call especially for dynamic scaled embeddings?\r\n> \r\n> I have the same concern. In the dynamic scaling, the sin and os may should not be cached\r\n\r\n@airaria I had the same problem, not only `cos` and `sin`, `inv_freq` also don't cache. The `_set_cos_sin_cache` of `GPTNeoXDynamicNTKScalingRotaryEmbedding` can be changed to the following form, but the efficiency is not optimized.\r\n\r\n```\r\n def _set_cos_sin_cache(self, seq_len, device):\r\n self.max_seq_len_cached = 0\r\n\r\n base = self.base\r\n if seq_len > self.max_position_embeddings:\r\n base = self.base * (\r\n (self.scaling_factor * seq_len / self.max_position_embeddings) - (self.scaling_factor - 1)\r\n ) ** (self.dim / (self.dim - 2))\r\n\r\n inv_freq = 1.0 / (base ** (torch.arange(0, self.dim, 2).float().to(device) / self.dim))\r\n self.register_buffer(\"inv_freq\", inv_freq)\r\n\r\n t = torch.arange(seq_len, device=device, dtype=self.inv_freq.dtype)\r\n\r\n freqs = torch.einsum(\"i,j->ij\", t, self.inv_freq)\r\n # Different from paper, but it uses a different permutation in order to obtain the same calculation\r\n emb = torch.cat((freqs, freqs), dim=-1)\r\n self.cos_cached = emb.cos()[None, None, :, :]\r\n self.sin_cached = emb.sin()[None, None, :, :]\r\n```",
"> > > This got me thinking, If I first generate with long context (say 4096 tokens), the base value would change accordingly (which is around 35000). Then, if I next generate with short context like 1024 context, the `sin_cache` and `cos_cache` will not be reverted back when the base value still 10000 hence the perplexity is raised. Should there be changed to `forward` call especially for dynamic scaled embeddings?\r\n> > \r\n> > \r\n> > I have the same concern. In the dynamic scaling, the sin and os may should not be cached\r\n> \r\n> @airaria I had the same problem, not only `cos` and `sin`, `inv_freq` also don't cache. The `_set_cos_sin_cache` of `GPTNeoXDynamicNTKScalingRotaryEmbedding` can be changed to the following form, but the efficiency is not optimized.\r\n> \r\n> ```\r\n> def _set_cos_sin_cache(self, seq_len, device):\r\n> self.max_seq_len_cached = 0\r\n> \r\n> base = self.base\r\n> if seq_len > self.max_position_embeddings:\r\n> base = self.base * (\r\n> (self.scaling_factor * seq_len / self.max_position_embeddings) - (self.scaling_factor - 1)\r\n> ) ** (self.dim / (self.dim - 2))\r\n> \r\n> inv_freq = 1.0 / (base ** (torch.arange(0, self.dim, 2).float().to(device) / self.dim))\r\n> self.register_buffer(\"inv_freq\", inv_freq)\r\n> \r\n> t = torch.arange(seq_len, device=device, dtype=self.inv_freq.dtype)\r\n> \r\n> freqs = torch.einsum(\"i,j->ij\", t, self.inv_freq)\r\n> # Different from paper, but it uses a different permutation in order to obtain the same calculation\r\n> emb = torch.cat((freqs, freqs), dim=-1)\r\n> self.cos_cached = emb.cos()[None, None, :, :]\r\n> self.sin_cached = emb.sin()[None, None, :, :]\r\n> ```\r\n\r\nThere is a precision difference between the `inv_freq` here and the `inv_freq` defined in `__init__`, and the reason is not found. In order to ensure the same performance as the original when `seq_len <= self.max_position_embeddings`, it can only be modified to this form.\r\n\r\n```\r\n def _set_cos_sin_cache(self, seq_len, device):\r\n self.max_seq_len_cached = 0\r\n\r\n if seq_len > self.max_position_embeddings:\r\n base = self.base * (\r\n (self.scaling_factor * seq_len / self.max_position_embeddings) - (self.scaling_factor - 1)\r\n ) ** (self.dim / (self.dim - 2))\r\n inv_freq = 1.0 / (base ** (torch.arange(0, self.dim, 2).float().to(device) / self.dim))\r\n else:\r\n inv_freq = self.inv_freq\r\n\r\n t = torch.arange(max(seq_len, self.max_position_embeddings), device=device, dtype=inv_freq.dtype)\r\n\r\n freqs = torch.einsum(\"i,j->ij\", t, inv_freq)\r\n # Different from paper, but it uses a different permutation in order to obtain the same calculation\r\n emb = torch.cat((freqs, freqs), dim=-1)\r\n self.cos_cached = emb.cos()[None, None, :, :]\r\n self.sin_cached = emb.sin()[None, None, :, :]\r\n```"
] | 1,688 | 1,689 | 1,689 |
MEMBER
| null |
# What does this PR do?
This is an experimental PR for discussion, so we can decide whether to add this pattern.
## Context
In the past week, there have been several developments about scaling RoPE (Rotary Position Embeddings, i.e. Llama's position embeddings) so as to be able to extrapolate beyond 2048 tokens. Without any scaling and/or finetuning, the perplexity quickly explodes when we go beyond 2048 tokens. Here's the sequence of RoPE scaling improvements, announced mostly on Reddit (a short sketch of the linear and dynamic scaling rules follows the list):
1. Linear scaling -- Simply divide the position index by a scaling factor. Needs fine-tuning to observe the best results. Discussed in [this lmsys blog post](https://lmsys.org/blog/2023-06-29-longchat/). Credits to the reddit user `/u/kaiokendev`.
2. NTK-aware scaling -- proposed in [this reddit thread](https://www.reddit.com/r/LocalLLaMA/comments/14lz7j5/ntkaware_scaled_rope_allows_llama_models_to_have/). Scaling the RoPE Fourier space linearly is not optimal to evenly distribute information, so this can be seen as a improved linear scaling. Works okay without fine-tuning, but seems to benefit from it. Credits to the reddit user `/u/bloc97`. EDIT: following the comments in this thread, this technique will not be added!
3. Dynamic NTK scaling -- proposed in [this reddit thread](https://www.reddit.com/r/LocalLLaMA/comments/14mrgpr/dynamically_scaled_rope_further_increases/). It's a form of NTK-aware scaling that a) [works quite well without fine-tuning](https://preview.redd.it/2qdj7itsb39b1.png?width=662&format=png&auto=webp&v=enabled&s=f9b2f044f59fbad5ad51fefacda0b61f724f12f1); b) doesn't degrade the performance if the model is used with short sequences; c) gracefully scales to long sequences, under a fixed parameterization. Credits to the reddit user `/u/emozilla`.
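
As a quick reference, here is a sketch of the two scaling rules kept in this PR (the dimension, base, and factor values are illustrative; the actual logic lives inside the rotary embedding classes):

```python
import torch

dim, base, factor = 128, 10000.0, 2.0   # illustrative values
seq_len, max_pos = 4096, 2048           # current length vs. original training length

# Linear scaling: keep the frequencies, shrink the position indices.
inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2).float() / dim))
positions = torch.arange(seq_len).float() / factor

# Dynamic NTK scaling: keep the position indices, grow the base once the
# sequence exceeds the original maximum length.
ntk_base = base * ((factor * seq_len / max_pos) - (factor - 1)) ** (dim / (dim - 2))
ntk_inv_freq = 1.0 / (ntk_base ** (torch.arange(0, dim, 2).float() / dim))
```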
## Changes in the PR
The goal of this PR is to debate whether we want to include RoPE scaling support, with working code as reference. The field is evolving quite fast, so I've added it in a way that we can quickly add new scaling strategies and keep surfing the wave 🏄 Of course, the implementation itself is up for discussion! (An alternative implementation would be to have separate classes for the scalable RoPEs)
Pros:
- Flexible implementation that allows adding new scaling methods in minutes;
- Works quite well with pre-trained models (see example below), through dynamic NTK scaling;
- Supports strategies that are compatible with fine-tuning (it is unclear whether dynamic NTK works well with fine-tuning, and [it seems like Linear scaling is better after fine-tuning](https://www.reddit.com/r/LocalLLaMA/comments/14ojd7s/summary_post_for_higher_context_sizes_for_this/))
Cons:
- `rope_scaling` is a dictionary input, which is somewhat undesirable;
- additional if/else branches in RoPE
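
Concretely, the dict parameterization mentioned in the cons above is used like this (the model and scaling factor are just an example):

```python
from transformers import AutoModelForCausalLM

# Example: enable dynamic NTK scaling with a 2x factor on a RoPE-based model.
model = AutoModelForCausalLM.from_pretrained(
    "EleutherAI/pythia-1.4b-deduped",
    rope_scaling={"type": "dynamic", "factor": 2.0},
)
```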
## Example
Consider the following prompt from a paper transcript, containing ~6k tokens:
<details>
<summary> prompt built from the transcript of https://arxiv.org/abs/2306.15595 </summary>
```py
prompt = '''
You are given this machine learning research paper, please read it carefully and answer the follow up question.
=== BEGIN ===
2306.15595v2 [cs.CL] 28 Jun 2023
arXiv
EXTENDING CONTEXT WINDOW OF LARGE LAN-
GUAGE MODELS VIA POSITION INTERPOLATION
Shouyuan Chen Sherman Wong Liangjian Chen Yuandong Tian
Meta Platforms Inc.
{chenshouyuan, shermanwong, cli, yuandong}@meta . com
1 INTRODUCTION
Large language models (LLMs) typically come with a pre-defined context window size. For exam-
ple, inputs to LLaMA models (Touvron et al., 2023) must be fewer than 2048 tokens. This pre-set
context window limit is frequently exceeded in applications such as conducting long conversations,
summarizing long documents, or executing long-term planning. For these applications, LLMs with
longer context windows are preferred. However, training an LLM from scratch with long context
windows requires significant investments. This naturally leads to a question: Can we extend the
context window of an existing pre-trained LLM?
One straightforward approach is to fine-tune an existing pre-trained Transformer with a longer con-
text window. However, empirically, we found that models trained this way adapt to long context
windows very slowly. After training for more than 10000 batches, the effective context window
saw a minimal increase, moving from 2048 to 2560 (Table 4). This suggests that such method is
inefficient for extending to substantially longer context windows.
While certain techniques such as ALiBi (Press et al., 2022) and LeX (Sun et al., 2022) enable length
extrapolation of Transformers, i.e. train on short context windows and inference on longer ones,
many existing pre-trained LLMs, including LLaMA (Touvron et al., 2023), use positional encodings
that have weak extrapolation properties (e.g., RoPE (Su et al., 2021)). Therefore, the applicability
of these techniques for extending the context window sizes of such LLMs remains limited.
In this work, we introduce Position Interpolation to enable context window extensions for certain
existing pre-trained LLMs, including LLaMA. The key idea is, instead of extrapolation, we directly
down-scale the position indices so that the maximum position index matches the previous context
window limit in the pre-training stage. See Figure 1 for an illustration. In other words, to accom-
modate more input tokens, we interpolate the position encodings at neighboring integer positions,
utilizing the fact that position encodings can be applied on non-integer positions, as opposed to
extrapolating outside the trained positions, which may lead to catastrophic values. We verify our
approach theoretically, by showing that the interpolated attention score has a much smaller upper
bound (~ 600x smaller in LLaMA 7B setting) than the extrapolated one, and is thus much more
stable. Therefore, interpolated position encodings are easier for the model to adapt.
Empirically, we found that Position Interpolation is highly effective and efficient, requiring only a
very short period of fine-tuning for the model to fully adapt to greatly extended context windows.
We present experimental results for extending the context window to up to 32768 from the initial
2048 across 7B to 65B LLaMA models using Position Interpolation. Our results show that
1. Position Interpolation can easily enable very long context windows (e.g. 32768), requiring
only fine-tuning for 1000 steps on the Pile (Gao et al., 2020) to achieve a good quality.
The cost of fine-tuning is negligible compared to the pre-training costs. This confirms
our hypothesis that it is relatively easy for the models to adapt to interpolated position
encodings.
2. Position Interpolation generates strong models that can effectively make use of much ex-
tended context window. We show that models extended by Position Interpolation enjoy
significant perplexity gains from greatly extended context windows for text modeling, and
we show that the perplexity reduces graceful with the enlargement of context windows.
We also applied Position Interpolation in a long text summarization task, and demonstrate
competitive performances.
3. Position Interpolation preserves model quality relatively well for tasks within its original
context window sizes. We present a variety of evaluation results for the extended LLaMA
models on the original LLaMA benchmark. Compared with original LLaMA models, the
extended LLaMA models saw a minor degradation on several standard benchmarks within
a 2048 token limit.
Our results highlight the innate ability of Transformer models to “extrapolate to sequence lengths
longer than the ones encountered during training” as hypothesized in the seminal work of Vaswani
et al. (2017). We reaffirm this hypothesis and suggest that the previously known weakness of ex-
trapolating to longer sequences for language modeling (Press et al., 2022) may be due to direct
extrapolation of positional encodings and it can be largely mitigated by interpolating position en-
codings instead.
Concurrent work. Right before our release, we are informed with a concurrent blogpost (Super-
HOT kaiokendev (2023)) that also interpolates positional encoding in RoPE to extend the context
window from 2K to 8K. Recently, open source community picks it up in Reddit post ! and Github
Issues 2, which shows that fine-tuning with LoRA (Hu et al., 2021) also seems to work well. Our
paper shows a full fine-tuning with up to 65B model work well with Position Interpolation, and we
also give theoretical explanations why interpolation achieves much more stable results than extrap-
olation, by showing that the upper bound of interpolated attention score is much lower than that of
extrapolated ones.
2 METHOD
2.1 BACKGROUND: ROTARY POSITION EMBEDDING (ROPE)
Transformer models require explicit positional information to be injected, typically in the form of
positional encodings, to represent the order of inputs. We consider Rotary Position Embedding
(RoPE) (Su et al., 2021), which is the position encoding used in the LLaMA model (Touvron et al.,
2023). Given a position index m ∈ [0, c) and an embedding vector x := [x_0, x_1, ..., x_{d-1}], where
d is the dimension of the attention head, RoPE defines a vector-valued complex function f(x, m) as
follows
Using RoPE, the self-attention score
is only dependent on the relative position m − n through trigonometric functions. Here q and k are the
query and key vector for a specific attention head. At each layer, RoPE is applied on both query and
key embeddings for computing attention scores.
2.2 DIRECT EXTRAPOLATION
While the attention score in RoPE only depends on the relative positions, which is what we want,
its extrapolation performance is not great. In particular, when directly extending to larger context
windows unseen in the training, the perplexity may shoot up to very high numbers (i.e., > 10^3),
comparable to untrained models.
Ideally, we want to see the model trained on a context window of size L = 2048 to still work
reasonably well on longer context window, but may not have the capability to leverage information
that appears beyond L. For example, to answer a question located at 3000, the model trained on
maximal window size of L = 2048 cannot leverage evidences provided at location 0, but still
can leverage the evidences provided at location 2900. In contrast, in reality we see catastrophic
behaviors, i.e., question at location 3000 cannot be answered correctly, even if the evidences are
located at location 2900.
What is the reason behind? How could this happen if the attention score a_{m−n} decays as the relative
distance |m − n| increases, according to Section 3.4.3 of (Su et al., 2021), and content from very
far distances should not matter that much? It turns out that the upper bound derived in Section 3.4.3
of (Su et al., 2021) may be too loose: while it indeed decays with respect to |m − n|, the bound
can still be quite large (i.e., the bound can depend critically on the magnitude of v_j) and thus
vacuous. In fact, if we treat all trigonometric functions as basis functions (i.e., φ_j(s) := e^{i s θ_j}), and
think about Eqn. 2 as a basis expansion as the following:
where s is the positional span between a query and a key and h_j := (q_{2j} + i q_{2j+1})(k_{2j} − i k_{2j+1})
are complex coefficients depending on q and k (here the definition of h_j is exactly the same as the
definition of h_j in Sec 3.4.3 in RoPE (Su et al., 2021)). Now the issue becomes clear: as shown
in Fig. 2, a(s) can be small in magnitude in the range of [0, 2048], but gives huge values out of the
region. The underlying reason is that the trigonometric family {φ_j} (with sufficiently large d) is
a universal approximator and can fit any arbitrary function. Therefore, for a(s), there always exist
coefficients {h_j} (i.e. key and query) that correspond to small function values in [0, 2048] but
much larger ones in regions beyond.
2.3 PROPOSED APPROACH: POSITION INTERPOLATION (PI)
In Fig. 2, thanks to the smoothness of the basis functions φ_j, interpolation is much more stable and will
not lead to wild values. Therefore, instead of extrapolating the attention score in Eqn. 3 to s > L,
how about we define an attention score ã(s) = a(Ls/L′) where L′ is the longer context window?
Formally, we replace RoPE f by f′ defined as follows
We call this transformation on the position encoding Position Interpolation. In this step, we reduce
position indices from [0, L') to [0, L) to match the original range of indices before computing RoPE.
Consequently, as inputs to RoPE, the maximum relative distance between any two tokens has been
reduced from L′ to L. Since we align the ranges of position indices and relative distances before
and after extension, we mitigate the effect on attention score computation due to context window
extensions, which makes it easier for the model to adapt. To further demonstrate this is the case, in the
following theorem, we show that the interpolated attention score is well-behaved:
While there is no closed form for B(s) := Σ_j |A_{j+1}(s)|, numerically it is at least larger than d, and for many positional differences s, B(s) is much larger than d
(check Appendix B for the plot). Therefore, the interpolation bound is at least 2 · 294.73 ≈ 600×
smaller than the extrapolation bound, and thus the interpolated attention score is much more stable
than the extrapolated one.
Notably, our method of rescaling of position indices does not introduce extra weight, or modify
the model architecture in any way. This makes it attractive in practical applications, since most
infrastructure and optimization for the original model can be reused after the extension.
Fine-tuning. We can further fine-tune the interpolated model using the next token prediction task
with interpolated position encodings on the extended context window size using a pre-training cor-
pus such as the Pile (Gao et al., 2020). In the next section, we show that our fine-tuning process
only needs tens to hundreds of thousands of examples. We also find that the result of the fine-tuning
is not sensitive to the choice of examples. The reason may be that the model is only adapting to the
new context window during the fine-tuning phase, starting from a good initialization, as opposed to
acquiring new knowledge.
Other ways to reduce interpolation/extrapolation bound. From the expression of the interpola-
tion (Eqn. 5) and extrapolation bound (Eqn. 8), a common term is max_j |h_j|, which is the maximal
magnitude of the query/key products. If we enforce a regularization on |h_j| during LLM training, it is
possible that the catastrophic extrapolation error can be mitigated or even resolved. In fact, if we
apply ridge regression with proper regularization to fit a curve in Fig. 2, the magnitude of extrapo-
lated a(s) when s > L can be comparable to that within [0, L]. To our knowledge, we are not aware
of existing LLM pre-training techniques that leverage this regularization and will leave it for future
work.
3 EXPERIMENTS
We show Position Interpolation can effectively extend context window up to 32 times of the original
size, and such extension can be done with only several hundreds of training steps. We show the
resulting models are strong LLMs with fully effective long context windows. We demonstrate its
performance in a number of tasks including language modeling, passkey retrieval, and long doc-
ument summarization. We also present benchmark results of the extended models on the original
LLaMA evaluation benchmarks.
3.1 SETUP
Model Variants. We extended the pre-trained 7B, 13B, 33B and 65B LLaMA models (Touvron
et al., 2023) to various context window sizes of up to 32768, using either direct fine-tuning or the
Position Interpolation method. Except for rescaling the position indices for models extended with
Position Interpolation, we did not modify the LLaMA model architectures (Touvron et al., 2023) in any
way.
Training Procedure. We fine-tune all model variants using the next token prediction objective. We
use AdamW (Loshchilov & Hutter, 2019) with β1 = 0.9 and β2 = 0.95. We use a linear learning
rate warmup of 20 steps starting from 10% of the maximum learning rate. For 7B and 13B models,
we set the learning rate to 2 × 10^-5 and for 33B and 65B models we set the learning rate to 10^-5. We
set the weight decay to zero. For extending 7B, 13B and 33B models to the 8192 context window
size, we use 32 A100 GPUs and 64 global batch size. For all other cases we use 128 A100 GPUs and
128 global batch size. We note that the main need of using more GPUs is memory limitation during
fine-tuning, and it is possible to use fewer GPUs in certain cases. We train all models using PyTorch
(Paszke et al., 2019) with Fully Sharded Data Parallel (Zhao et al., 2023) and Flash Attention (Dao
et al., 2022).
If not specified otherwise, for the Position Interpolation method, we fine-tune the models for 1000
steps. For the direct fine-tuning method, we use 10000 steps. We primarily fine-tune using the Pile
training dataset (Gao et al., 2020). In Section 3.4 we also compared fine-tuning performance on the
RedPajama dataset (Computer, 2023).
3.2 LONG SEQUENCE LANGUAGE MODELING
We evaluate the long sequence language modeling performance of our extended models and base-
lines on two datasets: book corpus (PG-19) (Rae et al., 2020) and cleaned Arxiv Math proof-pile
dataset (Azerbayev et al., 2022).
We use the test splits of PG19 (Rae et al., 2020) and proof-pile (Azerbayev et al., 2022). For PG19,
we use the whole test split consisting of 100 documents. For the proof-pile dataset, we use a random
subsample of 128 documents with at least 32768 SentencePiece (Kudo & Richardson, 2018) tokens
and truncate to the first 32768 tokens for each test document. We evaluate perplexity at various
context window sizes by using a sliding window approach following Press et al. (2022) with stride
S = 256.
In Table 1 and Table 2, we report the perplexity results for our models and baselines on the datasets.
From the results, we found that models extended with our method enjoy a significantly improved
perplexity from longer context window sizes. By increasing the context window size from 2048 to
16384, we observed -0.28 and -0.5 reductions of perplexity for extending LLaMA 7B models on
both datasets, -0.27 and -0.48 reductions for extending LLaMA 13B models, and -0.14 and -0.42
reductions for extending LLaMA 33B models. For LLaMA 65B models, we observed -0.12 and
-0.3 reductions of perplexity by extending to the 8192 context window size.
In general, we observed a consistent trend of our models achieving better perplexity with longer
context windows. This indicates our models can effectively make use of the longer context windows
to better predict next tokens in language modeling tasks. Moreover, we found this trend extends to
32768 window size without diminishing on the PG19 dataset for LLaMA 7B and 13B models. This
indicates that our method may enable extension to even longer context windows.
In contrast, we observed that models extended via the direct fine-tuning method have shown regres-
sion (up to +0.48) or minor improvement (up to -0.12) on the perplexity at longer context windows.
This indicates that models extended this way have limited capability of making use of context win-
dows longer than their pre-trained settings.
We saw a minor degradation of the perplexity on the original context window of 2048 for our ex-
tended models in some cases. For example, on the Proof-pile dataset, we saw a degradation ranging
from 0.01 to 0.05 across all models extended with Position Interpolation. A small degradation
of performance within original evaluation context window is expected since Position Interpolation
forces position encodings in original context window to reside in a much narrower region, which
may negatively affect the language model’s performance. We present more benchmark results on
the original context window size in Section 3.4.
In Table 3 we report the relationship between perplexity and the number of fine-tuning steps for
LLaMA 7B model extending to 8192 and 16384 context window sizes using Position Interpolation
evaluated on the PG19 dataset. We can see without fine-tuning (at step 0) the model can exhibit
certain language modeling capability, as indicated by < 20 perplexity for extending to the 8192 context
window (in contrast, the direct extrapolation method leads to > 10^3 perplexity). With fine-tuning,
we observed that the perplexity improves quickly. At 200 steps the models surpassed the original
model's perplexity on the 2048 context window size, indicating that the models are gaining the ability to
effectively use sequences longer than the pre-training settings for language modeling. At 1000 steps, we can
see the models have improved steadily and achieve a significantly better perplexity.
3.3 MEASURING EFFECTIVE CONTEXT WINDOW SIZE THROUGH PASSKEY RETRIEVAL
We study the effective context window size, i.e. the maximum distance a token can effectively
attend to during inference, of our models after extension. To measure this, we follow a synthetic
evaluation task of passkey retrieval proposed by Mohtashami & Jaggi (2023). In this task, the models
are asked to recover a random passkey hidden in a long document. See Figure 3 for the format of
the document.
Given a language model, we estimate the upper and lower bounds of effective context windows as
follows. Suppose the random passkey is k tokens away from the end of the input. When a model
persistently fails to retrieve the correct passkey value across several independent attempts, it suggests
that the effective context window size of the model is less than k. Conversely, if a model consistently
succeeds in retrieving the correct passkey value, we deduce that the effective context window size
of the model is at least k.
We evaluate the 7B and 33B LLaMA model variants that are extended via Position Interpolation or
direct fine-tuning. For each model, we use 32 different k uniformly spaced in the targeted context
window L′ and run the above tests 10 times for each k, where each time a random passkey of 5
random digits is used. In Table 4, we report k_max as a function of the number of fine-tuning steps.
We can see that models extended via Position Interpolation all successfully attain their desired ex-
tension objectives in terms of effective context window sizes, indicated by the effective context
window size reaching the maximum k_max = L′ after merely fine-tuning for 200 steps, consistently
across both 7B and 33B model sizes and up to 32768 context windows. In contrast, LLaMA models
that are extended via direct fine-tuning only saw a minimal increase of the effective context win-
dow size k_max from 2048 to 2560, even after fine-tuning for more than 10000 steps, with no clear
indication of an acceleration in the increase of window size.
3.4 BENCHMARKS ON ORIGINAL CONTEXT WINDOW SIZE
We evaluate the models extended by Position Interpolation on several standard benchmark tasks
within the original context window size of 2048. The evaluation results are listed in Table 5. From
the results, we saw that models extended to 8192 produce comparable results on the original bench-
mark which is designed for a much smaller context window, with a degradation of up to 2% on
the benchmark tasks, for both 7B and 33B model sizes. Models extended to longer context win-
dows regressed more on the benchmarks, but still in reasonable ranges for most tasks. We also note
that the choice of fine-tuning datasets does not seem to lead to significant differences in the benchmark
performances, which may be due to the limited number of fine-tuning steps used in our method.
The regression on benchmark tasks is consistent with our observation on perplexity regression in
Section 3.2.
3.5 LONG DOCUMENT SUMMARIZATION
In this task, we evaluate our models’ performance on the long document summarization task. In
particular, we consider the GovReport (Huang et al., 2021) dataset, which contains 17457 documents
for training and 972 documents for evaluation. Each document comes with a human generated
summary. We truncate all input documents to their first 15000 tokens.
We fine-tune the LLaMA models extended with Position Interpolation with a context window of
16384. Note the rescaling of position indices is still required during this fine-tuning step. We first
format the raw document using the prompt template in Figure 4, and then concatenate the prompt
with the ground-truth summary (truncated to 1000 tokens) associated with each document. We fine-
tune the model using the next token prediction task with the above setup for 10 epochs. The losses
from the input prompt portion of the training examples are excluded during our fine-tuning.
We use a generation temperature of 0.5 and top_p = 0.95 as our inference parameters to generate a
summarization of each document in the test set. The final output is truncated at 1000 tokens. We
used the ROUGE-1/ROUGE-2/ROUGE-L scores (Lin, 2004) as the evaluation metrics to evaluate
the models’ outputs vs the ground-truth summaries.
In Table 6 we report our evaluation results. We have also included results from two baselines in
the existing SCROLLS Leaderboard (Shaham et al., 2022; Ainslie et al., 2023). In general, we have
obtained a competitive R1 score among other models with minimal tuning of hyper-parameters. This
result suggests our models with 16384 context window can effectively handle the long document
summarization task.
=== END OF FILE ===
'''
```
</details>
If we place it in the following example
```py
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("huggyllama/llama-7b")
model = AutoModelForCausalLM.from_pretrained(
"huggyllama/llama-7b",
load_in_8bit=True,
device_map="auto",
)
prompt = ...
question = "Question: What is the paper about?"
inputs = tokenizer(prompt + question, return_tensors="pt").to("cuda")
print(inputs.input_ids.shape)
gen_out = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.batch_decode(gen_out)[0])
```
we get:
```
Question: What is the paper about? a a a a a a a a a a a a b: a a a a a a: a in a a a a a a [(b a b. a [b [b [b. [b [b [( [( [( [( [( [( [b [(b [b [b
[b [(( [((: [(: [: [: [((((((0:(((((al:
```
However, if we add `rope_scaling={"type": "dynamic", "factor": 2.0}` in `from_pretrained`, we now get:
```
Question: What is the paper about?
Answer: The paper is about extending the context window of Transformer models.
Answer: The paper is about extending the context window of Transformer models.
Answer: The paper is about extending the context window of Transformer models.
Answer: The paper is about extending the context window of Transformer models.
Answer: The paper is about extending the context window of Transformer models.
Answer: The paper is about extending the context window of Transformer models.
Answer: The
```
Better generation parameterization can definitely be selected, but you get the idea -- with these changes, models with RoPE can handle much larger contexts right out of the box 🔥
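As a side note, the same `rope_scaling` argument also accepts a `"linear"` type, where the scaling factor is fixed up front instead of being derived from the input length at inference time. A minimal sketch (same checkpoint as above; the factor is chosen purely for illustration):
```py
from transformers import AutoModelForCausalLM

# Linear (static) RoPE scaling: position indices are always compressed by `factor`,
# which typically benefits from a short fine-tune at the extended length.
model = AutoModelForCausalLM.from_pretrained(
    "huggyllama/llama-7b",
    load_in_8bit=True,
    device_map="auto",
    rope_scaling={"type": "linear", "factor": 2.0},
)
```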
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24653/reactions",
"total_count": 18,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 18,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24653/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24653",
"html_url": "https://github.com/huggingface/transformers/pull/24653",
"diff_url": "https://github.com/huggingface/transformers/pull/24653.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24653.patch",
"merged_at": 1689263250000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24652
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24652/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24652/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24652/events
|
https://github.com/huggingface/transformers/pull/24652
| 1,788,260,133 |
PR_kwDOCUB6oc5Uor3m
| 24,652 |
fixing name position_embeddings to object_queries
|
{
"login": "Lorenzobattistela",
"id": 70359945,
"node_id": "MDQ6VXNlcjcwMzU5OTQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/70359945?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Lorenzobattistela",
"html_url": "https://github.com/Lorenzobattistela",
"followers_url": "https://api.github.com/users/Lorenzobattistela/followers",
"following_url": "https://api.github.com/users/Lorenzobattistela/following{/other_user}",
"gists_url": "https://api.github.com/users/Lorenzobattistela/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Lorenzobattistela/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Lorenzobattistela/subscriptions",
"organizations_url": "https://api.github.com/users/Lorenzobattistela/orgs",
"repos_url": "https://api.github.com/users/Lorenzobattistela/repos",
"events_url": "https://api.github.com/users/Lorenzobattistela/events{/privacy}",
"received_events_url": "https://api.github.com/users/Lorenzobattistela/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"@Lorenzobattistela For the repo consistency and quality checks, you'll need to run `make fix-copies` and then `make style` and push any changes made ",
"@amyeroberts Done, just updated with the changes for repo consistency and quality. I don't know why, but testing pipelines and torch tests are failling within the installation step (but I did not changed anything related to it), and the test_worflow also failed just for torch. I'll wait for next instructions. Thanks!",
"@Lorenzobattistela hmmmm, interesting. Could you try rebasing on main? \r\n\r\nSome of the tests are failing because of the changes in this PR: https://app.circleci.com/pipelines/github/huggingface/transformers/67974/workflows/6a69bd9f-d35a-4964-868b-14fdd921d813/jobs/850696\r\n\r\nOnce these are resolved, ping me again and I can review :)",
"@amyeroberts Sorry for bothering, but I'm having a hard time with the circleCi testing. So, I'm having problems on repo consistency (as you mentioned before), but if I do run the script `make fix-copies` it change other models files (3 of them), and I think this would be scaping the Issue scope.\r\n\r\nAbout the tests, I'm getting the following output:\r\n```\r\nFAILED tests/models/detr/test_modeling_detr.py::DetrModelTest::test_attention_outputs - RuntimeError: The size of tensor a (12) must match the size of tensor b (49) at non-singleton dimension 1\r\nFAILED tests/models/detr/test_modeling_detr.py::DetrModelTest::test_determinism - RuntimeError: The size of tensor a (12) must match the size of tensor b (49) at non-singleton dimension 1\r\nFAILED tests/models/detr/test_modeling_detr.py::DetrModelTest::test_detr_model - RuntimeError: The size of tensor a (12) must match the size of tensor b (49) at non-singleton dimension 1\r\nFAILED tests/models/detr/test_modeling_detr.py::DetrModelTest::test_detr_no_timm_backbone - RuntimeError: The size of tensor a (12) must match the size of tensor b (49) at non-singleton dimension 1\r\nFAILED tests/models/detr/test_modeling_detr.py::DetrModelTest::test_detr_object_detection_head_model - RuntimeError: The size of tensor a (12) must match the size of tensor b (49) at non-singleton dimension 1\r\nFAILED tests/models/detr/test_modeling_detr.py::DetrModelTest::test_different_timm_backbone - RuntimeError: The size of tensor a (12) must match the size of tensor b (49) at non-singleton dimension 1\r\nFAILED tests/models/detr/test_modeling_detr.py::DetrModelTest::test_feed_forward_chunking - RuntimeError: The size of tensor a (12) must match the size of tensor b (49) at non-singleton dimension 1\r\nFAILED tests/models/detr/test_modeling_detr.py::DetrModelTest::test_greyscale_images - RuntimeError: The size of tensor a (12) must match the size of tensor b (49) at non-singleton dimension 1\r\nFAILED tests/models/detr/test_modeling_detr.py::DetrModelTest::test_hidden_states_output - RuntimeError: The size of tensor a (12) must match the size of tensor b (49) at non-singleton dimension 1\r\nFAILED tests/models/detr/test_modeling_detr.py::DetrModelTest::test_retain_grad_hidden_states_attentions - RuntimeError: The size of tensor a (12) must match the size of tensor b (49) at non-singleton dimension 1\r\nFAILED tests/models/detr/test_modeling_detr.py::DetrModelTest::test_save_load - RuntimeError: The size of tensor a (12) must match the size of tensor b (49) at non-singleton dimension 1\r\nFAILED tests/models/detr/test_modeling_detr.py::DetrModelTest::test_training - RuntimeError: The size of tensor a (12) must match the size of tensor b (49) at non-singleton dimension 1\r\n=== 12 failed, 1419 passed, 2461 skipped, 144 warnings in 163.96s (0:02:43) \r\n```\r\n\r\nThe funny thing is that I did not changed anything related to tensor sizes, since it was just naming convention",
"@Lorenzobattistela No worries, you're not bothering at all :) \r\n\r\n> if I do run the script make fix-copies it change other models files (3 of them), and I think this would be scaping the Issue scope.\r\n\r\nIt's OK, we do want the changes made by `make fix-copies` included in this PR. `make fix-copies` makes sure that changes to the code are propagated across to all part of the codebase where the logic has been copied without the tedium or riskiness of doing it manually. This allows us to keep the one file per model pattern in the library.\r\n\r\n> The funny thing is that I did not changed anything related to tensor sizes, since it was just naming convention\r\n\r\nHmmm, funny. It might be that there's a var somewhere still needing it's name changed, or it could be how the model's being called in the tests. I'd suggest picking just one test and run that with the debugger to find where the issue is coming from i.e. \r\n\r\n```\r\npytest tests/models/detr/test_modeling_detr.py::DetrModelTest::test_attention_outputs --pdb\r\n``` \r\n\r\nand comparing the tensor shapes with and without the changes in this PR to track where they're coming from. \r\n\r\n\r\n",
"@amyeroberts Got it working! It was a problem with `make fix-copies`, so some other files had to change to keep consistency and pass up the tests. Now it's all set!",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24652). All of your documentation changes will be reflected on that endpoint.",
"@amyeroberts finished doing what was discussed. I think we can also think about refactoring and add it as a function, something like `check_kwargs()` , idk.\r\nBecause it was mostly duplicated accross all files. What do you think about it?\r\n\r\nWeird, the error on CI has nothing to do with the files changed, its on other model"
] | 1,688 | 1,693 | 1,693 |
CONTRIBUTOR
| null |
# What does this PR do?
This PR refers to #19833, and it just updates some variable/docstring names. Quoting the issue, the paper mentions that the `position_embeddings` argument of the cross-attention layer corresponds to the input embeddings called `object queries`, and the `key_value_position_embeddings` argument is referred to as `spatial_position_embeddings`.
Reopening PR #23091
This PR is limited to the DETR model.
### Notes
This is my first contribution, so I'm happy to adjust anything in this PR. I ran all tests and style checks, and they all passed except for one:
`make fixup`. I got the following output:

Reading the output, I assume it is about other files using classes in modeling_detr. I'll wait for updates. I will also wait for a review regarding doc updates or more guidance.
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
https://github.com/huggingface/transformers/issues/19833
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@NielsRogge
@amyeroberts
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24652/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24652/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24652",
"html_url": "https://github.com/huggingface/transformers/pull/24652",
"diff_url": "https://github.com/huggingface/transformers/pull/24652.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24652.patch",
"merged_at": 1693296586000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24651
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24651/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24651/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24651/events
|
https://github.com/huggingface/transformers/pull/24651
| 1,788,257,949 |
PR_kwDOCUB6oc5UorZH
| 24,651 |
Update image_question_answering.py
|
{
"login": "mzamini92",
"id": 32536264,
"node_id": "MDQ6VXNlcjMyNTM2MjY0",
"avatar_url": "https://avatars.githubusercontent.com/u/32536264?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mzamini92",
"html_url": "https://github.com/mzamini92",
"followers_url": "https://api.github.com/users/mzamini92/followers",
"following_url": "https://api.github.com/users/mzamini92/following{/other_user}",
"gists_url": "https://api.github.com/users/mzamini92/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mzamini92/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mzamini92/subscriptions",
"organizations_url": "https://api.github.com/users/mzamini92/orgs",
"repos_url": "https://api.github.com/users/mzamini92/repos",
"events_url": "https://api.github.com/users/mzamini92/events{/privacy}",
"received_events_url": "https://api.github.com/users/mzamini92/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @LysandreJik ",
"Hey @mzamini92! We want these tools to be the simplest possible so that all agents can use them appropriately.\r\n\r\nI recommend pushing your tool to the Hub instead, and replacing the existing ImQA tool with yours as explained in this guide: https://huggingface.co/docs/transformers/custom_tools#replacing-existing-tools"
] | 1,688 | 1,688 | 1,688 |
NONE
| null |
In this modified version, the main changes are as follows:
The `encode` method now accepts a list of images and questions, and it returns a `DataLoader` object that batches the encoded inputs. This enables batch processing of multiple image and question pairs.
The `forward` method processes the inputs in batches using a `DataLoader` object. Each batch is sent to the device and processed by the model. The outputs are collected and concatenated along the batch dimension.
The `decode` method processes the outputs for each example in the batch and returns a list of answers.
The `description` and `inputs` sections are updated to reflect the changes and mention that the inputs should be provided as a list.
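To make the flow concrete, here is a rough, hypothetical sketch of the batched encode/forward/decode pattern described above. It is not the actual diff; the ViLT checkpoint, batch size, and helper names are illustrative assumptions only.
```python
import torch
from torch.utils.data import DataLoader
from transformers import ViltProcessor, ViltForQuestionAnswering

processor = ViltProcessor.from_pretrained("dandelin/vilt-b32-finetuned-vqa")
model = ViltForQuestionAnswering.from_pretrained("dandelin/vilt-b32-finetuned-vqa")

def encode(images, questions, batch_size=8):
    # Encode each (image, question) pair, then wrap them in a DataLoader so they
    # can be consumed batch by batch in forward().
    examples = [
        processor(images=img, text=q, return_tensors="pt")
        for img, q in zip(images, questions)
    ]
    # Identity-style collate: keep the per-example dicts as-is.
    return DataLoader(examples, batch_size=batch_size, collate_fn=list)

def forward(dataloader, device="cpu"):
    # Process every batch, collect logits and concatenate along the batch dimension.
    logits = []
    for batch in dataloader:
        for inputs in batch:
            inputs = {k: v.to(device) for k, v in inputs.items()}
            with torch.no_grad():
                logits.append(model(**inputs).logits)
    return torch.cat(logits, dim=0)

def decode(logits):
    # One answer per example.
    return [model.config.id2label[idx] for idx in logits.argmax(-1).tolist()]
```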
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24651/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24651/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24651",
"html_url": "https://github.com/huggingface/transformers/pull/24651",
"diff_url": "https://github.com/huggingface/transformers/pull/24651.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24651.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/24650
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24650/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24650/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24650/events
|
https://github.com/huggingface/transformers/issues/24650
| 1,788,250,226 |
I_kwDOCUB6oc5qlohy
| 24,650 |
CLIP pooling is not compatible with adding new tokens
|
{
"login": "okaris",
"id": 1448702,
"node_id": "MDQ6VXNlcjE0NDg3MDI=",
"avatar_url": "https://avatars.githubusercontent.com/u/1448702?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/okaris",
"html_url": "https://github.com/okaris",
"followers_url": "https://api.github.com/users/okaris/followers",
"following_url": "https://api.github.com/users/okaris/following{/other_user}",
"gists_url": "https://api.github.com/users/okaris/gists{/gist_id}",
"starred_url": "https://api.github.com/users/okaris/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/okaris/subscriptions",
"organizations_url": "https://api.github.com/users/okaris/orgs",
"repos_url": "https://api.github.com/users/okaris/repos",
"events_url": "https://api.github.com/users/okaris/events{/privacy}",
"received_events_url": "https://api.github.com/users/okaris/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false |
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
] |
[
"Are you taking this on @ydshieh ? 😉 ",
"Not yet started, but self-assign so I won't forget. ",
"@okaris\r\n\r\nIf I understand correctly, the goal is to use the (fixed) eos id, rather than using `argmax`. Is this right?",
"@patrickvonplaten We need your experiste on `diffusers` for this issue 🙏 Thank you.",
"@ydshieh correct, because newly added tokens to the tokenizer take ids bigger than the eos id. The tokenizer config has the correct information but might not be readily available in the text model",
"Thanks @okaris \r\n\r\nYes, the (text) model config file has `eos_token_id` being `2` which is even worse situation. We can probably try to use `vocab_size - 1`, but I want to have some discussion with the team first to take action.",
"Oh yes, the text encoder has the wrong one. Length - 1 might not work if you are adding more tokens. The easy solution is to expose the eos id in the encoder model so it can be changed from the outside",
"This is indeed a problem here - @patil-suraj can you take a look here? ",
"Once we add more tokens (so `vocab_size` will change), I think one way to replace the `argmax` is to use `vocab_size - config.num_extra_tokens - 1`, where `num_extra_tokens` is a new attribute added to the config (default value would be `0`).\r\n\r\nHappy to see if there are better/clean solution ideas.",
"when adding a token, the input embeddings are (and must be) resized. that length could also be used",
"I might be wrong, but isn't the new length of the (resized) embedding layer just the new `vocab_size`?",
"Thanks a lot for the issue @okaris !\r\n\r\nIMO, updating the `eos_token_id` in the config would be better than adding a new `config` attribute. As far as I can tell, this should not break anything because `config` is never really used for tokenization, the `config` is used to get the `eos_token_id` if we are doing generation, but the CLIP model is not used for generation and also the current `config.eos_token_id` is incorrect, so updating the `eos_token_id` should be safe. We can send a mass PR on the hub to update this (cc @patrickvonplaten ) \r\n\r\nWhat do you think @ydshieh @patrickvonplaten ?",
"> the CLIP model is not used for generation --> not break anything \r\n\r\nSound correct! Let's have some word from the core maintainers (@amyeroberts and @sgugger) however.\r\n\r\n",
"Even if we don't use the `eos_token_id` from the config doesn't mean nobody else does! \r\n\r\nThat being said, as @patil-suraj points out, the `eos_token_id` is wrong. I don't think it could be meaningfully or correctly used anywhere so happy for it to be updated. \r\n\r\nIt makes me think we should add some tests to make sure the model and tokenizer mappings are aligned when added to the library - at least as an integration check for an example checkpoint. ",
"FYI: the inconsistency between config and the tokenier/processor is (one of) the main reason we **had** trouble in pipeline testing (using tiny models). I had make some extra work to avoid this problem (in the context of creating tiny models)",
"Fixed by #24777 \r\n\r\n@okaris \r\n\r\nLet me know if this works well in your case, thank you!",
"Thanks @ydshieh looks like it will work for me as well. "
] | 1,688 | 1,689 | 1,689 |
NONE
| null |
### System Info
Feature request (Duplicate of #21029)
For textual inversion in diffusers, we are adding tokens that have a higher token id than the eos token. So when we get CLIP embeddings for textual inversion tokens, we need to change the pooling so that it picks the eos token and not the argmax token.
Motivation
This is an issue that should be fixed as the clip embeddings won't work once we add more tokens to the tokenizer. This hasn't been a huge issue so far because most models use the hidden layers directly but [the new paper on SDXL](https://github.com/Stability-AI/generative-models/blob/main/assets/sdxl_report.pdf) also mentions using the pooled output now.
@ArthurZucker @younesbelkada
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. Add new token to tokenizer
2. Encode tokens with CLIPTextModel
3. Get pooled output
### Expected behavior
Pooled output considers the added token ids vs eos id instead of argmax
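As an illustration, a minimal sketch of the kind of pooling change being asked for (this is an assumption about the fix, not the current CLIP implementation):
```python
import torch

def pool_eos(last_hidden_state, input_ids, eos_token_id):
    # Locate the first eos token in each sequence instead of relying on argmax,
    # which breaks once added tokens receive ids larger than the eos id.
    eos_positions = (input_ids == eos_token_id).int().argmax(dim=-1)
    batch_indices = torch.arange(last_hidden_state.shape[0], device=last_hidden_state.device)
    return last_hidden_state[batch_indices, eos_positions]
```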
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24650/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24650/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24649
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24649/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24649/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24649/events
|
https://github.com/huggingface/transformers/pull/24649
| 1,788,178,229 |
PR_kwDOCUB6oc5UoaE6
| 24,649 |
Update warning messages reffering to post_process_object_detection
|
{
"login": "rafaelpadilla",
"id": 31217453,
"node_id": "MDQ6VXNlcjMxMjE3NDUz",
"avatar_url": "https://avatars.githubusercontent.com/u/31217453?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rafaelpadilla",
"html_url": "https://github.com/rafaelpadilla",
"followers_url": "https://api.github.com/users/rafaelpadilla/followers",
"following_url": "https://api.github.com/users/rafaelpadilla/following{/other_user}",
"gists_url": "https://api.github.com/users/rafaelpadilla/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rafaelpadilla/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rafaelpadilla/subscriptions",
"organizations_url": "https://api.github.com/users/rafaelpadilla/orgs",
"repos_url": "https://api.github.com/users/rafaelpadilla/repos",
"events_url": "https://api.github.com/users/rafaelpadilla/events{/privacy}",
"received_events_url": "https://api.github.com/users/rafaelpadilla/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,688 | 1,688 | 1,688 |
CONTRIBUTOR
| null |
# What does this PR do?
Noticed that `post_process` will be replaced by `post_process_object_detection` in v5.
However, the (old) `post_process` does not threshold the bounding box scores (it has the same effect as using `threshold=0`).
But the (new) `post_process_object_detection` has a threshold parameter which, depending on the model, has different default values.
When this change occurs, users will have fewer boxes detected if the default threshold of `post_process_object_detection` is not `0`.
This PR includes:
1) Mentioning the threshold in the existing warning messages of vision models, so that when users stop calling `post_process` and start calling `post_process_object_detection`, their results will not be affected.
2) Changing `owlvit.md`, as it was not making use of the (new) `post_process_object_detection`.
I searched for other .md files and docstrings that will be affected when `post_process` stops working, but noticed that only `owlvit.md` will produce wrong results if `post_process_object_detection` is not called with the correct threshold. All others (e.g. `modeling_conditional_detr.py`, `modeling_deformable_detr.py`, `modeling_deta.py`, `modeling_detr.py`, `zero_shot_object_detection.md`, etc.) already explicitly use a threshold and won't be affected.
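As a concrete example of the equivalence mentioned above (the checkpoint and image are just for illustration), explicitly passing `threshold=0.0` reproduces the old `post_process` behaviour:
```python
import torch
import requests
from PIL import Image
from transformers import DetrForObjectDetection, DetrImageProcessor

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

processor = DetrImageProcessor.from_pretrained("facebook/detr-resnet-50")
model = DetrForObjectDetection.from_pretrained("facebook/detr-resnet-50")

outputs = model(**processor(images=image, return_tensors="pt"))

# threshold=0.0 keeps every predicted box, matching the old `post_process`;
# the new default threshold would silently drop low-scoring boxes.
target_sizes = torch.tensor([image.size[::-1]])
results = processor.post_process_object_detection(outputs, threshold=0.0, target_sizes=target_sizes)
```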
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@amyeroberts
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24649/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24649/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24649",
"html_url": "https://github.com/huggingface/transformers/pull/24649",
"diff_url": "https://github.com/huggingface/transformers/pull/24649.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24649.patch",
"merged_at": 1688500078000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24648
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24648/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24648/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24648/events
|
https://github.com/huggingface/transformers/pull/24648
| 1,788,049,034 |
PR_kwDOCUB6oc5Un99Z
| 24,648 |
Enable `conversational` pipeline for `GPTSw3Tokenizer`
|
{
"login": "saattrupdan",
"id": 47701536,
"node_id": "MDQ6VXNlcjQ3NzAxNTM2",
"avatar_url": "https://avatars.githubusercontent.com/u/47701536?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/saattrupdan",
"html_url": "https://github.com/saattrupdan",
"followers_url": "https://api.github.com/users/saattrupdan/followers",
"following_url": "https://api.github.com/users/saattrupdan/following{/other_user}",
"gists_url": "https://api.github.com/users/saattrupdan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/saattrupdan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/saattrupdan/subscriptions",
"organizations_url": "https://api.github.com/users/saattrupdan/orgs",
"repos_url": "https://api.github.com/users/saattrupdan/repos",
"events_url": "https://api.github.com/users/saattrupdan/events{/privacy}",
"received_events_url": "https://api.github.com/users/saattrupdan/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Hi @saattrupdan, thanks for this contribution and opening this PR.\r\n\r\nAs it stands, this isn't a change that we'd accept to be merged in. A few notes on why: \r\n* The pipelines are a higher level abstraction than the tokenizers, and so shouldn't be imported into a tokenizer's module. \r\n* The job of the tokenizer is to prepare raw text inputs for the model and decode its predicted tokens. `_build_conversation_input_ids` is higher level logic that belongs outside the class in e.g. a custom script. \r\n* It's not necessary to add `load_in_4bit` to the pipeline - the model can be instantiated with `ModelClass.from_pretrained(checkpoint, load_in_4bit=True)` and then passed into the pipeline. We try to keep the number of arguments in our public APIs as small as possible. \r\n* I think there might be a conflicting configuration, auto formatting from an IDE or different package version, but the line split changes in the PR shouldn't be there.\r\n\r\n\r\n\r\n\r\n\r\n\r\n",
"> Hi @saattrupdan, thanks for this contribution and opening this PR.\r\n\r\nThanks for your review @amyeroberts! \r\n\r\n> * The pipelines are a higher level abstraction than the tokenizers, and so shouldn't be imported into a tokenizer's module.\r\n\r\nI've fixed this now, via @ArthurZucker's suggestion.\r\n\r\n> * The job of the tokenizer is to prepare raw text inputs for the model and decode its predicted tokens. `_build_conversation_input_ids` is higher level logic that belongs outside the class in e.g. a custom script.\r\n\r\nI'm a bit confused by this, as this method already exists for 9-10 tokenizers in the package (such as GPT2, Bloom, GPT-neox and more), and is also required by the conversational pipeline [here](https://github.com/huggingface/transformers/blob/469f4d0c29275473daf0627a0b26ec05256e47d2/src/transformers/pipelines/conversational.py#L256-L257). \r\n\r\n> * It's not necessary to add `load_in_4bit` to the pipeline - the model can be instantiated with `ModelClass.from_pretrained(checkpoint, load_in_4bit=True)` and then passed into the pipeline. We try to keep the number of arguments in our public APIs as small as possible.\r\n\r\nThat's fair enough if that's a design goal, I've removed it now. I just liked the idea of being able to instantiate a pipeline without having to load in the model first 🙂 \r\n\r\n> * I think there might be a conflicting configuration, auto formatting from an IDE or different package version, but the line split changes in the PR shouldn't be there.\r\n\r\nAh right, I just thought it was a mistake that an 88 character line limit wasn't enforced - I've reverted the changes back now I think!",
"@saattrupdan @ArthurZucker OK, my bad, I hadn't noticed the `_build_conversation_input_ids` before - happy for that to be added then :) ",
"@ArthurZucker All formatting changes have been reversed now too 🙂 ",
"Really nice! \r\n\r\nA quick comment from one of the developers of GPT-SW3, and the one responsible for the tokenization pipline.\r\n\r\nSince there's a mismatch between the huggingface tokenizer and the sentencepiece tokenizer used during training, and how they treat special tokens, I'm a bit wary of this PR as it stands right now. To better match the training-procedure, each turn should be tokenized in isolation by the underlying sp_model, and joined with <bos>-tokens. This might result in the same thing, but I'm not 100% sure :sweat_smile: \r\n\r\n ",
"Regarding the special token issue, do you have small reproducer? I can have a look if needed! Currently working on our sentencepiece compatibility issues ",
"> @Apsod Since there's a mismatch between the huggingface tokenizer and the sentencepiece tokenizer used during training, and how they treat special tokens, I'm a bit wary of this PR as it stands right now. To better match the training-procedure, each turn should be tokenized in isolation by the underlying sp_model, and joined with -tokens. This might result in the same thing, but I'm not 100% sure 😅\r\n\r\nI just did some experiments to check this. The underlying sentencepiece model cannot deal with the special tokens, since these are dealt with by the `tokens_trie`, which is used in the `tokenize` method. Here's a sanity check:\r\n\r\n```python\r\n>>> tokenizer.tokens_trie.data\r\n{'<': {'p': {'a': {'d': {'>': {'': 1}}}}, 's': {'>': {'': 1}}, 'u': {'n': {'k': {'>': {'': 1}}}}, '|': {'e': {'n': {'d': {'o': {'f': {'t': {'e': {'x': {'t': {'|': {'>': {'': 1}}}}}}}}}}}}}}\r\n```\r\n\r\nWe see that it correctly deals with `<pad>`, `<s>`, `<unk>` and `<|endoftext|>` special tokens. The `encode` method uses the `encode_plus` method, which uses the `_encode_plus` method, which finally uses the `tokenize` method, so using `encode` should be fine here, I think.\r\n\r\nNote that, in the `tokenize` method, after the special tokens have been removed using the `tokens_trie`, the underlying `_tokenize` method is used to do the actual tokenization, which is implemented in the `GPTSw3Tokenizer` as\r\n\r\n```python\r\ndef _tokenize(self, text: str, **kwargs) -> List[str]:\r\n text = self.preprocess_text(text)\r\n return self.sp_model.encode(text, out_type=str)\r\n```\r\n\r\nIf I replace the `self.encode` with `self.sp_model.encode` in the new function that's being added in this PR, then I end up with an incompatible tokenization:\r\n\r\n```python\r\n>>> tokenizer.sp_model.encode('<s>Hej med dig<|endoftext|>', out_type=str)\r\n['▁<', 's', '>', 'Hej', '▁med', '▁dig', '<', '|', 'end', 'of', 'text', '|', '>']\r\n```\r\n\r\nIf I'm completely missing the point here, @Apsod, then please let me know 🙂 ",
"> If I replace the `self.encode` with `self.sp_model.encode` in the new function that's being added in this PR, then I end up with an incompatible tokenization:\r\n> \r\n> ```python\r\n> >>> tokenizer.sp_model.encode('<s>Hej med dig<|endoftext|>', out_type=str)\r\n> ['▁<', 's', '>', 'Hej', '▁med', '▁dig', '<', '|', 'end', 'of', 'text', '|', '>']\r\n> ```\r\n\r\nThis is an edge-case where the semantic discrepancy between sentencepiece and huggingface tokenization leads to different results.\r\n\r\nIf we encounter `<|endoftext|>` in text and tokenizes this using sentencepiece (as was done during training), it would tokenize this as `<, |, end, of, text, |, >` and not as the special eos-token, since in sentencepiece, special tokens are not textual and can never be produced by tokenizing text. \r\n\r\nI think there's also differences in how sentencepice treats the initial token after a special token (due to whitespace-prefix-stuff), which leads to a general mismatch between the tokenizers: \r\n\r\n```\r\nTEXT = \"\"\"\r\n<|endoftext|><s>\r\nHej\r\n<s>\r\nHoj\r\n\"\"\".strip()\r\nprint(tokenizer.decode(tokenizer.encode(TEXT))\r\n# will print out the following:\r\n# <|endoftext|><s> Hej<s>Hoj\r\n```\r\n\r\nEDIT:\r\n\r\nA simpler example of weird interactions between whitespace and special tokens:\r\n```\r\nTEXT = \"\"\" Hej <s>\"\"\"\r\n\r\nprint('\"', TEXT, '\"', sep='')\r\nprint('\"', tokenizer.decode(tokenizer.encode(TEXT)), '\"', sep='')\r\n```\r\nResults in: \r\n\r\n```\r\n\" Hej <s>\"\r\n\" Hej<s>\"\r\n```",
"@Apsod Thanks for the clarification. Just tried inspecting the result of using the `encode` method, and it removes some of the newline symbols. More specifically,\r\n\r\n```python\r\nprompt = \"<|endoftext|><s>\\nUser:\\nJag tycker träd är fina\\n<s>\\nBot:\\n\"\r\n```\r\n\r\nis being tokenised as `[3, 2, 15088, 63458, 18, 3947, 1886, 7590, 377, 6173, 2, 22493, 63458, 18]`, which translates token-by-token to \"<|endoftext|>\\<s\\>User:\\nJag tycker träd är fina\\<s\\>Bot:\\n\". Notably, all newlines adjacent to a BOS token have been removed when encoded with this method.\r\n\r\nI have been chatting to Amaru from the AI Sweden team (which might be you @Apsod? User names are always confusing!), and he said that they actually used multiple different prompts, sampled stochastically during training:\r\n\r\n```\r\n<eos><bos>{A}User:{B}{Query}{C}<bos>{A}Bot:{B}{Response}{C}...\r\nA ~ [\"\\n\", \"\"]\r\nB ~ [\"\\n\", \" \"]\r\nC ~ [\"\\n\", \"\"]\r\n```\r\n\r\nWith this flexibility in mind, I propose that we change the above prompt to the following:\r\n\r\n```python\r\nprompt = \"<|endoftext|><s>User: Jag tycker träd är fina<s>Bot: \"\r\n```\r\n\r\nI compared the encodings of the `encode` and `sp_model.encode` methods, and they now yield equivalent tokens. Here's the code that I ran to check:\r\n\r\n```python\r\nall_responses_encoded = [self.sp_model.encode(response) for response in all_responses]\r\nsp_encoded_prompt = [self.eos_token_id, self.bos_token_id]\r\nfor response in all_responses_encoded:\r\n sp_encoded_prompt += response + [self.bos_token_id]\r\nsp_encoded_prompt += self.sp_model.encode(\"Bot: \")\r\n\r\nprompt = (\r\n f\"{self.eos_token}{self.bos_token}\"\r\n + f\"{self.bos_token}\".join(all_responses)\r\n + f\"{self.bos_token}Bot: \"\r\n)\r\nhf_encoded_prompt = self.encode(text=prompt)\r\n\r\nassert sp_encoded_prompt == hf_encoded_prompt\r\n```\r\n\r\nAnother thing: I looked into the mysterious extra whitespace added during decoding, and found that it's all due to these two lines in the `GPTSw3Tokenizer.convert_tokens_to_string` method ([link](https://github.com/huggingface/transformers/blob/66a378429d0e085e4e72bc63a4147889a3b65a14/src/transformers/models/gpt_sw3/tokenization_gpt_sw3.py#L233-L234)):\r\n```\r\nif not prev_is_special:\r\n out_string += \" \"\r\n```\r\n\r\nIs there any reason for this, or should it just be removed to ensure that `tokenizer.decode(tokenizer.encode(doc)) == doc`?",
"Looks good to me! \r\nThe only outstanding issue then is special-token-injection, but I guess that is a more general HF-issue? ",
"> Looks good to me! The only outstanding issue then is special-token-injection, but I guess that is a more general HF-issue?\r\n\r\n@Apsod Great. I've changed the prompt now. I also added a TODO comment to clarify whether [these two lines](https://github.com/huggingface/transformers/blob/66a378429d0e085e4e72bc63a4147889a3b65a14/src/transformers/models/gpt_sw3/tokenization_gpt_sw3.py#L233-L234) are needed, as they break the decode(encode(doc)) == doc consistency. But that can be dealt with in another PR, if needed.",
"@amyeroberts @ArthurZucker I cannot seem to merge in this PR - do any of you need to re-approve it first?",
"@saattrupdan Yes, the branch is protected so that only certain people can merge. It also needs an approval from a core maintainer (me in this case :) )\r\n\r\nMerging for you now. Thanks again for this contribution! ",
"Also regarding why spaces before / after special tokens is eating in the slow version of transformers:\r\n- `add_tokens` does not support changing `lstrip` and `rstrip` thus by default it will strip. A fix is on its way here #23909 \r\n- text after special tokens is not properly handled. This leads to addition of spaces. A fix is also on its way for T5 and Llama but should be pushed to all `spm` based models. #24622 "
] | 1,688 | 1,688 | 1,688 |
CONTRIBUTOR
| null |
# What does this PR do?
The `ConversationalPipeline` is great for easily running dialogue models, and also enables smooth interfaces in the associated Hugging Face Hub widget. These seem to require a `_build_conversation_input_ids` method on the associated tokenizer, however, which takes a `Conversation` object and encodes it into the chat format that the model was trained on.
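Concretely, for GPT-SW3 the method has to turn the conversation history into the instruct-tuned prompt format and encode it. Below is a rough sketch of what that can look like, based on the prompt format discussed in the review comments; it is illustrative only and not necessarily the exact merged implementation:
```python
from typing import List

# Sketch only: `self` is assumed to be the GPTSw3Tokenizer instance.
def _build_conversation_input_ids(self, conversation) -> List[int]:
    # Format each turn as "User: ..." or "Bot: ...", join the turns with BOS,
    # and end with a trailing "Bot: " cue so generation continues as the bot.
    all_responses = [
        f"User: {text}" if is_user else f"Bot: {text}"
        for is_user, text in conversation.iter_texts()
    ]
    prompt = (
        f"{self.eos_token}{self.bos_token}"
        + f"{self.bos_token}".join(all_responses)
        + f"{self.bos_token}Bot: "
    )
    return self.encode(text=prompt)
```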
With this change, we can now easily use the GPT-SW3 models. Here's an example of asking a single question:
```python
from transformers import pipeline, Conversation
chatbot = pipeline(model="AI-Sweden-Models/gpt-sw3-20b-instruct")
conversation = chatbot(Conversation("Hvad hedder du?"))
output = conversation.generated_responses[-1]
print(output)
```
And here is an example with a never-ending multi-turn dialogue session:
```python
from transformers import pipeline, Conversation
chatbot = pipeline(model="AI-Sweden-Models/gpt-sw3-20b-instruct")
conversation = Conversation()
while True:
user_input = input('> ')
conversation.add_user_input(user_input)
conversation = chatbot(conversation)
output = conversation.generated_responses[-1]
print(output)
```
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@Narsil @ArthurZucker @YouJiacheng @ekgren
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24648/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24648/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24648",
"html_url": "https://github.com/huggingface/transformers/pull/24648",
"diff_url": "https://github.com/huggingface/transformers/pull/24648.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24648.patch",
"merged_at": 1688755942000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24647
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24647/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24647/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24647/events
|
https://github.com/huggingface/transformers/pull/24647
| 1,787,952,854 |
PR_kwDOCUB6oc5Uno0G
| 24,647 |
documentation_tests.txt - sort filenames alphabetically
|
{
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@ydshieh I added a quick check in `utils/check_doctest_list.py` - let me know what you think :)",
"Yes! Eventually, we can provide a fix option to modify the file to sort and deduplicate the lines. But again, the PR itself is complete and can be merged already."
] | 1,688 | 1,688 | 1,688 |
COLLABORATOR
| null |
# What does this PR do?
Reorganises the file names listed in `documentation_tests.txt` so that they are in alphabetical order. This is to address two things:
* Make it obvious where to add new files
* Make it easier to spot if certain files are missing. For example, I didn't notice until recently that modeling_imagegpt.py wasn't included (its config was). A rough sketch of the kind of check this enables is included below.
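The sketch below is illustrative only (the actual check was added separately in `utils/check_doctest_list.py`, as discussed in the comments; the helper name and default path here are simplified assumptions):
```python
# Illustrative sketch: verify that a newline-separated file list is sorted and de-duplicated.
from pathlib import Path

def check_sorted_and_unique(path: str = "utils/documentation_tests.txt") -> None:
    lines = [line.strip() for line in Path(path).read_text().splitlines() if line.strip()]
    duplicates = sorted({line for line in lines if lines.count(line) > 1})
    if duplicates:
        raise ValueError(f"Duplicated entries found in {path}: {duplicates}")
    if lines != sorted(lines):
        raise ValueError(f"{path} is not sorted alphabetically.")
```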
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24647/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24647/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24647",
"html_url": "https://github.com/huggingface/transformers/pull/24647",
"diff_url": "https://github.com/huggingface/transformers/pull/24647.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24647.patch",
"merged_at": 1688486765000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24646
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24646/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24646/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24646/events
|
https://github.com/huggingface/transformers/issues/24646
| 1,787,932,338 |
I_kwDOCUB6oc5qka6y
| 24,646 |
TrainingArguments.report_to is not configured as documented
|
{
"login": "wilke0818",
"id": 39885245,
"node_id": "MDQ6VXNlcjM5ODg1MjQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/39885245?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wilke0818",
"html_url": "https://github.com/wilke0818",
"followers_url": "https://api.github.com/users/wilke0818/followers",
"following_url": "https://api.github.com/users/wilke0818/following{/other_user}",
"gists_url": "https://api.github.com/users/wilke0818/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wilke0818/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wilke0818/subscriptions",
"organizations_url": "https://api.github.com/users/wilke0818/orgs",
"repos_url": "https://api.github.com/users/wilke0818/repos",
"events_url": "https://api.github.com/users/wilke0818/events{/privacy}",
"received_events_url": "https://api.github.com/users/wilke0818/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi!\r\n\r\nThe actual default value is `None`, but it is set to `all` in this case, so it still corresponds to the doc.\r\n\r\nSee\r\n\r\nhttps://github.com/huggingface/transformers/blob/cd4584e3c809bb9e1392ccd3fe38b40daba5519a/src/transformers/training_args.py#L1422-L1428",
"Hi, I appreciate you taking the time! Not sure how I missed that piece of code. I also realized the reason my code wasn't working was related to how remove_callback works when sending an instance vs. the type.",
"Hi - I had a question about this. It still is confusing to me that report_to = None is switched to \"all\". The docstring seems to suggest that will change, but the current version still has this behavior. Seems like a small thing, but I'm not sure I understand why None gets translated in this way."
] | 1,688 | 1,704 | 1,688 |
NONE
| null |
### System Info
- `transformers` version: 4.30.2
- Platform: Linux-5.15.90.1-microsoft-standard-WSL2-x86_64-with-glibc2.17
- Python version: 3.8.16
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.1.0.dev20230616 (True)
- Tensorflow version (GPU?): 2.12.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Distributed?
### Who can help?
@stevhliu
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
https://github.com/wilke0818/i3_speech_emotion_recognition/ - this repo has code that demonstrates creating a Trainer with different training arguments while leaving report_to at its default value.
Generally, creating even a basic TrainingArguments instance, passing it to any Trainer instance, and then running trainer.remove_callback(WandbCallback()) will error, saying that the callback is not there. This is actually what I want; however, it goes against the documented behaviour.
See documentation here: https://github.com/huggingface/transformers/blob/cd4584e3c809bb9e1392ccd3fe38b40daba5519a/src/transformers/training_args.py#L499
The actual default value here: https://github.com/huggingface/transformers/blob/cd4584e3c809bb9e1392ccd3fe38b40daba5519a/src/transformers/training_args.py#L1030
Which is then used in Trainer instantiation: https://github.com/huggingface/transformers/blob/cd4584e3c809bb9e1392ccd3fe38b40daba5519a/src/transformers/trainer.py#L539
Which finally gives us that no report_to's are used: https://github.com/huggingface/transformers/blob/cd4584e3c809bb9e1392ccd3fe38b40daba5519a/src/transformers/integrations.py#L1613
### Expected behavior
Based on the documentation I would expect that when setting up a trainer all installed Callback packages for reporting will be used unless the user specifies otherwise.
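One way to make the behaviour explicit, whichever default applies, is to configure the integrations directly. A small sketch (not part of the report above): `report_to="none"` disables all integrations, and `remove_callback` also accepts the callback class instead of an instance.
```python
from transformers import TrainingArguments
from transformers.integrations import WandbCallback

# Disable every reporting integration explicitly instead of relying on the default.
args = TrainingArguments(output_dir="test_trainer", report_to="none")

# If a callback was registered, remove it by class; passing a fresh instance
# (e.g. WandbCallback()) will not match the instance stored inside the trainer.
# trainer.remove_callback(WandbCallback)
```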
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24646/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24646/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24645
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24645/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24645/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24645/events
|
https://github.com/huggingface/transformers/pull/24645
| 1,787,774,637 |
PR_kwDOCUB6oc5UnCHw
| 24,645 |
[WIP] Add LaVIN
|
{
"login": "shauray8",
"id": 39147312,
"node_id": "MDQ6VXNlcjM5MTQ3MzEy",
"avatar_url": "https://avatars.githubusercontent.com/u/39147312?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shauray8",
"html_url": "https://github.com/shauray8",
"followers_url": "https://api.github.com/users/shauray8/followers",
"following_url": "https://api.github.com/users/shauray8/following{/other_user}",
"gists_url": "https://api.github.com/users/shauray8/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shauray8/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shauray8/subscriptions",
"organizations_url": "https://api.github.com/users/shauray8/orgs",
"repos_url": "https://api.github.com/users/shauray8/repos",
"events_url": "https://api.github.com/users/shauray8/events{/privacy}",
"received_events_url": "https://api.github.com/users/shauray8/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @shauray8, thanks for opening this PR! \r\n\r\nThe easiest and recommended way to make a model available in `transformers` is to add the modeling code directly on the hub: https://huggingface.co/docs/transformers/custom_models\r\n\r\nThis means, once working, the model can be found and used immediately without having to go through the PR process. We find this is a lot quicker as the bar for adding code into the library is high due to the maintenance cost of every new model, and so reviews take quite a while.",
"Hi @amyeroberts, That makes sense, I have not seen a lot of people use this particular model. I'll make all the necessary changes and add it to the hub. But if there's anything I can help with to improve HuggingFace I'm more than happy to do it."
] | 1,688 | 1,689 | 1,689 |
CONTRIBUTOR
| null |
# What does this PR do?
Adds the LaVIN model from - https://arxiv.org/pdf/2305.15023.pdf <br>
Model description - LaVIN is a vision-language instructed model that is affordable to train (it was trained in a few hours on 8 A100 GPUs) with good performance on ScienceQA.
Fixes issue #23846
## Who can review?
Models:
@amyeroberts @ArthurZucker
** Draft ** (Maintainers and reviewers can go through the PR as and when needed; I will ping the reviewers once the PR is ready. Guidance/questions/concerns related to the PR are always welcome.)
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24645/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24645/timeline
| null | true |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24645",
"html_url": "https://github.com/huggingface/transformers/pull/24645",
"diff_url": "https://github.com/huggingface/transformers/pull/24645.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24645.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/24644
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24644/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24644/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24644/events
|
https://github.com/huggingface/transformers/issues/24644
| 1,787,697,426 |
I_kwDOCUB6oc5qjhkS
| 24,644 |
'eos_token_id' for llama model.generate is not working
|
{
"login": "devymex",
"id": 1797836,
"node_id": "MDQ6VXNlcjE3OTc4MzY=",
"avatar_url": "https://avatars.githubusercontent.com/u/1797836?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/devymex",
"html_url": "https://github.com/devymex",
"followers_url": "https://api.github.com/users/devymex/followers",
"following_url": "https://api.github.com/users/devymex/following{/other_user}",
"gists_url": "https://api.github.com/users/devymex/gists{/gist_id}",
"starred_url": "https://api.github.com/users/devymex/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/devymex/subscriptions",
"organizations_url": "https://api.github.com/users/devymex/orgs",
"repos_url": "https://api.github.com/users/devymex/repos",
"events_url": "https://api.github.com/users/devymex/events{/privacy}",
"received_events_url": "https://api.github.com/users/devymex/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @ArthurZucker ",
"Hey! A few things to note: \r\n- `LlamaTokenizerFast` (which you are using through the `AutoTokenizer` API) has been fixed here #24042, addressing the issue with special tokens being encode. \r\n- You are not sharing any repo, so we can't reproduce potential bugs. \r\n- `it always ignores the </s> as the ending token ` what does that mean? Does the generation not stop? Then have a look here #22794. \r\n- `skip_special_tokens` will work if you have the correct version of LlamaTokenizer. \r\n- If you wish to add the ending token in your prompt, set `add_eos_token` to `True`. It will be done automatically\r\n\r\nHere is a working snippet:\r\n```python \r\nfrom transformers import LlamaTokenizer, AutoModelForCausalLM, AutoTokenizer\r\nweights_dir = \"huggyllama/llama-7b\"\r\nquestion = 'Hello, there!'\r\n\r\n# if you want to add eos, set `add_eos_token=True`\r\ntokenizer = LlamaTokenizer.from_pretrained(weights_dir, add_eos_token=True)\r\nquestion_ids = tokenizer.encode(question, return_tensors='pt')\r\nprint(question_ids)\r\n# tensor([[ 1, 15043, 29892, 727, 29991, 2]])\r\nprint( tokenizer.decode(question_ids[0], skip_special_tokens = True))\r\n# 'Hello, there!'\r\n\r\n\r\n# if you are not using the correct version of tokenizer, special tokens are wrong\r\ntokenizer = AutoTokenizer.from_pretrained(weights_dir, add_eos_token=True)\r\nprint(tokenizer.is_fast)\r\n# True\r\nquestion_ids = tokenizer.encode('Hello, there!</s>', return_tensors='pt')\r\nprint(question_ids)\r\n# tensor([[ 1, 15043, 29892, 727, 29991, 829, 29879, 29958, 2]])\r\nquestion_ids = tokenizer.encode('Hello, there! </s>', return_tensors='pt')\r\n# tensor([[ 1, 15043, 29892, 727, 29991, 2, 2]])\r\nprint(question_ids)\r\n```",
"@ArthurZucker Many thanks! `add_eos_token=True` did the trick!"
] | 1,688 | 1,688 | 1,688 |
NONE
| null |
### System Info
- `transformers` version: 4.30.2
- Platform: Linux-5.4.0-137-generic-x86_64-with-glibc2.31
- Python version: 3.10.0
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
import transformers, torch
weights_dir = "weights/recovered"
question = 'Hello, there!'
model = transformers.AutoModelForCausalLM.from_pretrained(weights_dir)
model = model.cuda()
print(model.config)
# LlamaConfig {
# "_name_or_path": "weights/recovered",
# "architectures": [
# "LlamaForCausalLM"
# ],
# "bos_token_id": 1,
# "eos_token_id": 2,
# "hidden_act": "silu",
# "hidden_size": 4096,
# "initializer_range": 0.02,
# "intermediate_size": 11008,
# "max_position_embeddings": 2048,
# "model_type": "llama",
# "num_attention_heads": 32,
# "num_hidden_layers": 32,
# "pad_token_id": 0,
# "rms_norm_eps": 1e-06,
# "tie_word_embeddings": false,
# "torch_dtype": "float32",
# "transformers_version": "4.30.2",
# "use_cache": true,
# "vocab_size": 32001
# }
tokenizer = transformers.AutoTokenizer.from_pretrained(weights_dir)
question_ids = tokenizer.encode(question + tokenizer.eos_token, return_tensors='pt')
question_ids = question_ids.cuda()
print(tokenizer.eos_token_id, tokenizer.bos_token_id, tokenizer.pad_token_id)
# 2, 1, 32000
print(question_ids)
# tensor([[ 1, 15043, 29892, 727, 29991, 829, 29879, 29958]],
#        device='cuda:0')
print(tokenizer.decode(question_ids[0]))
# <s> Hello, there!</s>
outputs = model.generate(
question_ids,
eos_token_id=2,
max_new_tokens=200,
num_beams=4,
num_return_sequences=2,
early_stopping=True
)
answer = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(answer)
# Hello, there!</s>
# Hello, there!</s>
# <s>Hello, there!</s>
```
No matter how I change the parameters of model.generate, it always ignores `</s>` as the ending token (id: 2).
In addition, the tokenizer's `skip_special_tokens` option is not working either.
Where am I going wrong? Please help, many thanks!
### Expected behavior
`model.generate` should stop at the first occurrence of `</s>`.
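For reference, the resolution from the discussion (the comments appear earlier in this record) was to let the tokenizer append the EOS id itself instead of pasting `</s>` into the prompt string. A short sketch reusing the names from the snippet above:
```python
# add_eos_token=True makes the tokenizer append the real eos id (2) itself;
# encoding the literal text "</s>" yields the pieces `</`, `s`, `>` instead.
tokenizer = transformers.AutoTokenizer.from_pretrained(weights_dir, add_eos_token=True)
question_ids = tokenizer.encode(question, return_tensors='pt').cuda()
outputs = model.generate(question_ids, eos_token_id=tokenizer.eos_token_id, max_new_tokens=200)
```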
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24644/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24644/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24643
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24643/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24643/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24643/events
|
https://github.com/huggingface/transformers/issues/24643
| 1,787,616,386 |
I_kwDOCUB6oc5qjNyC
| 24,643 |
"RuntimeError: 'weight' must be 2-D" training with DeepSpeed
|
{
"login": "ZizoAdam",
"id": 124168668,
"node_id": "U_kgDOB2ap3A",
"avatar_url": "https://avatars.githubusercontent.com/u/124168668?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ZizoAdam",
"html_url": "https://github.com/ZizoAdam",
"followers_url": "https://api.github.com/users/ZizoAdam/followers",
"following_url": "https://api.github.com/users/ZizoAdam/following{/other_user}",
"gists_url": "https://api.github.com/users/ZizoAdam/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ZizoAdam/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ZizoAdam/subscriptions",
"organizations_url": "https://api.github.com/users/ZizoAdam/orgs",
"repos_url": "https://api.github.com/users/ZizoAdam/repos",
"events_url": "https://api.github.com/users/ZizoAdam/events{/privacy}",
"received_events_url": "https://api.github.com/users/ZizoAdam/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 5616426447,
"node_id": "LA_kwDOCUB6oc8AAAABTsPdzw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/solved",
"name": "solved",
"color": "B1D6DC",
"default": false,
"description": ""
}
] |
closed
| false | null |
[] |
[
"Hi\r\n\r\nWhile waiting @pacman100 's comment maybe , you can check what's the shape of `self.wte`. It would be a good idea to double check if the issue also happens without the usage of deepspeed.\r\n\r\n\r\n```\r\n File \"/home/augustus/miniconda3/envs/adamTraining/lib/python3.10/site-packages/transformers/models/gptj/modeling_gptj.py\", line 634, in forward\r\n inputs_embeds = self.wte(input_ids)\r\n```",
"The issue does not happen without deepspeed, however we are unable to train without deepspeed due to not having much in the way of system resources.",
"DeepSpeed version and how are you launching the script?",
"Deepspeed 0.9.5, just launching it with ```python3 script.py```",
"Thought so, please use distributed launcher such as `torchrun`, `deepspeed` or `accelerate` when using DeepSpeed/DDP/FSDP or anytime you are doing distributed training. \r\n\r\nPlease refer: \r\n1. https://huggingface.co/docs/transformers/main_classes/deepspeed#deployment-with-multiple-gpus\r\n2. https://huggingface.co/docs/transformers/main/en/main_classes/trainer#using-accelerate-launcher-with-trainer\r\n",
"that should resolve the issue\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"i also have the same problem. also deepspeed stage3 with trainner. @ZizoAdam do u solve the problem?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"@yuxyang88 if the solution did not work for you, feel free to open a new issue with a reproducer (as small as possible) making sure you are using the lastest version of transformers.",
"> 因此,请使用分布式启动器,例如`torchrun`,`deepspeed`或`accelerate`在使用 DeepSpeed/DDP/FSDP 时或在进行分布式训练时使用。\r\n> \r\n> 请参考:\r\n> \r\n> 1. https://huggingface.co/docs/transformers/main_classes/deepspeed#deployment-with-multiple-gpus\r\n> 2. https://huggingface.co/docs/transformers/main/en/main_classes/trainer#using-accelerate-launcher-with-trainer\r\n\r\nMy program reported the same error (`RuntimeError: 'weight' must be 2-D`), but I started the distributed training with deepspeed, I do not understand your answer, why do you think it can solve the problem?",
"Hi @nomadlx\r\n\r\nPlease open a new issue with a reproducer (**as small as possible but complete**). \r\n\r\nAlso making sure you are using the lastest version of transformers / accelerate too.\r\n\r\nThanks.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"我也遇到了相同问题,在我将transformers的版本从4.35.0换到4.31.0之后问题解决了",
"I got the same issue even downgraded transformers from 4.35.0 to 4.31.0 as [Hagtaril](https://github.com/Hagtaril) commented, with deepspeed. Anyone resolved the issue? My deedspeed version is 0.10.0. It worked well without deepspeed.",
"> I got the same issue even downgraded transformers from 4.35.0 to 4.31.0 as [Hagtaril](https://github.com/Hagtaril) commented, with deepspeed. Anyone resolved the issue? My deedspeed version is 0.10.0. It worked well without deepspeed.\r\n\r\nI got a same issue and worked it out after a day. I got the issue when training DPO and PPO with Huggingface trl library. The cause of these errors roots in incorrrect initialization of deepspeed for your model. To solve this issue, you can double-check:\r\n\r\n1) Make sure calling `deepspeed` correcty (e.g. `deepspeed --num_gpus <> --master_port=<> xxx.py`when launching the training job. This should solve most of the cases if you are just training a single model.\r\n\r\n2) For trickier scenerios (training DPO or PPO), please make sure ALL models are correctly initialized with deepspeed. Huggingface's TRL library have some bugs in initializing deepspeed for the reference model, reward model, etc. So, it is safect to initialize each model with `from_pretrained` before passing to Huggingface trainer classes. On the contrary, initializing reference models with TRL or `copy.deepcopy()` all yields incorrect deepspeed initializations. You may see error like this:\r\n- `Tensors must be 2-D`\r\n- `AssertionError: {'id': 291, 'status': 'NOT_AVAILABLE', 'numel': 0, 'ds_numel': 0, 'shape': (0,), 'ds_shape': (0,), 'requires_grad': True, 'grad_shape': None, 'persist': True, 'active_sub_modules': {456}, 'ds_tensor.shape': torch.Size([0])}\r\n: {'id': 291, 'status': 'NOT_AVAILABLE', 'numel': 0, 'ds_numel': 0, 'shape': (0,), 'ds_shape': (0,), 'requires_grad': True, 'grad_shape': None, 'persist': True, 'active_sub_m`\r\n\r\n3) These errors above cannot be solved with a downgrade to 4.31.0. Also, I personally do not think downgrading as a good solution, as we will depend on new architectures and features (e.g. MistrialForCausalLM) in the future versions.",
"I got the `\"weight\" must be 2-D\"` issue using zero 3 with the TRL library to do DPO. I was also using the PEFT library to add two LoRA adapters to the model (one for the reference and one for the trained model). \r\n\r\nSolution: I removed the embedding layer as a target module in the LoRA configs and it worked. I'm not sure why, but since the stack trace had \r\n\r\n```\r\nFile \"/home/augustus/miniconda3/envs/adamTraining/lib/python3.10/site-packages/torch/nn/functional.py\", line 2210, in embedding\r\n return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)\r\n```\r\n \r\nI just tried removing it"
] | 1,688 | 1,706 | 1,697 |
NONE
| null |
### System Info
- `transformers` version: 4.30.2
- Platform: Linux-5.19.0-46-generic-x86_64-with-glibc2.35
- Python version: 3.10.11
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
@pacman100 @sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
The dataset being used is my own dataset that is just a few hundred strings in a CSV file produced by pandas.
Running the following code
```Python
from transformers import GPTJForCausalLM, AutoTokenizer, Trainer, TrainingArguments, DataCollatorForLanguageModeling
import os
from torch.utils.data import Dataset
import pandas as pd
import evaluate
import numpy as np
import sklearn
import torch as nn
from transformers.trainer_pt_utils import get_parameter_names
model_name = "EleutherAI/gpt-j-6b"
d_type = "auto"
print("CUDA Available: "+ str(nn.cuda.is_available()))
print("CUDA Version: " + str(nn.version.cuda))
print("GPUs Available: "+ str(nn.cuda.device_count()))
def process_csv(filename, tknizer):
data = pd.read_csv(filename)
return tknizer(list(data["text"].values.flatten()), padding=True, truncation=True, return_tensors="pt")
tokenizer = AutoTokenizer.from_pretrained(model_name, torch_dtype=d_type)
collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)
tokenizer.pad_token = tokenizer.eos_token
class MyDataset(Dataset):
def __init__(self, tokenized_input):
self.tokenized_input = tokenized_input
def __getitem__(self, idx):
return {key: val[idx] for key, val in self.tokenized_input.items()}
def __len__(self):
return len(self.tokenized_input.input_ids)
metric = evaluate.load("accuracy")
def compute_metrics(eval_pred):
logits, labels = eval_pred
predictions = np.argmax(logits, axis=-1)
return metric.compute(predictions=predictions, references=labels)
train_data = MyDataset(process_csv("train_data.csv", tokenizer))
eval_data = MyDataset(process_csv("test_data.csv", tokenizer))
training_args = TrainingArguments(
output_dir="test_trainer",
deepspeed="deepSpeedCPU.json",
)
model = GPTJForCausalLM.from_pretrained(model_name, torch_dtype=d_type).cuda()
print("Total Memory: " + str(nn.cuda.get_device_properties(0).total_memory))
print("Reserved: " + str(nn.cuda.memory_reserved(0)))
print("Allocated: " + str(nn.cuda.memory_allocated(0)))
trainer = Trainer(
model=model,
args=training_args,
train_dataset=train_data,
eval_dataset=eval_data,
data_collator=collator,
compute_metrics=compute_metrics,
)
trainer.train()
```
using the following config file
```
{
"fp16": {
"enabled": "auto",
"loss_scale": 0,
"loss_scale_window": 1000,
"initial_scale_power": 16,
"hysteresis": 2,
"min_loss_scale": 1
},
"optimizer": {
"type": "AdamW",
"params": {
"lr": "auto",
"betas": "auto",
"eps": "auto",
"weight_decay": "auto"
}
},
"scheduler": {
"type": "WarmupLR",
"params": {
"warmup_min_lr": "auto",
"warmup_max_lr": "auto",
"warmup_num_steps": "auto"
}
},
"zero_optimization": {
"stage": 3,
"offload_optimizer": {
"device": "cpu",
"pin_memory": true
},
"offload_param": {
"device": "cpu",
"pin_memory": true
},
"overlap_comm": true,
"contiguous_gradients": true,
"sub_group_size": 1e9,
"reduce_bucket_size": "auto",
"stage3_prefetch_bucket_size": "auto",
"stage3_param_persistence_threshold": "auto",
"stage3_max_live_parameters": 1e9,
"stage3_max_reuse_distance": 1e9,
"stage3_gather_16bit_weights_on_model_save": true
},
"gradient_accumulation_steps": "auto",
"gradient_clipping": "auto",
"steps_per_print": 2000,
"train_batch_size": "auto",
"train_micro_batch_size_per_gpu": "auto",
"wall_clock_breakdown": false
}
```
Causes an error at trainer.train()
```
Traceback (most recent call last):
File "/home/augustus/ADAM/main2.py", line 82, in <module>
trainer.train()
File "/home/augustus/miniconda3/envs/adamTraining/lib/python3.10/site-packages/transformers/trainer.py", line 1645, in train
return inner_training_loop(
File "/home/augustus/miniconda3/envs/adamTraining/lib/python3.10/site-packages/transformers/trainer.py", line 1938, in _inner_training_loop
tr_loss_step = self.training_step(model, inputs)
File "/home/augustus/miniconda3/envs/adamTraining/lib/python3.10/site-packages/transformers/trainer.py", line 2759, in training_step
loss = self.compute_loss(model, inputs)
File "/home/augustus/miniconda3/envs/adamTraining/lib/python3.10/site-packages/transformers/trainer.py", line 2784, in compute_loss
outputs = model(**inputs)
File "/home/augustus/miniconda3/envs/adamTraining/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/home/augustus/miniconda3/envs/adamTraining/lib/python3.10/site-packages/transformers/models/gptj/modeling_gptj.py", line 854, in forward
transformer_outputs = self.transformer(
File "/home/augustus/miniconda3/envs/adamTraining/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/home/augustus/miniconda3/envs/adamTraining/lib/python3.10/site-packages/transformers/models/gptj/modeling_gptj.py", line 634, in forward
inputs_embeds = self.wte(input_ids)
File "/home/augustus/miniconda3/envs/adamTraining/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/home/augustus/miniconda3/envs/adamTraining/lib/python3.10/site-packages/torch/nn/modules/sparse.py", line 162, in forward
return F.embedding(
File "/home/augustus/miniconda3/envs/adamTraining/lib/python3.10/site-packages/torch/nn/functional.py", line 2210, in embedding
return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: 'weight' must be 2-D
```
### Expected behavior
I would expect training to begin or a more verbose error to help fix the issue (if possible to do so from my side)
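For reference, the guidance from the maintainers (in the comments earlier in this record) was that DeepSpeed must be started through a distributed launcher rather than plain `python3`. A sketch of such a launch, assuming the script above is saved as `script.py`:
```
# Any distributed launcher works (torchrun, deepspeed, accelerate launch), e.g.:
deepspeed --num_gpus=1 script.py
```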
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24643/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24643/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24642
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24642/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24642/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24642/events
|
https://github.com/huggingface/transformers/issues/24642
| 1,787,552,309 |
I_kwDOCUB6oc5qi-I1
| 24,642 |
openlm-research/open_llama_13b_easylm cannot be downloaded
|
{
"login": "leweex95",
"id": 74991597,
"node_id": "MDQ6VXNlcjc0OTkxNTk3",
"avatar_url": "https://avatars.githubusercontent.com/u/74991597?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/leweex95",
"html_url": "https://github.com/leweex95",
"followers_url": "https://api.github.com/users/leweex95/followers",
"following_url": "https://api.github.com/users/leweex95/following{/other_user}",
"gists_url": "https://api.github.com/users/leweex95/gists{/gist_id}",
"starred_url": "https://api.github.com/users/leweex95/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/leweex95/subscriptions",
"organizations_url": "https://api.github.com/users/leweex95/orgs",
"repos_url": "https://api.github.com/users/leweex95/repos",
"events_url": "https://api.github.com/users/leweex95/events{/privacy}",
"received_events_url": "https://api.github.com/users/leweex95/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Looking at \r\n\r\nhttps://huggingface.co/openlm-research/open_llama_13b_easylm/tree/main\r\n\r\nThe file doesn't seem to be torch bin file. \r\n\r\nHowever, https://huggingface.co/openlm-research/open_llama_13b/tree/main has those `.bin` files.\r\n\r\nYou will have to open an issue on that Hub repo. to discuss with the repo. owner.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,688 | 1,691 | 1,691 |
NONE
| null |
### System Info
transformers: 4.30.2.
Python: 3.9.17
OS: MacOS Ventura 13.3.1 (a)
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Code to reproduce:
```
model_id = "openlm-research/open_llama_13b_easylm"
model_name = model_id.split("/")[1]
model = pipeline(model=model_id)
model.save_pretrained(f"./models/{model_name}")
```
### Expected behavior
I expect the model to be downloadable locally for use in downstream NLP tasks. It is noteworthy that the [website](https://huggingface.co/openlm-research/open_llama_13b_easylm) indicates 0 downloads of this model over the past month.
With the above script, I can easily fetch other models such as `"openlm-research/open_llama_13b"`:
```
model_id = "openlm-research/open_llama_13b"
model_name = model_id.split("/")[1]
model = pipeline(model=model_id)
model.save_pretrained(f"./models/{model_name}")
```
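A quick way to see the difference between the two repos is to list what files they actually ship; per the comment earlier in this record, the `easylm` repo does not contain PyTorch `.bin` weights, so `pipeline` has nothing to load. A small sketch using `huggingface_hub`:
```python
from huggingface_hub import list_repo_files

print(list_repo_files("openlm-research/open_llama_13b_easylm"))
print(list_repo_files("openlm-research/open_llama_13b"))
```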
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24642/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24642/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24641
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24641/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24641/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24641/events
|
https://github.com/huggingface/transformers/issues/24641
| 1,787,134,396 |
I_kwDOCUB6oc5qhYG8
| 24,641 |
AssertionError: Dynamo only supports FSDP with use_orig_params=True
|
{
"login": "ari9dam",
"id": 14134882,
"node_id": "MDQ6VXNlcjE0MTM0ODgy",
"avatar_url": "https://avatars.githubusercontent.com/u/14134882?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ari9dam",
"html_url": "https://github.com/ari9dam",
"followers_url": "https://api.github.com/users/ari9dam/followers",
"following_url": "https://api.github.com/users/ari9dam/following{/other_user}",
"gists_url": "https://api.github.com/users/ari9dam/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ari9dam/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ari9dam/subscriptions",
"organizations_url": "https://api.github.com/users/ari9dam/orgs",
"repos_url": "https://api.github.com/users/ari9dam/repos",
"events_url": "https://api.github.com/users/ari9dam/events{/privacy}",
"received_events_url": "https://api.github.com/users/ari9dam/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The command involves `accelerate` and `torch_compile` + the error involves `Dynamo`.\r\n\r\ncc @fxmarty @pacman100 (maybe?)",
"Thank you for the issue, the above PR fixes it.",
"@pacman100 I'm still seeing this error I can't find this [commit](https://github.com/huggingface/transformers/commit/66a378429d0e085e4e72bc63a4147889a3b65a14) in the current version of train.py in main branch of transformers. Does the Trainer still support torch.compile along with FSDP?"
] | 1,688 | 1,696 | 1,688 |
NONE
| null |
### System Info
cuda 11.7
accelerate=0.21.0.dev0
transformers=4.31.0.dev0
torch=2.0.1
python=3.8
### Who can help?
@pacman100
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
### command:
accelerate launch --config_file accelerate_config.yaml --num_machines 7 --num_processes 28 --machine_rank $NODE_RANK --main_process_ip $MASTER_ADDR --main_process_port $MASTER_PORT ./trainer.py --model_name_or_path ".." --data_path ".." --per_device_train_batch_size 24 --per_device_eval_batch_size 24 --do_train --evaluation_strategy no --output_dir outputs --learning_rate 2e-5 --num_train_epochs 4 --lr_scheduler_type cosine --warmup_ratio 0.03 --weight_decay 0.0 --logging_steps 1 --save_strategy epoch --bf16 true --tf32 true --load_best_model_at_end false --model_max_length 2048 --gradient_checkpointing true --save_total_limit 1 --model_resume_from_checkpoint false --torch_compile true
### accelerate_config.yaml
```
compute_environment: LOCAL_MACHINE
distributed_type: FSDP
downcast_bf16: 'no'
fsdp_config:
fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP
fsdp_backward_prefetch_policy: BACKWARD_PRE
fsdp_forward_prefetch: false
fsdp_offload_params: false
fsdp_sharding_strategy: 1
fsdp_state_dict_type: FULL_STATE_DICT
fsdp_sync_module_states: true
fsdp_transformer_layer_cls_to_wrap: LlamaDecoderLayer
fsdp_use_orig_params: true
main_training_function: main
num_machines: 1
num_processes: 2
mixed_precision: bf16
rdzv_backend: static
tpu_env: []
tpu_use_cluster: false
tpu_use_sudo: false
use_cpu: false
```
### basic training code
```
def train():
print("Env Variables")
env_vars = os.environ
for key, value in env_vars.items():
print(key, "=", value)
parser = transformers.HfArgumentParser(
(ModelArguments, DataArguments, TrainingArguments))
model_args, data_args, training_args = parser.parse_args_into_dataclasses()
model = transformers.LlamaForCausalLM.from_pretrained(
model_args.model_name_or_path,
cache_dir=training_args.cache_dir,
)
tokenizer = transformers.AutoTokenizer.from_pretrained(
model_args.model_name_or_path,
cache_dir=training_args.cache_dir,
model_max_length=training_args.model_max_length,
padding_side="right"
)
if tokenizer.pad_token is None:
smart_tokenizer_and_embedding_resize(
tokenizer=tokenizer,
model=model,
)
data_module = make_hf_data_module(tokenizer=tokenizer,
data_args=data_args)
trainer = Trainer(model=model,
tokenizer=tokenizer,
args=training_args,
**data_module)
if model_args.model_resume_from_checkpoint:
trainer.train(resume_from_checkpoint=model_args.model_name_or_path)
else:
trainer.train()
trainer.save_state()
safe_save_model_for_hf_trainer(trainer=trainer,
output_dir=training_args.output_dir)
```
###Stacktrace
> You can suppress this exception and fall back to eager by setting:
> torch._dynamo.config.suppress_errors = True
> self.symbolic_locals = collections.OrderedDict(
> File "/opt/conda/envs/ptca/lib/python3.8/site-packages/torch/_dynamo/symbolic_convert.py", line 1673, in <genexpr>
> self.symbolic_locals = collections.OrderedDict(
> File "/opt/conda/envs/ptca/lib/python3.8/site-packages/torch/_dynamo/symbolic_convert.py", line 1673, in <genexpr>
> VariableBuilder(
> File "/opt/conda/envs/ptca/lib/python3.8/site-packages/torch/_dynamo/variables/builder.py", line 172, in __call__
> return self._wrap(value).clone(**self.options())
> File "/opt/conda/envs/ptca/lib/python3.8/site-packages/torch/_dynamo/variables/builder.py", line 248, in _wrap
> VariableBuilder(
> File "/opt/conda/envs/ptca/lib/python3.8/site-packages/torch/_dynamo/variables/builder.py", line 172, in __call__
> output = [
> File "/opt/conda/envs/ptca/lib/python3.8/site-packages/torch/_dynamo/variables/builder.py", line 249, in <listcomp>
> return self._wrap(value).clone(**self.options())
> File "/opt/conda/envs/ptca/lib/python3.8/site-packages/torch/_dynamo/variables/builder.py", line 248, in _wrap
> VariableBuilder(self.tx, GetItemSource(self.get_source(), i))(
> File "/opt/conda/envs/ptca/lib/python3.8/site-packages/torch/_dynamo/variables/builder.py", line 172, in __call__
> output = [
> File "/opt/conda/envs/ptca/lib/python3.8/site-packages/torch/_dynamo/variables/builder.py", line 249, in <listcomp>
> return self._wrap(value).clone(**self.options())
> File "/opt/conda/envs/ptca/lib/python3.8/site-packages/torch/_dynamo/variables/builder.py", line 345, in _wrap
> VariableBuilder(self.tx, GetItemSource(self.get_source(), i))(
> File "/opt/conda/envs/ptca/lib/python3.8/site-packages/torch/_dynamo/variables/builder.py", line 172, in __call__
> assert getattr(
> AssertionError: Dynamo only supports FSDP with use_orig_params=True
>
> Set torch._dynamo.config.verbose=True for more information
### Expected behavior
torch.compile works smoothly with FSDP
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24641/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24641/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24640
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24640/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24640/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24640/events
|
https://github.com/huggingface/transformers/issues/24640
| 1,786,838,371 |
I_kwDOCUB6oc5qgP1j
| 24,640 |
'DummyOptim' object has no attribute 'step'
|
{
"login": "karths8",
"id": 47289950,
"node_id": "MDQ6VXNlcjQ3Mjg5OTUw",
"avatar_url": "https://avatars.githubusercontent.com/u/47289950?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/karths8",
"html_url": "https://github.com/karths8",
"followers_url": "https://api.github.com/users/karths8/followers",
"following_url": "https://api.github.com/users/karths8/following{/other_user}",
"gists_url": "https://api.github.com/users/karths8/gists{/gist_id}",
"starred_url": "https://api.github.com/users/karths8/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/karths8/subscriptions",
"organizations_url": "https://api.github.com/users/karths8/orgs",
"repos_url": "https://api.github.com/users/karths8/repos",
"events_url": "https://api.github.com/users/karths8/events{/privacy}",
"received_events_url": "https://api.github.com/users/karths8/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @pacman100 ",
"Hello, please provide a minimal reproducible example that we can directly run. Providing links to scripts and dataset doesn't help and is very involved and time taking. ",
"Also, please provide the accelerate and DeepSpeed versions, the launch command for the minimal example and the minimal example as mentioned above.",
"Code:\r\n```\r\n\"\"\"\r\nFinetune CodeT5+ models on instruction tuning data\r\nYou can customize your own training data by following the HF dataset format to cache it to args.cache_data\r\nAuthor: Yue Wang\r\nDate: June 2023\r\n\"\"\"\r\n\r\nimport os\r\nimport pprint\r\nimport argparse\r\nimport numpy as np\r\nimport copy\r\nimport torch\r\nfrom datasets import load_dataset, load_from_disk\r\nfrom transformers import AutoModelForSeq2SeqLM, AutoTokenizer, TrainingArguments, Trainer\r\n\r\nPROMPT_DICT = {\r\n \"prompt_input\": (\r\n \"Below is an instruction that describes a task, paired with an input that provides further context. \"\r\n \"Write a response that appropriately completes the request.\\n\\n\"\r\n \"### Instruction:\\n{instruction}\\n\\n### Input:\\n{input}\\n\\n### Response:\"\r\n ),\r\n \"prompt_no_input\": (\r\n \"Below is an instruction that describes a task. \"\r\n \"Write a response that appropriately completes the request.\\n\\n\"\r\n \"### Instruction:\\n{instruction}\\n\\n### Response:\"\r\n ),\r\n}\r\n\r\n\r\ndef get_model_size(model):\r\n model_parameters = filter(lambda p: p.requires_grad, model.parameters())\r\n model_size = sum([np.prod(p.size()) for p in model_parameters])\r\n return \"{}M\".format(round(model_size / 1e+6))\r\n\r\n\r\ndef freeze_decoder_except_xattn_codegen(model):\r\n print(f'Para before freezing: {model.num_parameters()}, trainable para: {get_model_size(model)}')\r\n for param in model.decoder.parameters():\r\n param.requires_grad = False\r\n\r\n num_decoder_layers = model.decoder.config.num_layers\r\n for i in range(num_decoder_layers):\r\n each_decoder_layer = model.decoder.transformer.h[i]\r\n if hasattr(each_decoder_layer, 'crossattention'):\r\n for param in each_decoder_layer.crossattention.parameters():\r\n param.requires_grad = True\r\n each_decoder_layer.crossattention.to(torch.float32)\r\n\r\n if hasattr(each_decoder_layer, 'alpha_xattn'):\r\n each_decoder_layer.alpha_xattn.requires_grad = True\r\n print(f'Para after freezing: {model.num_parameters()}, trainable para: {get_model_size(model)}')\r\n\r\n\r\ndef run_training(args, model, train_data):\r\n print(f\"Starting main loop\")\r\n\r\n training_args = TrainingArguments(\r\n #report_to='tensorboard',\r\n output_dir=args.save_dir,\r\n overwrite_output_dir=False,\r\n\r\n do_train=True,\r\n save_strategy='epoch',\r\n\r\n num_train_epochs=args.epochs,\r\n per_device_train_batch_size=args.batch_size_per_replica,\r\n gradient_accumulation_steps=args.grad_acc_steps,\r\n\r\n learning_rate=args.lr,\r\n weight_decay=0.0,\r\n warmup_steps=args.lr_warmup_steps,\r\n\r\n logging_dir=args.save_dir,\r\n logging_first_step=True,\r\n logging_steps=args.log_freq,\r\n save_total_limit=2,\r\n\r\n dataloader_drop_last=True,\r\n dataloader_num_workers=4,\r\n\r\n local_rank=args.local_rank,\r\n deepspeed=args.deepspeed,\r\n fp16=args.fp16,\r\n )\r\n\r\n trainer = Trainer(\r\n model=model,\r\n args=training_args,\r\n train_dataset=train_data,\r\n )\r\n\r\n trainer.train()\r\n\r\n if args.local_rank in [0, -1]:\r\n final_checkpoint_dir = os.path.join(args.save_dir, \"final_checkpoint\")\r\n model.save_pretrained(final_checkpoint_dir)\r\n print(f' ==> Finish training and save to {final_checkpoint_dir}')\r\n\r\n\r\ndef load_tokenize_data(args):\r\n # Load and tokenize data\r\n if os.path.exists(args.cache_data):\r\n train_data = load_from_disk(args.cache_data)\r\n print(f' ==> Loaded {len(train_data)} samples')\r\n return train_data\r\n else:\r\n datasets = load_dataset('json', 
data_files=args.instruct_data_path)['train']\r\n tokenizer = AutoTokenizer.from_pretrained(args.load)\r\n\r\n def preprocess_function(examples):\r\n prompt_input, prompt_no_input = PROMPT_DICT[\"prompt_input\"], PROMPT_DICT[\"prompt_no_input\"]\r\n source = [prompt_input.format_map({'instruction': instruct, 'input': inp}) if inp != ''\r\n else prompt_no_input.format_map({'instruction': instruct})\r\n for instruct, inp in zip(examples[\"instruction\"], examples[\"input\"])]\r\n target = [src + output + tokenizer.eos_token for src, output in zip(source, examples[\"output\"])]\r\n\r\n model_inputs = tokenizer(source, max_length=args.max_len, padding=\"max_length\", truncation=True)\r\n labels = tokenizer(target, max_length=args.max_len, padding=\"max_length\", truncation=True)\r\n model_inputs[\"decoder_input_ids\"] = copy.deepcopy(labels[\"input_ids\"])\r\n\r\n # changing labels: convert all tokens in the duplicate prefix prompt and the padding part to -100\r\n eos_token_id = tokenizer.eos_token_id\r\n for x, y in zip(model_inputs[\"input_ids\"], labels[\"input_ids\"]):\r\n label_prefix_len = x.index(eos_token_id) if eos_token_id in x else len(x)\r\n y[:label_prefix_len] = [-100] * label_prefix_len\r\n\r\n if eos_token_id in y:\r\n pad_len = len(y) - y.index(eos_token_id) - 1\r\n if pad_len > 0:\r\n y[y.index(eos_token_id) + 1:] = [-100] * pad_len\r\n\r\n # shift labels to the right as the decoder input and add decoder start token id\r\n decoder_start_id = tokenizer.eos_token_id\r\n for z in model_inputs[\"decoder_input_ids\"]:\r\n z[1:] = z[:-1]\r\n z[0] = decoder_start_id\r\n\r\n model_inputs[\"labels\"] = copy.deepcopy(labels[\"input_ids\"])\r\n model_inputs[\"decoder_attention_mask\"] = labels[\"attention_mask\"]\r\n return model_inputs\r\n\r\n train_data = datasets.map(\r\n preprocess_function,\r\n batched=True,\r\n remove_columns=datasets.column_names,\r\n num_proc=64,\r\n load_from_cache_file=False,\r\n )\r\n\r\n print(f' ==> Loaded {len(train_data)} samples')\r\n train_data.save_to_disk(args.cache_data)\r\n print(f' ==> Saved to {args.cache_data}')\r\n return train_data\r\n\r\n\r\ndef main(args):\r\n argsdict = vars(args)\r\n print(pprint.pformat(argsdict))\r\n\r\n # Save command to file\r\n with open(os.path.join(args.save_dir, \"command.txt\"), 'w') as f:\r\n f.write(pprint.pformat(argsdict))\r\n\r\n # Load and tokenize data using the tokenizer from `args.load`. 
If the data is already cached, load it from there.\r\n # You can customize this function to load your own data for any Seq2Seq LM tasks.\r\n train_data = load_tokenize_data(args)\r\n\r\n if args.data_num != -1:\r\n train_data = train_data.select([i for i in range(args.data_num)])\r\n\r\n # Load model from `args.load`\r\n model = AutoModelForSeq2SeqLM.from_pretrained(args.load, torch_dtype=torch.float16,\r\n low_cpu_mem_usage=True, trust_remote_code=True)\r\n\r\n print(f\" ==> Loaded model from {args.load}, model size {model.num_parameters()}\")\r\n #freeze_decoder_except_xattn_codegen(model)\r\n\r\n run_training(args, model, train_data)\r\n\r\n\r\nif __name__ == \"__main__\":\r\n parser = argparse.ArgumentParser(description=\"CodeT5+ instruction tuning\")\r\n parser.add_argument('--data-num', default=-1, type=int)\r\n parser.add_argument('--max-len', default=512, type=int)\r\n parser.add_argument('--instruct-data-path', default='code_alpaca_20k.json', type=str)\r\n parser.add_argument('--cache-data', default='cache_data/instructions', type=str)\r\n parser.add_argument('--load', default='Salesforce/codet5p-16b', type=str)\r\n\r\n # Training\r\n parser.add_argument('--epochs', default=3, type=int)\r\n parser.add_argument('--lr', default=2e-5, type=float)\r\n parser.add_argument('--lr-warmup-steps', default=30, type=int)\r\n parser.add_argument('--batch-size-per-replica', default=1, type=int)\r\n parser.add_argument('--grad-acc-steps', default=16, type=int)\r\n parser.add_argument('--local_rank', default=-1, type=int)\r\n parser.add_argument('--deepspeed', default=None, type=str)\r\n parser.add_argument('--fp16', default=False, action='store_true')\r\n\r\n # Logging and stuff\r\n parser.add_argument('--save-dir', default=\"saved_models/instruct_codet5p_16b\", type=str)\r\n parser.add_argument('--log-freq', default=10, type=int)\r\n parser.add_argument('--save-freq', default=500, type=int)\r\n\r\n args = parser.parse_args()\r\n\r\n os.makedirs(args.save_dir, exist_ok=True)\r\n\r\n main(args)\r\n```\r\n\r\ncommand:\r\n```\r\ndeepspeed CodeT5+/instruct_tune_codet5p.py --load $MODEL --save-dir $SAVE_DIR --instruct-data-path code_alpaca_2k.json --fp16 --deepspeed ~/transformers/tests/deepspeed/ds_config_zero3.json\r\n```\r\n\r\nOutput:\r\n\r\n\r\n- `Accelerate` version: 0.21.0.dev0\r\n- Platform: Linux-5.4.0-125-generic-x86_64-with-glibc2.31\r\n- Python version: 3.10.11\r\n- Numpy version: 1.24.4\r\n- PyTorch version (GPU?): 2.0.1 (True)\r\n- PyTorch XPU available: False\r\n- System RAM: 503.55 GB\r\n- GPU type: NVIDIA A100-SXM4-80GB\r\n- `Accelerate` default config:\r\n\tNot found\r\n- \r\n- `transformers` version: 4.31.0.dev0\r\n- Platform: Linux-5.4.0-125-generic-x86_64-with-glibc2.31\r\n- Python version: 3.10.11\r\n- Huggingface_hub version: 0.15.1\r\n- Safetensors version: 0.3.1\r\n- PyTorch version (GPU?): 2.0.1 (True)\r\n- Tensorflow version (GPU?): not installed (NA)\r\n- Flax version (CPU?/GPU?/TPU?): not installed (NA)\r\n- Jax version: not installed\r\n- JaxLib version: not installed\r\n- Using GPU in script?: <fill in>\r\n- Using distributed or parallel set-up in script?: <fill in>\r\n\r\n```bash\r\n--------------------------------------------------\r\nDeepSpeed C++/CUDA extension op report\r\n--------------------------------------------------\r\nNOTE: Ops not installed will be just-in-time (JIT) compiled at\r\n runtime if needed. 
Op compatibility means that your system\r\n meet the required dependencies to JIT install the op.\r\n--------------------------------------------------\r\nJIT compiled ops requires ninja\r\nninja .................. [OKAY]\r\n--------------------------------------------------\r\nop name ................ installed .. compatible\r\n--------------------------------------------------\r\n [WARNING] async_io requires the dev libaio .so object and headers but these were not found.\r\n [WARNING] async_io: please install the libaio-dev package with apt\r\n [WARNING] If libaio is already installed (perhaps from source), try setting the CFLAGS and LDFLAGS environment variables to where it can be found.\r\nasync_io ............... [NO] ....... [NO]\r\ncpu_adagrad ............ [NO] ....... [OKAY]\r\ncpu_adam ............... [NO] ....... [OKAY]\r\nfused_adam ............. [NO] ....... [OKAY]\r\nfused_lamb ............. [NO] ....... [OKAY]\r\nquantizer .............. [NO] ....... [OKAY]\r\nrandom_ltd ............. [NO] ....... [OKAY]\r\n [WARNING] sparse_attn requires a torch version >= 1.5 and < 2.0 but detected 2.0\r\n [WARNING] using untested triton version (2.0.0), only 1.0.0 is known to be compatible\r\nsparse_attn ............ [NO] ....... [NO]\r\nspatial_inference ...... [NO] ....... [OKAY]\r\ntransformer ............ [NO] ....... [OKAY]\r\nstochastic_transformer . [NO] ....... [OKAY]\r\ntransformer_inference .. [NO] ....... [OKAY]\r\n--------------------------------------------------\r\nDeepSpeed general environment info:\r\ntorch install path ............... ['/home/sourab/miniconda3/envs/hf/lib/python3.10/site-packages/torch']\r\ntorch version .................... 2.0.1\r\ndeepspeed install path ........... ['/home/sourab/miniconda3/envs/hf/lib/python3.10/site-packages/deepspeed']\r\ndeepspeed info ................... 0.9.5, unknown, unknown\r\ntorch cuda version ............... 11.8\r\ntorch hip version ................ None\r\nnvcc version ..................... 11.8\r\ndeepspeed wheel compiled w. ...... torch 2.0, cuda 11.8\r\n```\r\n\r\nTherefore, unable to reproduce your error. ",
"I see similar issue. ",
"I'm also having this issue!\r\n\r\nI believe it happens when you include an `\"optimizer\"` as part of your Deepspeed config. E.g.\r\n\r\n```json\r\n\"optimizer\": {\r\n \"type\": \"AdamW\",\r\n \"params\": {\r\n \"lr\": \"auto\",\r\n \"betas\": \"auto\",\r\n \"eps\": \"auto\",\r\n \"weight_decay\": \"auto\",\r\n },\r\n}\r\n```\r\n\r\nThen, `optimizer = DummyOptim(params=model_parameters)`.\r\n\r\nhttps://github.com/huggingface/transformers/blob/e42587f596181396e1c4b63660abf0c736b10dae/src/transformers/deepspeed.py#L282-L288\r\n\r\nAnd eventually `Trainer` tries to call `optimizer.step()`. However, `DummyOptim` doesn't have a `step()` function.\r\n\r\nhttps://github.com/huggingface/accelerate/blob/8514c35192ac9762920f1ab052e5cea4c0e46eeb/src/accelerate/utils/deepspeed.py#L226-L246\r\n\r\nI'm not sure what the appropriate fix is: should we not use `DummyOptim` at all or add a `step()` function (that does nothing?) to it?\r\n\r\nP.S. the same problem applies to `\"scheduler\"`.\r\n\r\nhttps://github.com/huggingface/transformers/blob/e42587f596181396e1c4b63660abf0c736b10dae/src/transformers/deepspeed.py#L303-L304\r\n\r\nPlease take a look, @pacman100 -- thanks!",
"@apoorvkh , post the `accelerator.prepare` they should be replaced with correct optimizer and scheduler from DeepSpeed and hence should not result in any issues. As shown above, unable to reproduce it. A minimal way to reproduce it would help me deep dive",
"See https://github.com/huggingface/transformers/blob/main/src/transformers/trainer.py#L1653-L1656 wherein it `accelerator.prepare` internally calls `deepspeed.initialize` and replace the dummy objects with appropriate ones returned by DeepSpeed.",
"Thanks, that was helpful! Using that information, we found that the cause was because `ACCELERATE_USE_DEEPSPEED=true` was not already being set internally. (If you set it manually, everything works as expected.)\r\n\r\nWe unfortunately can't very easily share a minimal reproducible example, but will continue to debug soon.",
"Keeping you posted, the issue I encountered was on our end: for our own reasons, we initialized `TrainingArguments` without `deepspeed = ...`, then set `TrainingArguments.deepspeed = ...` and called `TrainingArguments.__post_init__()` manually. So`__post_init__` is called twice in total for the same dataclass object.\r\n\r\nThis worked fine in previous versions of `transformers`, but not on the current `main` branch. This does not result in the same behavior related to Accelerate and Deepspeed as creating a TrainingArguments object with all the flags in one go.\r\n\r\nAnyway, just sharing in case you want to refine the internal behavior to fix this case. But it's not a problem if you use TrainingArguments in the standard way. I'm not sure if this is related to the what the original creator of this issue had encountered. Thanks!",
"> Keeping you posted, the issue I encountered was on our end: for our own reasons, we initialized `TrainingArguments` without `deepspeed = ...`, then set `TrainingArguments.deepspeed = ...` and called `TrainingArguments.__post_init__()` manually. So`__post_init__` is called twice in total for the same dataclass object.\r\n> \r\n> This worked fine in previous versions of `transformers`, but not on the current `main` branch. This does not result in the same behavior related to Accelerate and Deepspeed as creating a TrainingArguments object with all the flags in one go.\r\n> \r\n> Anyway, just sharing in case you want to refine the internal behavior to fix this case. But it's not a problem if you use TrainingArguments in the standard way. I'm not sure if this is related to the what the original creator of this issue had encountered. Thanks!\r\n\r\nwhat that's mean to \"use TrainingArguments in the standard way\"?\r\n ds_plugin = DeepSpeedPlugin(deepspeed_config)\r\n accelerator = Accelerator(deepspeed_plugin=ds_plugin)\r\n scheduler, optimizer = get_dummy_scheduler_optimizer(model)\r\n model, optimizer, train_dataset, scheduler = accelerator.prepare(model, optimizer, train_dataset, scheduler) \r\n\r\n\r\n training_args = TrainingArguments(\r\n output_dir=os.path.join(model_path, config_obj.model['checkpoints_path']),\r\n overwrite_output_dir=True,\r\n num_train_epochs=config_obj.model['epochs'],\r\n per_device_train_batch_size=config_obj.model['batch_size'],\r\n logging_strategy=\"no\",\r\n save_strategy=\"no\",\r\n deepspeed=deepspeed_config\r\n )\r\n\r\n # Initialize the Trainer\r\n trainer = Trainer(\r\n model=model,\r\n args=training_args,\r\n train_dataset=train_dataset,\r\n )\r\n \r\ndeepspeed_config = {\r\n \"optimizer\": {\r\n \"type\": \"AdamW\",\r\n \"params\": {\r\n \"lr\": \"auto\",\r\n \"weight_decay\": \"auto\",\r\n }\r\n },\r\n \"scheduler\": {\r\n \"type\": \"WarmupLR\",\r\n \"params\": {\r\n \"warmup_min_lr\": \"auto\",\r\n \"warmup_max_lr\": \"auto\",\r\n \"warmup_num_steps\": \"auto\",\r\n }\r\n },\r\n \"zero_optimization\": {\r\n \"stage\": 3,\r\n \"offload_optimizer\": {\r\n \"device\": \"cpu\", \r\n \"pin_memory\": True\r\n }, \r\n \"offload_param\": {\r\n \"device\": \"cpu\", \r\n \"pin_memory\": True\r\n },\r\n \"contiguous_gradients\": True, \r\n },\r\n \"gradient_accumulation_steps\": \"auto\",\r\n \"gradient_clipping\": \"auto\",\r\n \"train_batch_size\": \"auto\",\r\n \"train_micro_batch_size_per_gpu\": \"auto\"\r\n }\r\n I wonder how to fix that problem...\r\neven with a trick like\r\n\r\nfrom accelerate.utils import DummyOptim, DummyScheduler\r\nclass DummyOptimizerWithStep(DummyOptim):\r\n def __init__(self, optimizer_grouped_parameters, *args, **kwargs):\r\n super().__init__(params=optimizer_grouped_parameters, *args, **kwargs)\r\n \r\n def step(self):\r\n pass\r\n \r\n I get the same error :( ",
"I was having this error too, it seemed like the issue was that the `distributed_state` attribute of the TrainingArguments object wasn't being set properly. I overrode it with the following code snippet:\r\n\r\n```python\r\nfrom transformers import TrainingArguments\r\nfrom accelerate.utils import DistributedType\r\n\r\nds_config = {\r\n \"zero_optimization\": {\r\n \"stage\": 2, # stage 2 is ideal, want to see if stage 1 avoids error\r\n \"offload_optimizer\": {\r\n \"device\": \"cpu\",\r\n \"pin_memory\": False # changed to False to see if memory usage is better\r\n },\r\n \"allgather_partitions\": True,\r\n \"allgather_bucket_size\": 2e8,\r\n \"reduce_scatter\": True,\r\n \"reduce_bucket_size\": 2e8,\r\n \"overlap_comm\": False,\r\n \"contiguous_gradients\": False,\r\n },\r\n \"optimizer\": {\r\n \"type\": \"Adam\",\r\n \"params\": {\r\n \"lr\": 1e-3,\r\n },\r\n },\r\n \"scheduler\": {\r\n \"type\": \"WarmupLR\",\r\n \"params\": {\r\n \"warmup_min_lr\": 1e-3,\r\n \"warmup_max_lr\": 1e-3,\r\n \"warmup_num_steps\": 100\r\n }\r\n },\r\n \"fp16\": {\r\n \"enabled\": False,\r\n },\r\n \"train_batch_size\": 8,\r\n \"train_micro_batch_sie_per_gpu\": 8,\r\n}\r\nargs = {\r\n 'group_by_length': True,\r\n 'per_device_train_batch_size': 1,\r\n 'evaluation_strategy': \"steps\",\r\n 'num_train_epochs': 4,\r\n 'gradient_checkpointing': True,\r\n 'fp16': False,\r\n 'save_steps': 400,\r\n 'eval_steps': 100,\r\n 'logging_steps': 100,\r\n 'learning_rate': 1e-3,\r\n 'warmup_steps': 100,\r\n 'save_total_limit': 2,\r\n 'push_to_hub': True,\r\n }\r\n\r\ntraining_args = TrainingArguments(deepspeed=ds_config, **args)\r\n# set distributed_state manually\r\ntraining_args.distributed_state.distributed_type = DistributedType.DEEPSPEED\r\n```\r\n\r\nAnd then I could run `trainer.train()` without any errors.\r\n\r\nHappy to provide more code or system info if that would help.",
"I have the same error.\r\n\r\n> I was having this error too, it seemed like the issue was that the distributed_state attribute of the TrainingArguments object wasn't being set properly.\r\n\r\n```python\r\n# set distributed_state manually\r\ntraining_args.distributed_state.distributed_type = DistributedType.DEEPSPEED\r\n```\r\n\r\nIf I set `DistributedType.DEEPSPEED`, the Train script requires \"mpi4py\" module.\r\n\r\n```\r\nModuleNotFoundError: No module named 'mpi4py'\r\n```\r\n\r\nIf it is possible, I don't want to install MPI when we train with single GPU. Are there any workaround to avoid installing MPI when using single GPU?",
"FWIW, this occurred for me because I was running my code with `python` instead of `deepspeed` but still had `--deepspeed myconfig.json` as one of the parameters.",
"The \r\n> training_args.distributed_state.distributed_type = DistributedType.DEEPSPEED\r\n\r\nshows me the error \r\nKeyError: 'train_micro_batch_size_per_gpu'",
"if it helps anyone else: I was executing `python <myscript.py>`. Instead changed it to: `python -m torch.distributed.launch --nproc_per_node 1 <myscript.py>` which fixed it for me.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"when i train model like: \r\n```\r\nexport CUDA_VISIBLE_DEVICES=1\r\npython run_clm.py ...\r\n```\r\nthen:\r\n\r\n\r\n\r\nfix:\r\n```\r\nmaster_port=$(shuf -n 1 -i 10000-65535)\r\ndeepspeed --include localhost:1 --master_port \"${master_port}\" run_clm.py \\\r\n```\r\n\r\n"
] | 1,688 | 1,702 | 1,698 |
NONE
| null |
### System Info
- `transformers` version: 4.31.0.dev0
- Platform: Linux-5.4.0-124-generic-x86_64-with-glibc2.31
- Python version: 3.11.3
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: yes 4*A100 80GB
### Who can help?
_No response_
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
I am trying to train a model from the given [script](https://github.com/salesforce/CodeT5/blob/main/CodeT5%2B/instruct_tune_codet5p.py) in a single node multi-GPU setting
with DeepSpeed integration and am getting the error given below.
To reproduce, one can download the script from [here](https://github.com/salesforce/CodeT5/blob/main/CodeT5%2B) along with the config file and try to run it with the [CodeAlpaca dataset](https://raw.githubusercontent.com/sahil280114/codealpaca/master/data/code_alpaca_20k.json).
```
{'batch_size_per_replica': 1,
'cache_data': 'cache_data/instructions',
'data_num': -1,
'deepspeed': 'deepspeed_config.json',
'epochs': 3,
'fp16': False,
'grad_acc_steps': 16,
'instruct_data_path': 'code_alpaca_20k.json',
'load': 'codet5p-16b',
'local_rank': -1,
'log_freq': 10,
'lr': 2e-05,
'lr_warmup_steps': 30,
'max_len': 512,
'save_dir': 'saved_models/instruct_codet5p_16b',
'save_freq': 500}
==> Loaded 20022 samples
Loading checkpoint shards: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 5/5 [00:25<00:00, 5.15s/it]
==> Loaded model from codet5p-16b, model size 16493680640
Para before freezing: 16493680640, trainable para: 16494M
Para after freezing: 16493680640, trainable para: 462M
Starting main loop
0%| | 0/936 [00:00<?, ?it/s]/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/torch/nn/parallel/_functions.py:68: UserWarning: Was asked to gather along dimension 0, but all input tensors were scalars; will instead unsqueeze and return a vector.
warnings.warn('Was asked to gather along dimension 0, but all '
Traceback (most recent call last):
File "/root/Custom-LLM/CodeT5/CodeT5+/instruct_tune_codet5p.py", line 212, in <module>
main(args)
File "/root/Custom-LLM/CodeT5/CodeT5+/instruct_tune_codet5p.py", line 181, in main
run_training(args, model, train_data)
File "/root/Custom-LLM/CodeT5/CodeT5+/instruct_tune_codet5p.py", line 93, in run_training
trainer.train()
File "/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/transformers/trainer.py", line 1537, in train
return inner_training_loop(
^^^^^^^^^^^^^^^^^^^^
File "/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/transformers/trainer.py", line 1881, in _inner_training_loop
self.optimizer.step()
^^^^^^^^^^^^^^^^^^^
AttributeError: 'DummyOptim' object has no attribute 'step'
```
The same script works completely fine in a single-GPU setting, but when I switch to a multi-GPU setup I get this error.
### Expected behavior
I expect the training to proceed with DeepSpeed, as it does in the single-GPU setup.
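For context, the workarounds reported in the comments above amount to handing the DeepSpeed config to `TrainingArguments` up front and launching through a distributed launcher rather than plain `python`. A minimal sketch of that pattern is below; `model` and `train_data` are assumed to exist already, and the config path is taken from the argument dump above.
```python
# Sketch only (not the script's actual code): pass the DeepSpeed config when the
# TrainingArguments object is created, rather than setting `args.deepspeed` later
# and re-running __post_init__ manually.
from transformers import Trainer, TrainingArguments

training_args = TrainingArguments(
    output_dir="saved_models/instruct_codet5p_16b",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=16,
    num_train_epochs=3,
    deepspeed="deepspeed_config.json",  # config file from the args printed above
)
trainer = Trainer(model=model, args=training_args, train_dataset=train_data)
trainer.train()
```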
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24640/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 2
}
|
https://api.github.com/repos/huggingface/transformers/issues/24640/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24639
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24639/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24639/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24639/events
|
https://github.com/huggingface/transformers/pull/24639
| 1,786,480,478 |
PR_kwDOCUB6oc5UisQ_
| 24,639 |
Generate: force cache with `inputs_embeds` forwarding
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,688 | 1,688 | 1,688 |
MEMBER
| null |
# What does this PR do?
Fixes the issue raised in [this comment](https://github.com/huggingface/transformers/issues/23042#issuecomment-1618513599).
The issue and the solution are described in the comment added alongside the change :)
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24639/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24639/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24639",
"html_url": "https://github.com/huggingface/transformers/pull/24639",
"diff_url": "https://github.com/huggingface/transformers/pull/24639.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24639.patch",
"merged_at": 1688404729000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24638
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24638/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24638/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24638/events
|
https://github.com/huggingface/transformers/issues/24638
| 1,786,385,839 |
I_kwDOCUB6oc5qehWv
| 24,638 |
attention weight clipping
|
{
"login": "StevenSong",
"id": 26208374,
"node_id": "MDQ6VXNlcjI2MjA4Mzc0",
"avatar_url": "https://avatars.githubusercontent.com/u/26208374?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/StevenSong",
"html_url": "https://github.com/StevenSong",
"followers_url": "https://api.github.com/users/StevenSong/followers",
"following_url": "https://api.github.com/users/StevenSong/following{/other_user}",
"gists_url": "https://api.github.com/users/StevenSong/gists{/gist_id}",
"starred_url": "https://api.github.com/users/StevenSong/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/StevenSong/subscriptions",
"organizations_url": "https://api.github.com/users/StevenSong/orgs",
"repos_url": "https://api.github.com/users/StevenSong/repos",
"events_url": "https://api.github.com/users/StevenSong/events{/privacy}",
"received_events_url": "https://api.github.com/users/StevenSong/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi!\r\n\r\nThere are a few places in the library having something below\r\n\r\n```python\r\n # clamp inf values to enable fp16 training\r\n if hidden_states.dtype == torch.float16:\r\n max_dtype = torch.finfo(hidden_states.dtype).max\r\n clamp_value = torch.where(torch.isinf(hidden_states).any(), max_dtype - 1000, max_dtype)\r\n hidden_states = torch.clamp(hidden_states, min=-clamp_value, max=clamp_value)\r\n```\r\n\r\nYou can try similar thing locally :-)",
"@ydshieh thank you for the example, that's effectively exactly what I'm looking for. \r\n\r\nMy only concern is that implementing this clamping locally is far more overhead for an end user than implementing it in the codebase. My reasoning for this is because this inf is not really returned outside of the model forward call as it is passed into a softmax in the GPT2Attention block, and ultimately the user only sees the nans propagated by the inf in the softmax. So in order to enable this clamping, a user would have to override the forward call of `GPT2Attention` and ultimately subclass `GPT2Attention`, `GPT2Block`, `GPT2Model`, and whichever class they're using which contains the base model (eg `GPT2LMHeadModel`).",
"I understand @StevenSong . But it's better for you to try locally first to see if it actually solves the issue, or something more have to be done (like same fix at different places).\r\n\r\nIn theory, this kind of `overflows` can happen everywhere. In practice, we proably just need to add clamp at one or two places.\r\nAlso I would need to discuss internally to see if we want to do this change for a long-existing model like `gpt2`.",
"Just for my debugging, I ended up just modifying the source file at `models/gpt2/modeling_gpt2.py` and inserting the below chunk between these lines: https://github.com/huggingface/transformers/blob/4b26a61631b8fd30f845cf08ebcc5ed65fe83c9b/src/transformers/models/gpt2/modeling_gpt2.py#L203-L205\r\n\r\n`inf` clamping chunk, same as what was suggested above:\r\n```python\r\n if attn_weights.dtype == torch.float16:\r\n max_dtype = torch.finfo(attn_weights.dtype).max\r\n clamp_value = torch.where(torch.isinf(attn_weights).any(), max_dtype - 1000, max_dtype)\r\n attn_weights = torch.clamp(attn_weights, min=-clamp_value, max=clamp_value)\r\n```\r\n\r\nand on my test case, I can confirm that this results in non-nan loss and non-nan logits. I can continue with my training loop with no errors from backprop and the next batch also returns successfully",
"Thanks!",
"Oh, @StevenSong \r\n\r\nIt's for attn weight. I just remembered that for it, it's starndard to cast it to fp32 instead of staying in fp16 and use claming.\r\nCould you try if the following works in your case 🙏 ? Thanks a lot!\r\n\r\n```python\r\n # upcast to fp32 if the weights are in fp16. Please see https://github.com/huggingface/transformers/pull/17437\r\n if attn_weights.dtype == torch.float16:\r\n attn_weights = nn.functional.softmax(attn_weights, dim=-1, dtype=torch.float32).to(torch.float16)\r\n else:\r\n attn_weights = nn.functional.softmax(attn_weights, dim=-1)\r\n```\r\n\r\n\r\n\r\n",
"Hi @ydshieh,\r\n\r\nI did notice this upcasting was already being done implicitly when adding the `attention_mask` to `attn_weights` at this line:\r\n\r\nhttps://github.com/huggingface/transformers/blob/4b26a61631b8fd30f845cf08ebcc5ed65fe83c9b/src/transformers/models/gpt2/modeling_gpt2.py#L207\r\n\r\nAs mentioned in the referenced PR (#17437), `attention_mask` is filled with very large negative values. edit: I guess this also depends on `attention_mask` being passed\r\n\r\nBut this is already far after the `inf` is introduced at the matmul of the query and key matrices together. I guess if those are cast to fp32, that would probably also fix it?",
"Yes, the large negative value depends on the dtype\r\n\r\n\r\nhttps://github.com/huggingface/transformers/blob/4b26a61631b8fd30f845cf08ebcc5ed65fe83c9b/src/transformers/models/gpt2/modeling_gpt2.py#L822-L823.\r\n\r\nIf we upcast to fp32 as mentioned in my previous comment, it should be fine. This is done in a few models, like `opt` or `xglm`. \r\n\r\nSee also #17437",
"> Yes, the large negative value depends on the dtype\r\n> \r\n> https://github.com/huggingface/transformers/blob/4b26a61631b8fd30f845cf08ebcc5ed65fe83c9b/src/transformers/models/gpt2/modeling_gpt2.py#L822-L823\r\n> \r\n> .\r\n\r\n\r\nInteresting, I'd have expected `attention_mask` to be `-65k` but I'm seeing `-3.4028e+38` as in fp32, yet my `attn_weights` are in fp16. Is this because of `autocast` only casting some but not all tensors ie query/key are cast to fp16 but the model dtype is still fp32?\r\n\r\n> If we upcast to fp32 as mentioned in my previous comment, it should be fine. This is done in a few models, like `opt` or `xglm`.\r\n> \r\n> See also #17437\r\n\r\nI agree, upcasting to fp32 should resolve the issue but I think it needs to be done earlier, at the level of query/key matrices. Otherwise the `inf` would just be upcast to fp32, no?",
"> Is this because of autocast only casting some but not all tensors ie query/key are cast to fp16 but the model dtype is still fp32?\r\n\r\nI didn't use this before personally, but from torch doc\r\n\r\n> where some operations use the torch.float32 (float) datatype and other operations use lower precision floating point datatype\r\n\r\nit looks the same as you mentioned.\r\n\r\nRegarding:\r\n\r\n> it needs to be done earlier, at the level of query/key matrices. Otherwise the inf would just be upcast to fp32, no?\r\n\r\nThe `inf` you see on query/key matrices might be also a consequence of the computation on attn_weight or its softmax in an earlier step. It's not easy to say for sure where the problem starts to accumulate (even they might not cause failure at that early time).\r\n\r\nLet's try the more standard approaces where people suggest to using fp32 for softmax (and let's upcast before this while adding attention mask) and see how things go 🤗 .\r\n",
"Got to say my above comment is based on my experience on FP16, not with autocast. From your description, you mentioned the `attn_weights + attention_mask` would already be in FP32. It's good idea to double check this (what's the dtype of this step's output) and if the following softmax takes places in FP32 too. \r\n\r\nHowever, it doesn't hurt to give it a try 🤗 ",
"apologies for the late reply to this thread, here's what I've tried and what's worked/not worked (and by works, I mean if `inf` and `nan` no longer appear in `attn_weights` and loss is non-`nan`):\r\n\r\n1. upcasting at the softmax call (see below for code): this does NOT work as `attn_weights.dtype` is no longer fp16 after adding `attention_mask` in fp32. so the `inf` is still passed to softmax and we get `nan`s.\r\n```python\r\n def _attn(self, query, key, value, attention_mask=None, head_mask=None):\r\n [...]\r\n if attention_mask is not None:\r\n # Apply the attention mask\r\n attn_weights = attn_weights + attention_mask\r\n\r\n if attn_weights.dtype == torch.float16:\r\n attn_weights = nn.functional.softmax(attn_weights, dim=-1, dtype=torch.float32).to(torch.float16)\r\n else:\r\n attn_weights = nn.functional.softmax(attn_weights, dim=-1)\r\n```\r\n2. explicitly upcasting before summing: this does NOT work as `attn_weights` already contains the `inf` and so is simply upcasting `inf` from fp16 to fp32.\r\n```python\r\n def _attn(self, query, key, value, attention_mask=None, head_mask=None):\r\n [...]\r\n attn_weights = attn_weights.to(torch.float32)\r\n if attention_mask is not None:\r\n # Apply the attention mask\r\n attn_weights = attn_weights + attention_mask\r\n\r\n attn_weights = nn.functional.softmax(attn_weights, dim=-1)\r\n```\r\n3. explicitly upcasting query and key matrices: this also does NOT work. this is because even though query and key are indeed in fp32, within the `autocast` context, `torch.matmul` downcasts back to fp16! \r\n```python\r\n def _attn(self, query, key, value, attention_mask=None, head_mask=None):\r\n query = query.to(torch.float32)\r\n key = key.to(torch.float32)\r\n attn_weights = torch.matmul(query, key.transpose(-1, -2))\r\n [...]\r\n```\r\n4. explicitly upcasting query and key with enforced fp32: the reason I'm so insistent on upcasting query and key is because I've already found that the line which produces the `inf` is the matmul between the query and key matrices, as I mentioned in the original post. However, as seen in the above attempt, the `autocast` context does not respect the explicit cast. So we need to disable the autocast context, if it exists ([see relevant docs](https://pytorch.org/docs/stable/notes/amp_examples.html#autocast-and-custom-autograd-functions)). Thus `attn_weights` is finally in fp32 as the product of two fp32 matrices.\r\n```python\r\n def _attn(self, query, key, value, attention_mask=None, head_mask=None):\r\n query = query.to(torch.float32)\r\n key = key.to(torch.float32)\r\n if torch.is_autocast_enabled():\r\n with torch.amp.autocast(device_type=query.device.type, enabled=False):\r\n attn_weights = torch.matmul(query, key.transpose(-1, -2))\r\n else:\r\n attn_weights = torch.matmul(query, key.transpose(-1, -2))\r\n```\r\n\r\nI'd also put forth that upcasting the query and key vectors to fp32 is a generalizable solution, as `attn_weights` is then always fp32 and all subsequent operations with `attn_weights` in the attention block are also implicitly upcast to fp32. It can always be downcast later, in fact it seems like this was already considered in this same function at this line:\r\nhttps://github.com/huggingface/transformers/blob/fe861e578f50dc9c06de33cd361d2f625017e624/src/transformers/models/gpt2/modeling_gpt2.py#L211-L213\r\n",
"Is there a good way for me to share my example I've been debugging? I've pared it down to a single script with a specific batch which results in the `inf`/`nan`, the base model is on the hub already, and I'm specifically doing prompt tuning so there's only something like extra ~20K params for the example to work",
"@StevenSong \r\n\r\n- thank you for the hard work on doing experiments! ❤️ \r\n- You can post the script as in a comment, or maybe create a colab notebook and share with us 🙏 \r\n\r\nAlso, I haven't asked (I think) previously: could you share us the full error log (I know it's inf thing, but would like to see the log). Thank you 🙏 \r\n\r\n\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,688 | 1,691 | 1,691 |
NONE
| null |
### Feature request
If the attention weights overflow (typically when using float16 during mixed-precision training), clip them to a configurable value.
### Motivation
I'm training a `gpt2` model with automatic mixed precision via `torch.amp.autocast` and I noticed I'm running into `nan` loss values during training. I tracked the source of the `nan` to a `softmax` computation where there's a single `inf` in the input to the `softmax`. The `inf` comes from the matrix multiply of the query and key matrices that calculates the attention weights, at this line: https://github.com/huggingface/transformers/blob/4b26a61631b8fd30f845cf08ebcc5ed65fe83c9b/src/transformers/models/gpt2/modeling_gpt2.py#L184.
Specifically, the dot product of two vectors from the query/key matrices overflows the `float16` dtype.
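A quick numeric check of that failure mode (a standalone sketch, not the model code): fp16 tops out at 65504, so a modest dot product already overflows, and any `inf` reaching the softmax produces `nan`s.
```python
import torch

q = torch.full((64,), 40.0, dtype=torch.float16)
k = torch.full((64,), 40.0, dtype=torch.float16)

print(torch.finfo(torch.float16).max)   # 65504.0
print((q * k).sum())                    # 64 * 1600 = 102400 -> inf in float16
print(torch.softmax(torch.tensor([float("inf"), 0.0]), dim=-1))  # contains nan
```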
### Your contribution
Would a simple `torch.clamp` call work/be correct?
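For what it's worth, a sketch of such a clamp, mirroring the pattern other models in the library use for fp16 hidden states but applied to the attention scores (an illustration only, not the current GPT-2 code):
```python
import torch

def clamp_fp16_scores(attn_weights: torch.Tensor) -> torch.Tensor:
    # Sketch: cap fp16 attention scores so a stray +/-inf from the query/key
    # matmul cannot turn the following softmax into NaNs. No-op for other dtypes.
    if attn_weights.dtype == torch.float16:
        max_dtype = torch.finfo(torch.float16).max
        clamp_value = torch.where(
            torch.isinf(attn_weights).any(), max_dtype - 1000, max_dtype
        )
        attn_weights = torch.clamp(attn_weights, min=-clamp_value, max=clamp_value)
    return attn_weights
```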
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24638/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24638/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24637
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24637/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24637/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24637/events
|
https://github.com/huggingface/transformers/issues/24637
| 1,786,343,818 |
I_kwDOCUB6oc5qeXGK
| 24,637 |
TFOPTForCausalLM Attention mask size mismatch exception
|
{
"login": "abb128",
"id": 65567823,
"node_id": "MDQ6VXNlcjY1NTY3ODIz",
"avatar_url": "https://avatars.githubusercontent.com/u/65567823?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abb128",
"html_url": "https://github.com/abb128",
"followers_url": "https://api.github.com/users/abb128/followers",
"following_url": "https://api.github.com/users/abb128/following{/other_user}",
"gists_url": "https://api.github.com/users/abb128/gists{/gist_id}",
"starred_url": "https://api.github.com/users/abb128/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abb128/subscriptions",
"organizations_url": "https://api.github.com/users/abb128/orgs",
"repos_url": "https://api.github.com/users/abb128/repos",
"events_url": "https://api.github.com/users/abb128/events{/privacy}",
"received_events_url": "https://api.github.com/users/abb128/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @Rocketknight1 ",
"Yep, something is clearly being mangled in here. The `hidden_states` shape of `(1, 0, 768)` is alarming - there's obviously some incorrect array slicing happening somewhere. I'll investigate as soon as I get a chance, but if you want to try taking a look before then, the relevant code is [all in this file](https://github.com/huggingface/transformers/blob/main/src/transformers/models/opt/modeling_tf_opt.py). If you want to try debugging it yourself, I'd advise:\r\n\r\n1) Clone `transformers` yourself: `git clone https://github.com/huggingface/transformers.git`\r\n2) Make an editable install from that local repo: `cd transformers && pip install -e .`\r\n3) Start putting `breakpoint()` or tests in the `modeling_tf_opt.py` file and seeing if you can find where the arrays get sliced down to length `0`.\r\n\r\nThat's a lot of work, though - if you can wait, I'll get around to it in a few days!",
"Unfortunately, I didn't manage to finish this before a holiday due to some more Falcon chaos - cc @gante if you get a chance, and if not I can take it when I get back!\r\n\r\nI identified the core problem as some confusion in the code about what the actual `seq_length` is. The first problem is [here](https://github.com/huggingface/transformers/blob/main/src/transformers/models/opt/modeling_tf_opt.py#L618) - it uses the sequence length from `input_ids` / `input_embeds` to build an `attention_mask` if one isn't provided, but the actual shape should be `(batch_size, seq_length + past_key_values_length)`, whereas this just builds one with shape `(batch_size, seq_length)`. \r\n\r\nHowever, fixing this led to other problems - the expanded/combined attention mask code also gets a bit confused when `past_key_values` is present. I'm not sure why generation tests don't pick this up, but possibly they explicitly pass an attention mask and avoid the issue!\r\n\r\nThis attention mask expansion code has been copied all around the codebase - I encountered in in PyTorch Falcon and BLOOM recently, where it also caused some problems. This might be worth doing a repo-wide refactor at some point, as I think the code is unclear and the variable names can be confusing, probably because it started as encoder-decoder code and is now being used to manage attention over past key-values.",
"Unrelated to this issue but for tflite export I end up having to do something hacky anyway to pass a custom past_key_values_length value, since the shape is dynamic and code cannot depend on it during tflite export (`past_key_values[0][0].shape[2]` just resolves to None and causes an exception later on trying to use None as a number). It'd be nice if there was a built-in way to pass a past_key_values_length value",
"Hi @abb128, good point! That might be a sign that we should be using `tf.shape()` instead, which will correctly allow the dynamic shape to be compiled. I'll investigate while I'm fixing the rest of this.",
"@abb128 I've filed a patch - please try it and let me know if it works for you!"
] | 1,688 | 1,694 | 1,694 |
NONE
| null |
### System Info
- `transformers` version: 4.30.2
- Platform: Linux-5.15.107+-x86_64-with-glibc2.31
- Python version: 3.10.12
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.1+cu118 (False)
- Tensorflow version (GPU?): 2.12.0 (False)
- Flax version (CPU?/GPU?/TPU?): 0.6.11 (cpu)
- Jax version: 0.4.10
- JaxLib version: 0.4.10
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I'm trying to write my own decoding logic so I can export to TFLite (the app runs the decoding logic itself, calling into the TFLite model with `past_key_values` and `input_ids`, but the code for that is a little more involved).
I'm not sure if I'm missing something important here, but I was able to successfully export Whisper before with this sort of pattern.
I've reduced the problem to this example:
[Colab Link](https://colab.research.google.com/drive/1chUspU_RBkHuZ12Ls3FKdLusmYeoXZC_?usp=sharing)
```py
import tensorflow as tf
from transformers import AutoTokenizer, TFOPTForCausalLM, TFGPT2LMHeadModel
def decoding_example(model, tokenizer):
input_ids = tf.convert_to_tensor([[1]]) * int(tokenizer.bos_token_id)
outputs = model(input_ids, return_dict=True, use_cache=True, past_key_values=None)
past_key_values = outputs.past_key_values
max_new_tokens = 8
for i in range(max_new_tokens):
print(i)
decoded_next_token = 123 # just an example, this would depend on outputs.last_hidden_state
input_ids = tf.convert_to_tensor([[1]]) * decoded_next_token
outputs = model(input_ids, return_dict=True, use_cache=True, past_key_values=past_key_values)
past_key_values = outputs.past_key_values
print("Finished, all OK")
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-125m")
model = TFOPTForCausalLM.from_pretrained("facebook/opt-125m")
decoding_example(model, tokenizer) # fails
```
<details>
<summary>Output</summary>
```
0
---------------------------------------------------------------------------
InvalidArgumentError Traceback (most recent call last)
<ipython-input-5-07105bf5f115> in <cell line: 4>()
2 model = TFOPTForCausalLM.from_pretrained("facebook/opt-125m")
3
----> 4 decoding_example(model, tokenizer) # fails
9 frames
<ipython-input-3-94ad2e4e3e50> in decoding_example(model, tokenizer)
11 input_ids = tf.convert_to_tensor([[1]]) * decoded_next_token
12
---> 13 outputs = model(input_ids, return_dict=True, use_cache=True, past_key_values=past_key_values)
14 past_key_values = outputs.past_key_values
15
/usr/local/lib/python3.10/dist-packages/keras/utils/traceback_utils.py in error_handler(*args, **kwargs)
68 # To get the full stack trace, call:
69 # `tf.debugging.disable_traceback_filtering()`
---> 70 raise e.with_traceback(filtered_tb) from None
71 finally:
72 del filtered_tb
/usr/local/lib/python3.10/dist-packages/transformers/modeling_tf_utils.py in run_call_with_unpacked_inputs(self, *args, **kwargs)
440
441 unpacked_inputs = input_processing(func, config, **fn_args_and_kwargs)
--> 442 return func(self, **unpacked_inputs)
443
444 # Keras enforces the first layer argument to be passed, and checks it through `inspect.getfullargspec()`. This
/usr/local/lib/python3.10/dist-packages/transformers/models/opt/modeling_tf_opt.py in call(self, input_ids, past_key_values, attention_mask, position_ids, head_mask, inputs_embeds, labels, use_cache, output_attentions, output_hidden_states, return_dict, training, **kwargs)
956 return_dict = return_dict if return_dict is not None else self.config.use_return_dict
957
--> 958 outputs = self.model(
959 input_ids=input_ids,
960 past_key_values=past_key_values,
/usr/local/lib/python3.10/dist-packages/transformers/modeling_tf_utils.py in run_call_with_unpacked_inputs(self, *args, **kwargs)
440
441 unpacked_inputs = input_processing(func, config, **fn_args_and_kwargs)
--> 442 return func(self, **unpacked_inputs)
443
444 # Keras enforces the first layer argument to be passed, and checks it through `inspect.getfullargspec()`. This
/usr/local/lib/python3.10/dist-packages/transformers/models/opt/modeling_tf_opt.py in call(self, input_ids, attention_mask, head_mask, past_key_values, inputs_embeds, use_cache, output_attentions, output_hidden_states, return_dict, training, **kwargs)
730 return_dict = return_dict if return_dict is not None else self.config.use_return_dict
731
--> 732 outputs = self.decoder(
733 input_ids,
734 attention_mask=attention_mask,
/usr/local/lib/python3.10/dist-packages/transformers/modeling_tf_utils.py in run_call_with_unpacked_inputs(self, *args, **kwargs)
440
441 unpacked_inputs = input_processing(func, config, **fn_args_and_kwargs)
--> 442 return func(self, **unpacked_inputs)
443
444 # Keras enforces the first layer argument to be passed, and checks it through `inspect.getfullargspec()`. This
/usr/local/lib/python3.10/dist-packages/transformers/models/opt/modeling_tf_opt.py in call(self, input_ids, inputs_embeds, attention_mask, head_mask, past_key_values, use_cache, output_attentions, output_hidden_states, return_dict, training)
657 past_key_value = past_key_values[idx] if past_key_values is not None else None
658
--> 659 hidden_states, layer_self_attn, present_key_value = decoder_layer(
660 hidden_states,
661 attention_mask=attention_mask,
/usr/local/lib/python3.10/dist-packages/transformers/models/opt/modeling_tf_opt.py in call(self, hidden_states, attention_mask, layer_head_mask, past_key_value, training, output_attentions, use_cache)
323
324 # add present self-attn cache to positions 1,2 of present_key_value tuple
--> 325 hidden_states, self_attn_weights, present_key_value = self.self_attn(
326 hidden_states=hidden_states,
327 past_key_value=self_attn_past_key_value,
/usr/local/lib/python3.10/dist-packages/transformers/models/opt/modeling_tf_opt.py in call(self, hidden_states, key_value_states, past_key_value, attention_mask, layer_head_mask, training)
217
218 if attention_mask is not None:
--> 219 tf.debugging.assert_equal(
220 shape_list(attention_mask),
221 [bsz, 1, tgt_len, src_len],
InvalidArgumentError: Exception encountered when calling layer 'self_attn' (type TFOPTAttention).
Attention mask should be of size (1, 1, 0, 1), but is [1, 1, 1, 2]
Condition x == y did not hold.
Indices of first 2 different values:
[[2]
[3]]
Corresponding x values:
[1 2]
Corresponding y values:
[0 1]
First 3 elements of x:
[1 1 1]
First 3 elements of y:
[1 1 0]
Call arguments received by layer 'self_attn' (type TFOPTAttention):
• hidden_states=tf.Tensor(shape=(1, 0, 768), dtype=float32)
• key_value_states=None
• past_key_value=('tf.Tensor(shape=(1, 12, 1, 64), dtype=float32)', 'tf.Tensor(shape=(1, 12, 1, 64), dtype=float32)')
• attention_mask=tf.Tensor(shape=(1, 1, 1, 2), dtype=float32)
• layer_head_mask=None
• training=False
```
</details>
### Expected behavior
I expect it to work like it does with GPT2
```py
tokenizer = AutoTokenizer.from_pretrained("distilgpt2")
model = TFGPT2LMHeadModel.from_pretrained("distilgpt2")
decoding_example(model, tokenizer) # works
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24637/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24637/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24636
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24636/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24636/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24636/events
|
https://github.com/huggingface/transformers/pull/24636
| 1,786,039,477 |
PR_kwDOCUB6oc5UhNBy
| 24,636 |
Fix audio feature extractor deps
|
{
"login": "sanchit-gandhi",
"id": 93869735,
"node_id": "U_kgDOBZhWpw",
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanchit-gandhi",
"html_url": "https://github.com/sanchit-gandhi",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,688 | 1,688 | 1,688 |
CONTRIBUTOR
| null |
# What does this PR do?
The PR #21998 refactored many of the audio feature extractors to use a numpy backend for log-Mel feature extraction (as opposed to `torchaudio`, as was done previously). However, some of the feature extractors still required the `"speech"` backend for import, which states `torchaudio` as its sole dependency:
https://github.com/huggingface/transformers/blob/6eedfa6dd15dc1e22a55ae036f681914e5a0d9a1/src/transformers/utils/import_utils.py#L648-L650
This PR updates these four feature extractors to no longer require `"speech"`, since they're now numpy only.
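For illustration, the kind of numpy-only log-Mel pipeline these extractors now rely on can be sketched as follows (a toy example with a placeholder filter bank, not the actual implementation in `transformers`):
```python
import numpy as np

def log_mel_spectrogram(waveform: np.ndarray, n_fft: int = 400,
                        hop_length: int = 160, n_mels: int = 80) -> np.ndarray:
    # Toy sketch of a numpy-backed log-Mel front end: frame, window, magnitude FFT,
    # project onto a Mel filter bank, then take logs. No torchaudio involved.
    window = np.hanning(n_fft)
    num_frames = 1 + (len(waveform) - n_fft) // hop_length
    frames = np.stack(
        [waveform[i * hop_length : i * hop_length + n_fft] for i in range(num_frames)]
    )
    power_spec = np.abs(np.fft.rfft(frames * window, n=n_fft)) ** 2
    # Placeholder filters; a real extractor builds triangular Mel filters instead.
    mel_filters = np.full((n_fft // 2 + 1, n_mels), 1.0 / (n_fft // 2 + 1))
    return np.log10(np.maximum(power_spec @ mel_filters, 1e-10))
```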
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24636/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24636/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24636",
"html_url": "https://github.com/huggingface/transformers/pull/24636",
"diff_url": "https://github.com/huggingface/transformers/pull/24636.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24636.patch",
"merged_at": 1688483007000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24635
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24635/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24635/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24635/events
|
https://github.com/huggingface/transformers/pull/24635
| 1,785,925,372 |
PR_kwDOCUB6oc5Ugz36
| 24,635 |
Generate: multi-device support for contrastive search
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"For future reference, here's the benchmark code:\r\n```py\r\nfrom transformers import AutoTokenizer, AutoModelForCausalLM\r\nimport torch\r\nfrom tqdm import tqdm\r\n\r\n# Other configuration options\r\nDEVICE = \"cuda:0\"\r\nNUM_RUNS = 10\r\nMAX_NEW_TOKENS = 1000\r\nTEXT_INPUT = \"def sieve_of_eratosthenes():\"\r\n\r\n# Load the model and prepare generate args\r\nrepo_id = \"huggyllama/llama-7b\"\r\nmodel = AutoModelForCausalLM.from_pretrained(repo_id, device_map=\"auto\", load_in_4bit=True)\r\n\r\nassistant_model = None\r\ntokenizer = AutoTokenizer.from_pretrained(repo_id, use_fast=True)\r\nmodel_inputs = tokenizer(TEXT_INPUT, return_tensors=\"pt\").to(DEVICE)\r\n\r\ngenerate_kwargs = {\r\n \"max_new_tokens\": MAX_NEW_TOKENS,\r\n \"top_k\": 10,\r\n \"penalty_alpha\": 0.6,\r\n}\r\n\r\n# Warmup\r\nprint(\"Warming up...\")\r\nfor _ in range(2):\r\n gen_out = model.generate(**model_inputs, **generate_kwargs)\r\nprint(\"Done!\")\r\n\r\n\r\n# Measure OR Stream\r\ndef measure_generate(model, model_inputs, generate_kwargs):\r\n start_event = torch.cuda.Event(enable_timing=True)\r\n end_event = torch.cuda.Event(enable_timing=True)\r\n torch.cuda.reset_peak_memory_stats(DEVICE)\r\n torch.cuda.empty_cache()\r\n torch.cuda.synchronize()\r\n\r\n start_event.record()\r\n for _ in tqdm(range(NUM_RUNS)):\r\n gen_out = model.generate(**model_inputs, **generate_kwargs)\r\n end_event.record()\r\n\r\n torch.cuda.synchronize()\r\n max_memory = torch.cuda.max_memory_allocated(DEVICE)\r\n print(\"Max memory (MB): \", max_memory * 1e-6)\r\n print(\"Throughput (tokens/sec): \", (NUM_RUNS * MAX_NEW_TOKENS) / (start_event.elapsed_time(end_event) * 1.0e-3))\r\n\r\nmeasure_generate(model, model_inputs, generate_kwargs)\r\n```\r\n\r\nOn my end, with a RTX3090, I get 150 tokens/s before and after these changes.",
"@gante Thanks for adding the script! ❤️ "
] | 1,688 | 1,688 | 1,688 |
MEMBER
| null |
# What does this PR do?
Fixes #24634
In multi-gpu settings, the past KV cache may be scattered across devices -- the cache corresponding to a layer sits in the same device as the layer itself, and different layers may be in different devices.
In contrastive search, we must apply indexing operations on the past KV cache. The indexes are in a tensor, which sits on the same device as the model outputs by default. Applying these indexes on the past KV cache currently results in an exception if the model is split across devices (see the issue linked above).
This means we either move the indexing tensor to all possible devices or keep the tensor on CPU. Indexing is typically CPU-heavy on PyTorch, so the benchmarks on my end indicate that moving the indexing tensor to the CPU enables multi-device contrastive search without noticeable throughput degradation 🙌
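A minimal sketch of the indexing pattern in question (shapes and names are illustrative; everything lives on CPU here so the snippet runs anywhere, whereas in the real model each layer's cache may sit on a different GPU):
```python
import torch

batch_size, top_k, num_heads, seq_len, head_dim = 2, 4, 8, 10, 64

# One layer's key (or value) cache after expanding each batch item into top_k candidates.
layer_cache = torch.randn(batch_size * top_k, num_heads, seq_len, head_dim)

# Index of the chosen candidate per batch item. Keeping it on CPU means the same
# index tensor can be applied to caches regardless of which device they live on.
selected_idx = torch.tensor([1, 3], device="cpu")

stacked = torch.stack(torch.split(layer_cache, top_k, dim=0))   # [B, K, heads, seq, dim]
selected = stacked[range(batch_size), selected_idx, ...]        # [B, heads, seq, dim]
print(selected.shape)  # torch.Size([2, 8, 10, 64])
```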
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24635/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24635/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24635",
"html_url": "https://github.com/huggingface/transformers/pull/24635",
"diff_url": "https://github.com/huggingface/transformers/pull/24635.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24635.patch",
"merged_at": 1688396901000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24634
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24634/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24634/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24634/events
|
https://github.com/huggingface/transformers/issues/24634
| 1,785,800,026 |
I_kwDOCUB6oc5qcSVa
| 24,634 |
.generate() supports contrastive-search on multi-device?
|
{
"login": "pfldy2850",
"id": 9526337,
"node_id": "MDQ6VXNlcjk1MjYzMzc=",
"avatar_url": "https://avatars.githubusercontent.com/u/9526337?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pfldy2850",
"html_url": "https://github.com/pfldy2850",
"followers_url": "https://api.github.com/users/pfldy2850/followers",
"following_url": "https://api.github.com/users/pfldy2850/following{/other_user}",
"gists_url": "https://api.github.com/users/pfldy2850/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pfldy2850/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pfldy2850/subscriptions",
"organizations_url": "https://api.github.com/users/pfldy2850/orgs",
"repos_url": "https://api.github.com/users/pfldy2850/repos",
"events_url": "https://api.github.com/users/pfldy2850/events{/privacy}",
"received_events_url": "https://api.github.com/users/pfldy2850/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hey @pfldy2850 👋 \r\n\r\nI believe I know the solution to your issues, but I don't have a multi-gpu setup. I'm going to open a PR, and then ask you to double-check whether it works :)",
"@pfldy2850 would you be able to test using [this PR](https://github.com/huggingface/transformers/pull/24635)?",
"@gante \r\n\r\nWow! Your outstanding work has successfully resolved the issue. 👍\r\nI have achieved an expected output that I was aiming for.\r\n\r\nI would like to use this changes in production. \r\nCould you please provide information on the release cycle of this repository?",
"@pfldy2850 awesome! The PR should be merged within 24 hours :) \r\n\r\nYou have two options, after the PR gets merged:\r\n1. Wait for the next release, which will probably happen in two or three weeks\r\n2. Install from `main` with `pip install --upgrade git+https://github.com/huggingface/transformers.git` OR replace the requirement on your `setup.py`/`requirements.txt` with `transformers @ git+https://github.com/huggingface/transformers.git`"
] | 1,688 | 1,688 | 1,688 |
CONTRIBUTOR
| null |
### System Info
### script
```python
import torch
from transformers import AutoTokenizer
from transformers import AutoModelForCausalLM
checkpoint = "EleutherAI/polyglot-ko-12.8b"
tokenizer = AutoTokenizer.from_pretrained(
checkpoint,
padding_side="left",
pad_token_id=0,
)
model = AutoModelForCausalLM.from_pretrained(
checkpoint,
torch_dtype=torch.float16,
device_map="auto",
load_in_8bit=False,
pad_token_id=tokenizer.pad_token_id,
)
model.eval()
tokenized = tokenizer("hi there?", return_tensors='pt')
input_ids = tokenized.input_ids
attention_mask = tokenized.attention_mask
generated = model.generate(input_ids, penalty_alpha=0.6, top_k=4, max_length=512)
```
### faced messages
When I ran the above script, I got the following message:
```
╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ in <module>:6 │
│ │
│ 3 input_ids = tokenized.input_ids │
│ 4 attention_mask = tokenized.attention_mask │
│ 5 │
│ ❱ 6 generated = model.generate(input_ids, penalty_alpha=0.6, top_k=4, max_length=512) │
│ 7 │
│ │
│ /opt/conda/envs/py3.10/lib/python3.10/site-packages/torch/utils/_contextlib.py:115 in │
│ decorate_context │
│ │
│ 112 │ @functools.wraps(func) │
│ 113 │ def decorate_context(*args, **kwargs): │
│ 114 │ │ with ctx_factory(): │
│ ❱ 115 │ │ │ return func(*args, **kwargs) │
│ 116 │ │
│ 117 │ return decorate_context │
│ 118 │
│ │
│ /opt/conda/envs/py3.10/lib/python3.10/site-packages/transformers/generation/utils.py:1544 in │
│ generate │
│ │
│ 1541 │ │ │ if not model_kwargs["use_cache"]: │
│ 1542 │ │ │ │ raise ValueError("Contrastive search requires `use_cache=True`") │
│ 1543 │ │ │ │
│ ❱ 1544 │ │ │ return self.contrastive_search( │
│ 1545 │ │ │ │ input_ids, │
│ 1546 │ │ │ │ top_k=generation_config.top_k, │
│ 1547 │ │ │ │ penalty_alpha=generation_config.penalty_alpha, │
│ │
│ /opt/conda/envs/py3.10/lib/python3.10/site-packages/torch/utils/_contextlib.py:115 in │
│ decorate_context │
│ │
│ 112 │ @functools.wraps(func) │
│ 113 │ def decorate_context(*args, **kwargs): │
│ 114 │ │ with ctx_factory(): │
│ ❱ 115 │ │ │ return func(*args, **kwargs) │
│ 116 │ │
│ 117 │ return decorate_context │
│ 118 │
│ │
│ /opt/conda/envs/py3.10/lib/python3.10/site-packages/transformers/generation/utils.py:2004 in │
│ contrastive_search │
│ │
│ 2001 │ │ │ │
│ 2002 │ │ │ logit_for_next_step = logits_processor(input_ids, logit_for_next_step) │
│ 2003 │ │ │ logit_for_next_step = logits_warper(input_ids, logit_for_next_step) │
│ ❱ 2004 │ │ │ next_probs = nn.functional.softmax(logit_for_next_step, dim=-1) │
│ 2005 │ │ │ top_k_probs, top_k_ids = torch.topk(next_probs, dim=-1, k=top_k) │
│ 2006 │ │ │ │
│ 2007 │ │ │ # Store scores, attentions and hidden_states when required │
│ │
│ /opt/conda/envs/py3.10/lib/python3.10/site-packages/torch/nn/functional.py:1843 in softmax │
│ │
│ 1840 │ if dim is None: │
│ 1841 │ │ dim = _get_softmax_dim("softmax", input.dim(), _stacklevel) │
│ 1842 │ if dtype is None: │
│ ❱ 1843 │ │ ret = input.softmax(dim) │
│ 1844 │ else: │
│ 1845 │ │ ret = input.softmax(dim, dtype=dtype) │
│ 1846 │ return ret │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
RuntimeError: "softmax_lastdim_kernel_impl" not implemented for 'Half'
```
Then I modified my script to move `input_ids` to `cuda:0` like this:
```
input_ids = tokenized.input_ids.to("cuda:0")
generated = model.generate(input_ids, penalty_alpha=0.6, top_k=4, max_length=512)
```
Finally I got the following message:
```
╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ in <module>:6 │
│ │
│ 3 input_ids = tokenized.input_ids.to("cuda:0") │
│ 4 attention_mask = tokenized.attention_mask.to("cuda:0") │
│ 5 │
│ ❱ 6 generated = model.generate(input_ids, penalty_alpha=0.6, top_k=4, max_length=512) │
│ 7 │
│ │
│ /opt/conda/envs/py3.10/lib/python3.10/site-packages/torch/utils/_contextlib.py:115 in │
│ decorate_context │
│ │
│ 112 │ @functools.wraps(func) │
│ 113 │ def decorate_context(*args, **kwargs): │
│ 114 │ │ with ctx_factory(): │
│ ❱ 115 │ │ │ return func(*args, **kwargs) │
│ 116 │ │
│ 117 │ return decorate_context │
│ 118 │
│ │
│ /opt/conda/envs/py3.10/lib/python3.10/site-packages/transformers/generation/utils.py:1544 in │
│ generate │
│ │
│ 1541 │ │ │ if not model_kwargs["use_cache"]: │
│ 1542 │ │ │ │ raise ValueError("Contrastive search requires `use_cache=True`") │
│ 1543 │ │ │ │
│ ❱ 1544 │ │ │ return self.contrastive_search( │
│ 1545 │ │ │ │ input_ids, │
│ 1546 │ │ │ │ top_k=generation_config.top_k, │
│ 1547 │ │ │ │ penalty_alpha=generation_config.penalty_alpha, │
│ │
│ /opt/conda/envs/py3.10/lib/python3.10/site-packages/torch/utils/_contextlib.py:115 in │
│ decorate_context │
│ │
│ 112 │ @functools.wraps(func) │
│ 113 │ def decorate_context(*args, **kwargs): │
│ 114 │ │ with ctx_factory(): │
│ ❱ 115 │ │ │ return func(*args, **kwargs) │
│ 116 │ │
│ 117 │ return decorate_context │
│ 118 │
│ │
│ /opt/conda/envs/py3.10/lib/python3.10/site-packages/transformers/generation/utils.py:2076 in │
│ contrastive_search │
│ │
│ 2073 │ │ │ │ # item is either the key or the value matrix │
│ 2074 │ │ │ │ for item in layer: │
│ 2075 │ │ │ │ │ item = torch.stack(torch.split(item, top_k, dim=0)) # [B, K, num_he │
│ ❱ 2076 │ │ │ │ │ item = item[range(batch_size), selected_idx, ...] # [B, num_head, s │
│ 2077 │ │ │ │ │ items += (item,) │
│ 2078 │ │ │ │ new_key_values += (items,) │
│ 2079 │ │ │ next_past_key_values = new_key_values │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
RuntimeError: indices should be either on cpu or on the same device as the indexed tensor (cuda:1)
```
### transformers-cli env
Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points.
- `transformers` version: 4.30.2
- Platform: Linux-4.19.93-1.nbp.el7.x86_64-x86_64-with-glibc2.17
- Python version: 3.10.11
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes, I'm using 2 P40 GPUs in my script.
- Using distributed or parallel set-up in script?: No, but I'm using accelerate's `device_map="auto"` option to automatically split model weights.
### Who can help?
@gante
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. Just run the following generating script on the env **multi-device assigned**.
```python
import torch
from transformers import AutoTokenizer
from transformers import AutoModelForCausalLM
checkpoint = "EleutherAI/polyglot-ko-12.8b"
tokenizer = AutoTokenizer.from_pretrained(
checkpoint,
padding_side="left",
pad_token_id=0,
)
model = AutoModelForCausalLM.from_pretrained(
checkpoint,
torch_dtype=torch.float16,
device_map="auto",
load_in_8bit=False,
pad_token_id=tokenizer.pad_token_id,
)
model.eval()
tokenized = tokenizer("hi there?", return_tensors='pt')
input_ids = tokenized.input_ids
attention_mask = tokenized.attention_mask
generated = model.generate(input_ids, penalty_alpha=0.6, top_k=4, max_length=512)
```
### Expected behavior
I just want `generate` to return the generated text instead of raising an error, regardless of the quality of the outcome.
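A minimal workaround sketch, assuming the goal is simply to get text back from a checkpoint sharded with `device_map="auto"`: keep the prompt on the device of the input embeddings and use a decoding strategy other than contrastive search, whose cache re-indexing is what trips over the cross-device tensors. The checkpoint and generation settings mirror the report and are illustrative, not a confirmed fix.
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

checkpoint = "EleutherAI/polyglot-ko-12.8b"
tokenizer = AutoTokenizer.from_pretrained(checkpoint, padding_side="left")
model = AutoModelForCausalLM.from_pretrained(
    checkpoint,
    torch_dtype=torch.float16,
    device_map="auto",
)
model.eval()

# Put the prompt on the device that holds the embedding layer instead of
# hard-coding "cuda:0"; with sharded weights this is usually the first device.
first_device = model.get_input_embeddings().weight.device
input_ids = tokenizer("hi there?", return_tensors="pt").input_ids.to(first_device)

# Sampling (or greedy decoding) avoids the contrastive-search path that
# re-indexes past_key_values across devices and raises the error above.
generated = model.generate(input_ids, do_sample=True, top_k=4, max_length=512)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```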
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24634/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24634/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24633
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24633/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24633/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24633/events
|
https://github.com/huggingface/transformers/pull/24633
| 1,785,515,173 |
PR_kwDOCUB6oc5UfaQv
| 24,633 |
Pin `Pillow` for now
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Merge as the 2 failed tests seems flaky."
] | 1,688 | 1,688 | 1,688 |
COLLABORATOR
| null |
# What does this PR do?
`Pillow 10.0.0` was released 2 days ago.
Our CI gets errors (via the usage of `detectron2`):
```bash
...
/usr/local/lib/python3.8/dist-packages/detectron2/data/transforms/transform.py:36: in <module>
...
> def __init__(self, src_rect, output_size, interp=Image.LINEAR, fill=0):
E AttributeError: module 'PIL.Image' has no attribute 'LINEAR'
```
This is due to the earlier deprecation, which has now been removed:
```bash
<stdin>:1: DeprecationWarning: LINEAR is deprecated and will be removed in Pillow 10 (2023-07-01). Use BILINEAR or Resampling.BILINEAR instead.
```
This PR pins `Pillow` for now until `detectron2` fixes it.
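For context, a small probe of the attribute that breaks (a hypothetical check, not code from this PR or from `detectron2`):
```python
# Pillow 10 removed the long-deprecated Image.LINEAR alias; the replacement is
# Image.Resampling.BILINEAR (available since Pillow 9.1).
import PIL
from PIL import Image

print(PIL.__version__)
print(hasattr(Image, "LINEAR"))      # False on Pillow >= 10.0.0
print(Image.Resampling.BILINEAR)     # the constant the warning points to instead
```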
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24633/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24633/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24633",
"html_url": "https://github.com/huggingface/transformers/pull/24633",
"diff_url": "https://github.com/huggingface/transformers/pull/24633.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24633.patch",
"merged_at": 1688379887000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24632
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24632/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24632/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24632/events
|
https://github.com/huggingface/transformers/issues/24632
| 1,785,471,974 |
I_kwDOCUB6oc5qbCPm
| 24,632 |
TrOCRProcessor.from_pretrained raise KeyError(key)
|
{
"login": "laizhenhai88",
"id": 1310844,
"node_id": "MDQ6VXNlcjEzMTA4NDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/1310844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/laizhenhai88",
"html_url": "https://github.com/laizhenhai88",
"followers_url": "https://api.github.com/users/laizhenhai88/followers",
"following_url": "https://api.github.com/users/laizhenhai88/following{/other_user}",
"gists_url": "https://api.github.com/users/laizhenhai88/gists{/gist_id}",
"starred_url": "https://api.github.com/users/laizhenhai88/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/laizhenhai88/subscriptions",
"organizations_url": "https://api.github.com/users/laizhenhai88/orgs",
"repos_url": "https://api.github.com/users/laizhenhai88/repos",
"events_url": "https://api.github.com/users/laizhenhai88/events{/privacy}",
"received_events_url": "https://api.github.com/users/laizhenhai88/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false |
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
] |
[
"Hi @laizhenhai88 \r\n\r\nIt seems you forgot to upload the tokenizer you loaded/used during training to your own model repo.\r\nIf you upload it. the issue you reported will disappear.\r\n\r\nLet us know if you have further question.",
"> Hi @laizhenhai88\r\n> \r\n> It seems you forgot to upload the tokenizer you loaded/used during training to your own model repo. If you upload it. the issue you reported will disappear.\r\n> \r\n> Let us know if you have further question.\r\n\r\nthanks!\r\n\r\nmy train code is \r\n```\r\ntrainer = Seq2SeqTrainer(\r\n model=model,\r\n tokenizer=processor.feature_extractor,\r\n args=args,\r\n compute_metrics=compute_metrics,\r\n train_dataset=train_ds,\r\n eval_dataset=test_ds,\r\n data_collator=default_data_collator\r\n)\r\n\r\ntrainer.train()\r\ntrainer.save_model()\r\ntrainer.save_state()\r\ntrainer.evaluate()\r\n```\r\n\r\nmaybe I need `trainer.push_to_hub(\"All Dunn!!!\")` ?",
"Yes, but you already have a repo, so I assume you already used `push_to_hub`. I am not sure why you don't have tokenizer on the repo then.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,688 | 1,691 | 1,691 |
NONE
| null |
### System Info
@amyeroberts
1. background
I fine-tuned a model from [microsoft/trocr-base-printed](https://huggingface.co/microsoft/trocr-base-printed).
The resulting model is https://huggingface.co/hongyusir/trocr-base-printed_captcha_ocr
and loading it raises `KeyError(key)`.
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
2. The source code is:
```
import os, sys, itertools
os.environ['TOKENIZERS_PARALLELISM']='false'
import pandas as pd
from PIL import Image
import torch
from torch.utils.data import Dataset
import datasets
from datasets import load_dataset
import transformers
from transformers import Seq2SeqTrainingArguments, Seq2SeqTrainer
from transformers import VisionEncoderDecoderModel, TrOCRProcessor, default_data_collator
import evaluate
print("Python:".rjust(15), sys.version[0:6])
print("Pandas:".rjust(15), pd.__version__)
print("Datasets:".rjust(15), datasets.__version__)
print("Transformers:".rjust(15), transformers.__version__)
print("Torch:".rjust(15), torch.__version__)
print("load model")
processor = TrOCRProcessor.from_pretrained('hongyusir/trocr-base-printed_captcha_ocr')
model = VisionEncoderDecoderModel.from_pretrained('hongyusir/trocr-base-printed_captcha_ocr')
print("finish load model")
```
3. The output is:
```
Python: 3.8.10
Pandas: 2.0.3
Datasets: 2.13.1
Transformers: 4.30.2
Torch: 2.0.1+cu117
load model
Traceback (most recent call last):
File "x.py", line 27, in <module>
processor = TrOCRProcessor.from_pretrained('hongyusir/trocr-base-printed_captcha_ocr')
File "/home/pc/.local/lib/python3.8/site-packages/transformers/processing_utils.py", line 184, in from_pretrained
args = cls._get_arguments_from_pretrained(pretrained_model_name_or_path, **kwargs)
File "/home/pc/.local/lib/python3.8/site-packages/transformers/processing_utils.py", line 228, in _get_arguments_from_pretrained
args.append(attribute_class.from_pretrained(pretrained_model_name_or_path, **kwargs))
File "/home/pc/.local/lib/python3.8/site-packages/transformers/models/auto/tokenization_auto.py", line 707, in from_pretrained
tokenizer_class_py, tokenizer_class_fast = TOKENIZER_MAPPING[type(config)]
File "/home/pc/.local/lib/python3.8/site-packages/transformers/models/auto/auto_factory.py", line 665, in __getitem__
raise KeyError(key)
KeyError: <class 'transformers.models.vision_encoder_decoder.configuration_vision_encoder_decoder.VisionEncoderDecoderConfig'>
```
### Expected behavior
The model and processor load successfully.
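A minimal sketch of the fix suggested in the comments, assuming the tokenizer was simply never uploaded to the fine-tuned repo: push the full `TrOCRProcessor` (tokenizer + image processor) next to the model so that `TrOCRProcessor.from_pretrained` can resolve a tokenizer. Repo names mirror the report.
```python
from transformers import TrOCRProcessor, VisionEncoderDecoderModel

base_repo = "microsoft/trocr-base-printed"
finetuned_repo = "hongyusir/trocr-base-printed_captcha_ocr"

# Reuse the processor from the base checkpoint that was used for fine-tuning ...
processor = TrOCRProcessor.from_pretrained(base_repo)
model = VisionEncoderDecoderModel.from_pretrained(finetuned_repo)

# ... and push it alongside the fine-tuned weights so future loads find it.
processor.push_to_hub(finetuned_repo)
model.push_to_hub(finetuned_repo)
```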
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24632/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24632/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24631
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24631/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24631/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24631/events
|
https://github.com/huggingface/transformers/issues/24631
| 1,785,422,203 |
I_kwDOCUB6oc5qa2F7
| 24,631 |
Fine tuning Bloom model - Failed to import transformers.training_args
|
{
"login": "seema-AIML",
"id": 83855785,
"node_id": "MDQ6VXNlcjgzODU1Nzg1",
"avatar_url": "https://avatars.githubusercontent.com/u/83855785?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/seema-AIML",
"html_url": "https://github.com/seema-AIML",
"followers_url": "https://api.github.com/users/seema-AIML/followers",
"following_url": "https://api.github.com/users/seema-AIML/following{/other_user}",
"gists_url": "https://api.github.com/users/seema-AIML/gists{/gist_id}",
"starred_url": "https://api.github.com/users/seema-AIML/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/seema-AIML/subscriptions",
"organizations_url": "https://api.github.com/users/seema-AIML/orgs",
"repos_url": "https://api.github.com/users/seema-AIML/repos",
"events_url": "https://api.github.com/users/seema-AIML/events{/privacy}",
"received_events_url": "https://api.github.com/users/seema-AIML/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @seema-AIML\r\n\r\nCould you post the full trace log, please. Thank you in advance.",
"from transformers import AutoTokenizer\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(\"bigscience/bloom-560m\")\r\ndef tokenize_function(examples):\r\n return tokenizer(examples[\"text\"], padding=\"max_length\", truncation=True)\r\ntokenized_datasets = dataset.map(tokenize_function, batched=True)\r\n\r\nfrom transformers import AutoModelForSequenceClassification\r\nmodel = AutoModelForSequenceClassification.from_pretrained(\"bigscience/bloom-560m\", num_labels=5)\r\n\r\nfrom builtins import object\r\nfrom transformers import TrainingArguments\r\ntraining_args = TrainingArguments(output_dir=\"test_trainer\")\r\n\r\nWhile creating TrainingArguments getting below error\r\n\r\n---------------------------------------------------------------------------\r\nAttributeError Traceback (most recent call last)\r\n~\\Anaconda3\\lib\\site-packages\\transformers\\utils\\import_utils.py in _get_module(self, module_name)\r\n 1125 try:\r\n-> 1126 return importlib.import_module(\".\" + module_name, self.__name__)\r\n 1127 except Exception as e:\r\n\r\n~\\Anaconda3\\lib\\importlib\\__init__.py in import_module(name, package)\r\n 126 level += 1\r\n--> 127 return _bootstrap._gcd_import(name[level:], package, level)\r\n 128 \r\n\r\n~\\Anaconda3\\lib\\importlib\\_bootstrap.py in _gcd_import(name, package, level)\r\n\r\n~\\Anaconda3\\lib\\importlib\\_bootstrap.py in _find_and_load(name, import_)\r\n\r\n~\\Anaconda3\\lib\\importlib\\_bootstrap.py in _find_and_load_unlocked(name, import_)\r\n\r\n~\\Anaconda3\\lib\\importlib\\_bootstrap.py in _load_unlocked(spec)\r\n\r\n~\\Anaconda3\\lib\\importlib\\_bootstrap_external.py in exec_module(self, module)\r\n\r\n~\\Anaconda3\\lib\\importlib\\_bootstrap.py in _call_with_frames_removed(f, *args, **kwds)\r\n\r\n~\\Anaconda3\\lib\\site-packages\\transformers\\training_args.py in <module>\r\n 29 from .debug_utils import DebugOption\r\n---> 30 from .trainer_utils import (\r\n 31 EvaluationStrategy,\r\n\r\n~\\Anaconda3\\lib\\site-packages\\transformers\\trainer_utils.py in <module>\r\n 46 if is_tf_available():\r\n---> 47 import tensorflow as tf\r\n 48 \r\n\r\n~\\Anaconda3\\lib\\site-packages\\tensorflow\\__init__.py in <module>\r\n 40 \r\n---> 41 from tensorflow.python.tools import module_util as _module_util\r\n 42 from tensorflow.python.util.lazy_loader import LazyLoader as _LazyLoader\r\n\r\n~\\Anaconda3\\lib\\site-packages\\tensorflow\\python\\__init__.py in <module>\r\n 45 # Bring in subpackages.\r\n---> 46 from tensorflow.python import data\r\n 47 from tensorflow.python import distribute\r\n\r\n~\\Anaconda3\\lib\\site-packages\\tensorflow\\python\\data\\__init__.py in <module>\r\n 24 # pylint: disable=unused-import\r\n---> 25 from tensorflow.python.data import experimental\r\n 26 from tensorflow.python.data.ops.dataset_ops import AUTOTUNE\r\n\r\n~\\Anaconda3\\lib\\site-packages\\tensorflow\\python\\data\\experimental\\__init__.py in <module>\r\n 96 # pylint: disable=unused-import\r\n---> 97 from tensorflow.python.data.experimental import service\r\n 98 from tensorflow.python.data.experimental.ops.batching import dense_to_ragged_batch\r\n\r\n~\\Anaconda3\\lib\\site-packages\\tensorflow\\python\\data\\experimental\\service\\__init__.py in <module>\r\n 352 \r\n--> 353 from tensorflow.python.data.experimental.ops.data_service_ops import distribute\r\n 354 from tensorflow.python.data.experimental.ops.data_service_ops import from_dataset_id\r\n\r\n~\\Anaconda3\\lib\\site-packages\\tensorflow\\python\\data\\experimental\\ops\\data_service_ops.py in <module>\r\n 25 from 
tensorflow.python.compat import compat\r\n---> 26 from tensorflow.python.data.experimental.ops import compression_ops\r\n 27 from tensorflow.python.data.experimental.ops.distribute_options import AutoShardPolicy\r\n\r\n~\\Anaconda3\\lib\\site-packages\\tensorflow\\python\\data\\experimental\\ops\\compression_ops.py in <module>\r\n 19 \r\n---> 20 from tensorflow.python.data.util import structure\r\n 21 from tensorflow.python.ops import gen_experimental_dataset_ops as ged_ops\r\n\r\n~\\Anaconda3\\lib\\site-packages\\tensorflow\\python\\data\\util\\structure.py in <module>\r\n 25 \r\n---> 26 from tensorflow.python.data.util import nest\r\n 27 from tensorflow.python.framework import composite_tensor\r\n\r\n~\\Anaconda3\\lib\\site-packages\\tensorflow\\python\\data\\util\\nest.py in <module>\r\n 39 \r\n---> 40 from tensorflow.python.framework import sparse_tensor as _sparse_tensor\r\n 41 from tensorflow.python.util import _pywrap_utils\r\n\r\n~\\Anaconda3\\lib\\site-packages\\tensorflow\\python\\framework\\sparse_tensor.py in <module>\r\n 27 from tensorflow.python.framework import composite_tensor\r\n---> 28 from tensorflow.python.framework import constant_op\r\n 29 from tensorflow.python.framework import dtypes\r\n\r\n~\\Anaconda3\\lib\\site-packages\\tensorflow\\python\\framework\\constant_op.py in <module>\r\n 28 from tensorflow.python.eager import context\r\n---> 29 from tensorflow.python.eager import execute\r\n 30 from tensorflow.python.framework import dtypes\r\n\r\n~\\Anaconda3\\lib\\site-packages\\tensorflow\\python\\eager\\execute.py in <module>\r\n 26 from tensorflow.python.eager import core\r\n---> 27 from tensorflow.python.framework import dtypes\r\n 28 from tensorflow.python.framework import ops\r\n\r\n~\\Anaconda3\\lib\\site-packages\\tensorflow\\python\\framework\\dtypes.py in <module>\r\n 584 types_pb2.DT_STRING:\r\n--> 585 np.object,\r\n 586 types_pb2.DT_COMPLEX64:\r\n\r\n~\\Anaconda3\\lib\\site-packages\\numpy\\__init__.py in __getattr__(attr)\r\n 304 if attr in __former_attrs__:\r\n--> 305 raise AttributeError(__former_attrs__[attr])\r\n 306 \r\n\r\nAttributeError: module 'numpy' has no attribute 'object'.\r\n`np.object` was a deprecated alias for the builtin `object`. To avoid this error in existing code, use `object` by itself. Doing this will not modify any behavior and is safe. 
\r\nThe aliases was originally deprecated in NumPy 1.20; for more details and guidance see the original release note at:\r\n https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations\r\n\r\nThe above exception was the direct cause of the following exception:\r\n__________________________________________________________________________________________________________________________\r\nRuntimeError Traceback (most recent call last)\r\n<ipython-input-10-fdfb390c11be> in <module>\r\n 1 from builtins import object\r\n----> 2 from transformers import TrainingArguments\r\n 3 \r\n 4 training_args = TrainingArguments(output_dir=\"test_trainer\")\r\n\r\n~\\Anaconda3\\lib\\importlib\\_bootstrap.py in _handle_fromlist(module, fromlist, import_, recursive)\r\n\r\n~\\Anaconda3\\lib\\site-packages\\transformers\\utils\\import_utils.py in __getattr__(self, name)\r\n 1114 value = self._get_module(name)\r\n 1115 elif name in self._class_to_module.keys():\r\n-> 1116 module = self._get_module(self._class_to_module[name])\r\n 1117 value = getattr(module, name)\r\n 1118 else:\r\n\r\n~\\Anaconda3\\lib\\site-packages\\transformers\\utils\\import_utils.py in _get_module(self, module_name)\r\n 1126 return importlib.import_module(\".\" + module_name, self.__name__)\r\n 1127 except Exception as e:\r\n-> 1128 raise RuntimeError(\r\n 1129 f\"Failed to import {self.__name__}.{module_name} because of the following error (look up to see its\"\r\n 1130 f\" traceback):\\n{e}\"\r\n\r\nRuntimeError: Failed to import transformers.training_args because of the following error (look up to see its traceback):\r\nmodule 'numpy' has no attribute 'object'.\r\n`np.object` was a deprecated alias for the builtin `object`. To avoid this error in existing code, use `object` by itself. Doing this will not modify any behavior and is safe. \r\nThe aliases was originally deprecated in NumPy 1.20; for more details and guidance see the original release note at:\r\n https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations\r\n____________________________________________________________________________________________________________________________\r\nHow to fix this?\r\n",
"The error occurs in tensorflow file.\r\n\r\n```bash\r\n~\\Anaconda3\\lib\\site-packages\\tensorflow\\python\\framework\\dtypes.py in\r\n584 types_pb2.DT_STRING:\r\n--> 585 np.object,\r\n```\r\n\r\nIf you don't need tensorflow, the quick way to check is to uninstall tensorflow and see if the issue is resolved.\r\nYou can also try to create a new virtual environment, and install as `pip install transformers[torch]`.\r\n",
"created new a new virtual environment, and installed transformers[torch]. Still getting same error.\r\nI have not installed tensorflow in new virtual environment. when tried to uninstall tensorflow getting warning as WARNING: Skipping tensorflow as it is not installed.",
"Please provide the new full error log (the one that is run within the new environment).",
"---------------------------------------------------------------------------\r\nAttributeError Traceback (most recent call last)\r\n~\\Anaconda3\\lib\\site-packages\\transformers\\utils\\import_utils.py in _get_module(self, module_name)\r\n 1125 try:\r\n-> 1126 return importlib.import_module(\".\" + module_name, self.__name__)\r\n 1127 except Exception as e:\r\n\r\n~\\Anaconda3\\lib\\importlib\\__init__.py in import_module(name, package)\r\n 126 level += 1\r\n--> 127 return _bootstrap._gcd_import(name[level:], package, level)\r\n 128 \r\n\r\n~\\Anaconda3\\lib\\importlib\\_bootstrap.py in _gcd_import(name, package, level)\r\n\r\n~\\Anaconda3\\lib\\importlib\\_bootstrap.py in _find_and_load(name, import_)\r\n\r\n~\\Anaconda3\\lib\\importlib\\_bootstrap.py in _find_and_load_unlocked(name, import_)\r\n\r\n~\\Anaconda3\\lib\\importlib\\_bootstrap.py in _load_unlocked(spec)\r\n\r\n~\\Anaconda3\\lib\\importlib\\_bootstrap_external.py in exec_module(self, module)\r\n\r\n~\\Anaconda3\\lib\\importlib\\_bootstrap.py in _call_with_frames_removed(f, *args, **kwds)\r\n\r\n~\\Anaconda3\\lib\\site-packages\\transformers\\training_args.py in <module>\r\n 29 from .debug_utils import DebugOption\r\n---> 30 from .trainer_utils import (\r\n 31 EvaluationStrategy,\r\n\r\n~\\Anaconda3\\lib\\site-packages\\transformers\\trainer_utils.py in <module>\r\n 46 if is_tf_available():\r\n---> 47 import tensorflow as tf\r\n 48 \r\n\r\n~\\Anaconda3\\lib\\site-packages\\tensorflow\\__init__.py in <module>\r\n 40 \r\n---> 41 from tensorflow.python.tools import module_util as _module_util\r\n 42 from tensorflow.python.util.lazy_loader import LazyLoader as _LazyLoader\r\n\r\n~\\Anaconda3\\lib\\site-packages\\tensorflow\\python\\__init__.py in <module>\r\n 45 # Bring in subpackages.\r\n---> 46 from tensorflow.python import data\r\n 47 from tensorflow.python import distribute\r\n\r\n~\\Anaconda3\\lib\\site-packages\\tensorflow\\python\\data\\__init__.py in <module>\r\n 24 # pylint: disable=unused-import\r\n---> 25 from tensorflow.python.data import experimental\r\n 26 from tensorflow.python.data.ops.dataset_ops import AUTOTUNE\r\n\r\n~\\Anaconda3\\lib\\site-packages\\tensorflow\\python\\data\\experimental\\__init__.py in <module>\r\n 96 # pylint: disable=unused-import\r\n---> 97 from tensorflow.python.data.experimental import service\r\n 98 from tensorflow.python.data.experimental.ops.batching import dense_to_ragged_batch\r\n\r\n~\\Anaconda3\\lib\\site-packages\\tensorflow\\python\\data\\experimental\\service\\__init__.py in <module>\r\n 352 \r\n--> 353 from tensorflow.python.data.experimental.ops.data_service_ops import distribute\r\n 354 from tensorflow.python.data.experimental.ops.data_service_ops import from_dataset_id\r\n\r\n~\\Anaconda3\\lib\\site-packages\\tensorflow\\python\\data\\experimental\\ops\\data_service_ops.py in <module>\r\n 25 from tensorflow.python.compat import compat\r\n---> 26 from tensorflow.python.data.experimental.ops import compression_ops\r\n 27 from tensorflow.python.data.experimental.ops.distribute_options import AutoShardPolicy\r\n\r\n~\\Anaconda3\\lib\\site-packages\\tensorflow\\python\\data\\experimental\\ops\\compression_ops.py in <module>\r\n 19 \r\n---> 20 from tensorflow.python.data.util import structure\r\n 21 from tensorflow.python.ops import gen_experimental_dataset_ops as ged_ops\r\n\r\n~\\Anaconda3\\lib\\site-packages\\tensorflow\\python\\data\\util\\structure.py in <module>\r\n 25 \r\n---> 26 from tensorflow.python.data.util import nest\r\n 27 from tensorflow.python.framework import 
composite_tensor\r\n\r\n~\\Anaconda3\\lib\\site-packages\\tensorflow\\python\\data\\util\\nest.py in <module>\r\n 39 \r\n---> 40 from tensorflow.python.framework import sparse_tensor as _sparse_tensor\r\n 41 from tensorflow.python.util import _pywrap_utils\r\n\r\n~\\Anaconda3\\lib\\site-packages\\tensorflow\\python\\framework\\sparse_tensor.py in <module>\r\n 27 from tensorflow.python.framework import composite_tensor\r\n---> 28 from tensorflow.python.framework import constant_op\r\n 29 from tensorflow.python.framework import dtypes\r\n\r\n~\\Anaconda3\\lib\\site-packages\\tensorflow\\python\\framework\\constant_op.py in <module>\r\n 28 from tensorflow.python.eager import context\r\n---> 29 from tensorflow.python.eager import execute\r\n 30 from tensorflow.python.framework import dtypes\r\n\r\n~\\Anaconda3\\lib\\site-packages\\tensorflow\\python\\eager\\execute.py in <module>\r\n 26 from tensorflow.python.eager import core\r\n---> 27 from tensorflow.python.framework import dtypes\r\n 28 from tensorflow.python.framework import ops\r\n\r\n~\\Anaconda3\\lib\\site-packages\\tensorflow\\python\\framework\\dtypes.py in <module>\r\n 584 types_pb2.DT_STRING:\r\n--> 585 np.object,\r\n 586 types_pb2.DT_COMPLEX64:\r\n\r\n~\\Anaconda3\\lib\\site-packages\\numpy\\__init__.py in __getattr__(attr)\r\n 304 if attr in __former_attrs__:\r\n--> 305 raise AttributeError(__former_attrs__[attr])\r\n 306 \r\n\r\nAttributeError: module 'numpy' has no attribute 'object'.\r\n`np.object` was a deprecated alias for the builtin `object`. To avoid this error in existing code, use `object` by itself. Doing this will not modify any behavior and is safe. \r\nThe aliases was originally deprecated in NumPy 1.20; for more details and guidance see the original release note at:\r\n https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nRuntimeError Traceback (most recent call last)\r\n<ipython-input-15-e0222726b472> in <module>\r\n----> 1 from transformers import TrainingArguments\r\n 2 \r\n 3 training_args = TrainingArguments(output_dir=\"test_trainer\")\r\n\r\n~\\Anaconda3\\lib\\importlib\\_bootstrap.py in _handle_fromlist(module, fromlist, import_, recursive)\r\n\r\n~\\Anaconda3\\lib\\site-packages\\transformers\\utils\\import_utils.py in __getattr__(self, name)\r\n 1114 value = self._get_module(name)\r\n 1115 elif name in self._class_to_module.keys():\r\n-> 1116 module = self._get_module(self._class_to_module[name])\r\n 1117 value = getattr(module, name)\r\n 1118 else:\r\n\r\n~\\Anaconda3\\lib\\site-packages\\transformers\\utils\\import_utils.py in _get_module(self, module_name)\r\n 1126 return importlib.import_module(\".\" + module_name, self.__name__)\r\n 1127 except Exception as e:\r\n-> 1128 raise RuntimeError(\r\n 1129 f\"Failed to import {self.__name__}.{module_name} because of the following error (look up to see its\"\r\n 1130 f\" traceback):\\n{e}\"\r\n\r\nRuntimeError: Failed to import transformers.training_args because of the following error (look up to see its traceback):\r\nmodule 'numpy' has no attribute 'object'.\r\n`np.object` was a deprecated alias for the builtin `object`. To avoid this error in existing code, use `object` by itself. Doing this will not modify any behavior and is safe. \r\nThe aliases was originally deprecated in NumPy 1.20; for more details and guidance see the original release note at:\r\n https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations\r\n\r\n\r\nIts same error",
"The error still shows `tensorflow` is in your environment.\r\n\r\nCould you show us the results of `transformers-cli env`, `pip show tensorflow` and `pip show tensorflow-cpu`",
"result of transformers-cli env\r\n\r\n- `transformers` version: 4.30.2\r\n- Platform: Windows-10-10.0.19041-SP0\r\n- Python version: 3.8.8\r\n- Huggingface_hub version: 0.15.1\r\n- Safetensors version: 0.3.1\r\n- PyTorch version (GPU?): 2.0.1+cpu (False)\r\n- Tensorflow version (GPU?): not installed (NA)\r\n- Flax version (CPU?/GPU?/TPU?): not installed (NA)\r\n- Jax version: not installed\r\n- JaxLib version: not installed\r\n- Using GPU in script?: <fill in>\r\n- Using distributed or parallel set-up in script?: <fill in>\r\n\r\n(hface) (base) D:\\>pip show tensorflow\r\nWARNING: Package(s) not found: tensorflow\r\n\r\n(hface) (base) D:\\>pip show tensorflow-cpu\r\nWARNING: Package(s) not found: tensorflow-cpu\r\n\r\n\r\n",
"Hmm. The TF detection logic is in the following block.\r\n\r\nhttps://github.com/huggingface/transformers/blob/cd4584e3c809bb9e1392ccd3fe38b40daba5519a/src/transformers/utils/import_utils.py#L144-L183\r\n\r\nYou env. might still have something listed in\r\n\r\nhttps://github.com/huggingface/transformers/blob/cd4584e3c809bb9e1392ccd3fe38b40daba5519a/src/transformers/utils/import_utils.py#L155-L165\r\n\r\nYou can either check each of them and uninstall if they appear. Otherwise much easier, you can try to set the env. varialbe `USE_TF` to `False`, either by `set USE_TF=0` or `export USE_TF=0`",
"I have set USE_TF = 0 \r\n\r\n%env USE_TF=0\r\nfrom transformers import AutoTokenizer, BartForConditionalGeneration, Trainer, TrainingArguments\r\nmodel = BartForConditionalGeneration.from_pretrained(\"facebook/bart-base\")\r\ntraining_args = TrainingArguments(\r\n output_dir='./results', # output directory\r\n num_train_epochs=3, # total number of training epochs\r\n per_device_train_batch_size=16, # batch size per device during training\r\n per_device_eval_batch_size=64, # batch size for evaluation\r\n warmup_steps=500, # number of warmup steps for learning rate scheduler\r\n weight_decay=0.01, # strength of weight decay\r\n logging_dir='./logs', # directory for storing logs\r\n logging_steps=10,\r\n)\r\n\r\ntrainer = Trainer(\r\n model=model, \r\n args=training_args, \r\n train_dataset=train_dataset, \r\n eval_dataset=val_dataset \r\n)\r\ntrainer.train()\r\n\r\nStill same error\r\n---------------------------------------------------------------------------\r\nAttributeError Traceback (most recent call last)\r\n<ipython-input-10-d3d13a2b0587> in <module>\r\n 17 \r\n 18 \r\n---> 19 trainer = Trainer(\r\n 20 model=model, # the instantiated Transformers model to be trained\r\n 21 args=training_args, # training arguments, defined above\r\n\r\n~\\Anaconda3\\lib\\site-packages\\transformers\\trainer.py in __init__(self, model, args, data_collator, train_dataset, eval_dataset, tokenizer, model_init, compute_metrics, callbacks, optimizers, preprocess_logits_for_metrics)\r\n 517 default_callbacks = DEFAULT_CALLBACKS + get_reporting_integration_callbacks(self.args.report_to)\r\n 518 callbacks = default_callbacks if callbacks is None else default_callbacks + callbacks\r\n--> 519 self.callback_handler = CallbackHandler(\r\n 520 callbacks, self.model, self.tokenizer, self.optimizer, self.lr_scheduler\r\n 521 )\r\n\r\n~\\Anaconda3\\lib\\site-packages\\transformers\\trainer_callback.py in __init__(self, callbacks, model, tokenizer, optimizer, lr_scheduler)\r\n 294 self.callbacks = []\r\n 295 for cb in callbacks:\r\n--> 296 self.add_callback(cb)\r\n 297 self.model = model\r\n 298 self.tokenizer = tokenizer\r\n\r\n~\\Anaconda3\\lib\\site-packages\\transformers\\trainer_callback.py in add_callback(self, callback)\r\n 311 \r\n 312 def add_callback(self, callback):\r\n--> 313 cb = callback() if isinstance(callback, type) else callback\r\n 314 cb_class = callback if isinstance(callback, type) else callback.__class__\r\n 315 if cb_class in [c.__class__ for c in self.callbacks]:\r\n\r\n~\\Anaconda3\\lib\\site-packages\\transformers\\integrations.py in __init__(self)\r\n 926 if not is_mlflow_available():\r\n 927 raise RuntimeError(\"MLflowCallback requires mlflow to be installed. 
Run `pip install mlflow`.\")\r\n--> 928 import mlflow\r\n 929 \r\n 930 self._MAX_PARAM_VAL_LENGTH = mlflow.utils.validation.MAX_PARAM_VAL_LENGTH\r\n\r\n~\\Anaconda3\\lib\\site-packages\\mlflow\\__init__.py in <module>\r\n 48 try:\r\n 49 # pylint: disable=unused-import\r\n---> 50 import mlflow.catboost as catboost # noqa: E402\r\n 51 import mlflow.fastai as fastai # noqa: E402\r\n 52 import mlflow.gluon as gluon # noqa: E402\r\n\r\n~\\Anaconda3\\lib\\site-packages\\mlflow\\catboost.py in <module>\r\n 22 \r\n 23 import mlflow\r\n---> 24 from mlflow import pyfunc\r\n 25 from mlflow.models import Model, ModelInputExample\r\n 26 from mlflow.models.model import MLMODEL_FILE_NAME\r\n\r\n~\\Anaconda3\\lib\\site-packages\\mlflow\\pyfunc\\__init__.py in <module>\r\n 217 from typing import Any, Union, List, Dict\r\n 218 import mlflow\r\n--> 219 import mlflow.pyfunc.model\r\n 220 import mlflow.pyfunc.utils\r\n 221 from mlflow.models import Model, ModelSignature, ModelInputExample\r\n\r\n~\\Anaconda3\\lib\\site-packages\\mlflow\\pyfunc\\model.py in <module>\r\n 15 import mlflow.utils\r\n 16 from mlflow.exceptions import MlflowException\r\n---> 17 from mlflow.models import Model\r\n 18 from mlflow.models.model import MLMODEL_FILE_NAME\r\n 19 from mlflow.protos.databricks_pb2 import INVALID_PARAMETER_VALUE\r\n\r\n~\\Anaconda3\\lib\\site-packages\\mlflow\\models\\__init__.py in <module>\r\n 24 from .model import Model\r\n 25 from .flavor_backend import FlavorBackend\r\n---> 26 from .signature import ModelSignature, infer_signature\r\n 27 from .utils import ModelInputExample\r\n 28 from ..utils.environment import infer_pip_requirements\r\n\r\n~\\Anaconda3\\lib\\site-packages\\mlflow\\models\\signature.py in <module>\r\n 10 import numpy as np\r\n 11 \r\n---> 12 from mlflow.types.schema import Schema\r\n 13 from mlflow.types.utils import _infer_schema\r\n 14 \r\n\r\n~\\Anaconda3\\lib\\site-packages\\mlflow\\types\\__init__.py in <module>\r\n 4 \"\"\"\r\n 5 \r\n----> 6 from .schema import DataType, ColSpec, Schema, TensorSpec\r\n 7 \r\n 8 __all__ = [\"Schema\", \"ColSpec\", \"DataType\", \"TensorSpec\"]\r\n\r\n~\\Anaconda3\\lib\\site-packages\\mlflow\\types\\schema.py in <module>\r\n 18 \r\n 19 \r\n---> 20 class DataType(Enum):\r\n 21 \"\"\"\r\n 22 MLflow data types.\r\n\r\n~\\Anaconda3\\lib\\site-packages\\mlflow\\types\\schema.py in DataType()\r\n 47 string = (6, np.dtype(\"str\"), \"StringType\", _pandas_string_type())\r\n 48 \"\"\"Text data.\"\"\"\r\n---> 49 binary = (7, np.dtype(\"bytes\"), \"BinaryType\", np.object)\r\n 50 \"\"\"Sequence of raw bytes.\"\"\"\r\n 51 datetime = (8, np.dtype(\"datetime64\"), \"TimestampType\")\r\n\r\n~\\Anaconda3\\lib\\site-packages\\numpy\\__init__.py in __getattr__(attr)\r\n 303 \r\n 304 if attr in __former_attrs__:\r\n--> 305 raise AttributeError(__former_attrs__[attr])\r\n 306 \r\n 307 # Importing Tester requires importing all of UnitTest which is not a\r\n\r\nAttributeError: module 'numpy' has no attribute 'object'.\r\n`np.object` was a deprecated alias for the builtin `object`. To avoid this error in existing code, use `object` by itself. Doing this will not modify any behavior and is safe. \r\nThe aliases was originally deprecated in NumPy 1.20; for more details and guidance see the original release note at:\r\n https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations\r\n\r\n",
"Try to set `report_to=\"none\"` in `training_args = TrainingArguments`. Your environment has `mlflow` installed which might use some deprecated `numpy` code. Or you can upgrade your `mflow` versions.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,688 | 1,691 | 1,691 |
NONE
| null |
### System Info
falcon-7b-instruct(url)
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
import torch
from transformers import AutoTokenizer, pipeline
# setup added for completeness; it mirrors the tiiuae/falcon-7b-instruct model card
model = "tiiuae/falcon-7b-instruct"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = pipeline("text-generation", model=model, tokenizer=tokenizer, torch_dtype=torch.bfloat16, trust_remote_code=True, device_map="auto")
sequences = pipeline(
"Write a poem about Valencia.",
max_length=200,
do_sample=True,
top_k=10,
num_return_sequences=1,
eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
print(f"Result: {seq['generated_text']}")
### Expected behavior
Hi,
while running transformers API models on my local machine, I am facing the error: Failed to import transformers.pipelines because of the following error (look up to see its traceback):
module 'numpy' has no attribute 'object'.
`np.object` was a deprecated alias for the builtin `object`. How can I fix this?
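A minimal sketch combining the maintainer suggestions from the comments above, assuming the deprecated `np.object` is only reached through the TensorFlow import path and the MLflow callback (`output_dir` is illustrative):
```python
import os

os.environ["USE_TF"] = "0"  # must be set before transformers is imported

from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="test_trainer",
    report_to="none",  # skip the MLflow (and other) reporting integrations installed in the env
)
```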
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24631/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24631/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24630
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24630/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24630/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24630/events
|
https://github.com/huggingface/transformers/issues/24630
| 1,785,223,260 |
I_kwDOCUB6oc5qaFhc
| 24,630 |
Loading GPT-Neo-2.7B has error
|
{
"login": "YIYANGCAI",
"id": 49231152,
"node_id": "MDQ6VXNlcjQ5MjMxMTUy",
"avatar_url": "https://avatars.githubusercontent.com/u/49231152?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/YIYANGCAI",
"html_url": "https://github.com/YIYANGCAI",
"followers_url": "https://api.github.com/users/YIYANGCAI/followers",
"following_url": "https://api.github.com/users/YIYANGCAI/following{/other_user}",
"gists_url": "https://api.github.com/users/YIYANGCAI/gists{/gist_id}",
"starred_url": "https://api.github.com/users/YIYANGCAI/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/YIYANGCAI/subscriptions",
"organizations_url": "https://api.github.com/users/YIYANGCAI/orgs",
"repos_url": "https://api.github.com/users/YIYANGCAI/repos",
"events_url": "https://api.github.com/users/YIYANGCAI/events{/privacy}",
"received_events_url": "https://api.github.com/users/YIYANGCAI/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @YIYANGCAI \r\n\r\nDo you have a checkpoint in your local machine with path `/models/gpt-neo-2.7B`?",
"yes, I pre-downloaded it at this path.",
"could you check if you have absolute path `/models/gpt-neo-2.7B` or you intend to use relative path?\r\n\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,688 | 1,691 | 1,691 |
NONE
| null |
### System Info
transformers==4.28.1, torch==1.13.1, dgx-A100, python=3.8.15
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
model_name = "/models/gpt-neo-2.7B"
model = AutoModelForCausalLM.from_pretrained(model_name, low_cpu_mem_usage=True)
```
Then I get the following error:
```
OSError: Unable to load weights from pytorch checkpoint file for '/models/gpt-neo-2.7B/pytorch_model.bin' at '/models/gpt-neo-2.7B/pytorch_model.bin'. If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True.
```
However, for gpt-neo-125m and gpt-neo-1.3b, no bug occurs.
Could you please help me with this issue? Many thanks!
### Expected behavior
Load the model successfully.
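A minimal debugging sketch, assuming the usual cause of this `OSError`: a wrong local path or a truncated/corrupted `pytorch_model.bin`, rather than an actual TensorFlow checkpoint (the path mirrors the report):
```python
import os
import torch

path = "/models/gpt-neo-2.7B"
ckpt = os.path.join(path, "pytorch_model.bin")
print(os.path.isfile(ckpt), os.path.getsize(ckpt) if os.path.isfile(ckpt) else "missing")

# If this raises, the checkpoint file itself is damaged and needs to be
# re-downloaded from the Hub (EleutherAI/gpt-neo-2.7B).
state_dict = torch.load(ckpt, map_location="cpu")
print(len(state_dict), "tensors loaded")
```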
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24630/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24630/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24629
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24629/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24629/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24629/events
|
https://github.com/huggingface/transformers/pull/24629
| 1,785,181,856 |
PR_kwDOCUB6oc5UeSA-
| 24,629 |
[`MPT`] Add MosaicML's `MPT` model to transformers
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@sgugger we think the PR is ready for a review 🙏 ! The logits tests pass with tolerance `1e-12` between the model on the Hub and ours. There is nothing to do on the Hub as the current code is perfectly backward compatible with their config and weights.\r\n\r\n```python\r\nimport torch\r\nfrom transformers import AutoModelForCausalLM, MptForCausalLM, AutoTokenizer, MptForCausalLM\r\n\r\nmodel_id = \"mosaicml/mpt-7b\"\r\ntok = AutoTokenizer.from_pretrained(model_id)\r\n\r\nmodel = MptForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map={\"\":1}, load_in_4bit=True)\r\nmodel_trust_remote_code = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map={\"\":0}, load_in_4bit=True, trust_remote_code=True)\r\n\r\noutputs_transformers = model(torch.LongTensor([[1, 2, 3, 4, 5]]).to(1), output_hidden_states=True)\r\noutputs_trust_remote_code = model_trust_remote_code(torch.LongTensor([[1, 2, 3, 4, 5]]).to(0))\r\n\r\nprint(torch.allclose(outputs_transformers.logits, outputs_trust_remote_code.logits.to(1), atol=1e-12, rtol=1e-12))\r\n>>> True\r\n```\r\n\r\nCurrently we don't support advanced features such as triton attention or custom init, hence we advise super users that want to use this feature to load the trust_remote_code model if they want to benefit from these features\r\n\r\ncc also @Narsil and @OlivierDehaene for TGI - I think things should work smoothly on your side",
"> cc also @Narsil and @OlivierDehaene for TGI - I think things should work smoothly on your side\r\n\r\nMPT is already supported actually. (No triton, nor flash either, because of alibi)"
] | 1,688 | 1,690 | 1,690 |
COLLABORATOR
| null |
# What does this PR do?
Fixes #23174
First questions:
- [ ] Should we keep the nested config for attention and init configs? Pros: backward, Cons: not what we usually do, can't modify on the fly, harder to maintain
- [ ] Should we keep flash attention or go with better transformers
- [ ] Do we want 100% backward compatibility
# TODOS :
- [ ] Properly setup the config
- [ ] Write a mapping to go from original mosaicml config to new config (since attribute names have to be changed)
- [ ] Design tests, clone the repo to `hf-internal-testing` since at the end we intend to remove the code from the hub. Test attention patterns, flash and triton
- [x] One model on file.
# Notes :
Tokenizer is the same as GPTNeoX, only has a fast version, adds sentinel tokens. We don't really need a custom config for this and should just always have these in the tokenizer config.
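For reference, a minimal usage sketch with the class introduced in this PR (the checkpoint and generation settings are illustrative):
```python
import torch
from transformers import AutoTokenizer, MptForCausalLM

model_id = "mosaicml/mpt-7b"
tok = AutoTokenizer.from_pretrained(model_id)
model = MptForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

inputs = tok("MosaicML's MPT is", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=20)
print(tok.decode(out[0], skip_special_tokens=True))
```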
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24629/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24629/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24629",
"html_url": "https://github.com/huggingface/transformers/pull/24629",
"diff_url": "https://github.com/huggingface/transformers/pull/24629.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24629.patch",
"merged_at": 1690288362000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24628
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24628/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24628/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24628/events
|
https://github.com/huggingface/transformers/issues/24628
| 1,785,081,677 |
I_kwDOCUB6oc5qZi9N
| 24,628 |
[i18n-<languageCode>] Translating docs to <languageName>
|
{
"login": "Everton-12",
"id": 137764672,
"node_id": "U_kgDOCDYfQA",
"avatar_url": "https://avatars.githubusercontent.com/u/137764672?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Everton-12",
"html_url": "https://github.com/Everton-12",
"followers_url": "https://api.github.com/users/Everton-12/followers",
"following_url": "https://api.github.com/users/Everton-12/following{/other_user}",
"gists_url": "https://api.github.com/users/Everton-12/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Everton-12/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Everton-12/subscriptions",
"organizations_url": "https://api.github.com/users/Everton-12/orgs",
"repos_url": "https://api.github.com/users/Everton-12/repos",
"events_url": "https://api.github.com/users/Everton-12/events{/privacy}",
"received_events_url": "https://api.github.com/users/Everton-12/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
}
] |
closed
| false | null |
[] |
[
"Hi @Everton-12, Could you make sure to fill in the template for this issue? At the moment there is no specified language. "
] | 1,688 | 1,689 | 1,689 |
NONE
| null |
<!--
Note: Please search to see if an issue already exists for the language you are trying to translate.
-->
Hi!
Let's bring the documentation to all the <languageName>-speaking community 🌐 (currently 0 out of 267 complete)
Who would want to translate? Please follow the 🤗 [TRANSLATING guide](https://github.com/huggingface/transformers/blob/main/docs/TRANSLATING.md). Here is a list of the files ready for translation. Let us know in this issue if you'd like to translate any, and we'll add your name to the list.
Some notes:
* Please translate using an informal tone (imagine you are talking with a friend about transformers 🤗).
* Please translate in a gender-neutral way.
* Add your translations to the folder called `<languageCode>` inside the [source folder](https://github.com/huggingface/transformers/tree/main/docs/source).
* Register your translation in `<languageCode>/_toctree.yml`; please follow the order of the [English version](https://github.com/huggingface/transformers/blob/main/docs/source/en/_toctree.yml).
* Once you're finished, open a pull request and tag this issue by including #issue-number in the description, where issue-number is the number of this issue. Please ping @ArthurZucker, @sgugger for review.
* 🙋 If you'd like others to help you with the translation, you can also post in the 🤗 [forums](https://discuss.huggingface.co/).
## Get Started section
- [ ] [index.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/index.md) https://github.com/huggingface/transformers/pull/20180
- [ ] [quicktour.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/quicktour.md) (waiting for initial PR to go through)
- [ ] [installation.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/installation.md).
## Tutorial section
- [ ] [pipeline_tutorial.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/pipeline_tutorial.md)
- [ ] [autoclass_tutorial.md](https://github.com/huggingface/transformers/blob/master/docs/source/autoclass_tutorial.md)
- [ ] [preprocessing.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/preprocessing.md)
- [ ] [training.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/training.md)
- [ ] [accelerate.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/accelerate.md)
- [ ] [model_sharing.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_sharing.md)
- [ ] [multilingual.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/multilingual.md)
<!--
Keep on adding more as you go 🔥
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24628/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24628/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24627
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24627/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24627/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24627/events
|
https://github.com/huggingface/transformers/pull/24627
| 1,784,726,741 |
PR_kwDOCUB6oc5UcshO
| 24,627 |
Create SECURITY.md
|
{
"login": "tarzzii",
"id": 70776116,
"node_id": "MDQ6VXNlcjcwNzc2MTE2",
"avatar_url": "https://avatars.githubusercontent.com/u/70776116?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tarzzii",
"html_url": "https://github.com/tarzzii",
"followers_url": "https://api.github.com/users/tarzzii/followers",
"following_url": "https://api.github.com/users/tarzzii/following{/other_user}",
"gists_url": "https://api.github.com/users/tarzzii/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tarzzii/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tarzzii/subscriptions",
"organizations_url": "https://api.github.com/users/tarzzii/orgs",
"repos_url": "https://api.github.com/users/tarzzii/repos",
"events_url": "https://api.github.com/users/tarzzii/events{/privacy}",
"received_events_url": "https://api.github.com/users/tarzzii/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @tarzzii, thanks for opening this PR. \r\n\r\nCould you fill out the PR description please? \r\n\r\nThe third box was checked - but I can't see any link to the relevant discussion. Could you add that too please? \r\nThe final box was checked, but I do not see any tests",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,688 | 1,691 | 1,691 |
NONE
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24627/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24627/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24627",
"html_url": "https://github.com/huggingface/transformers/pull/24627",
"diff_url": "https://github.com/huggingface/transformers/pull/24627.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24627.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/24626
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24626/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24626/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24626/events
|
https://github.com/huggingface/transformers/issues/24626
| 1,784,668,065 |
I_kwDOCUB6oc5qX9-h
| 24,626 |
Question about how to use the Trainer
|
{
"login": "fxb392",
"id": 40045460,
"node_id": "MDQ6VXNlcjQwMDQ1NDYw",
"avatar_url": "https://avatars.githubusercontent.com/u/40045460?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fxb392",
"html_url": "https://github.com/fxb392",
"followers_url": "https://api.github.com/users/fxb392/followers",
"following_url": "https://api.github.com/users/fxb392/following{/other_user}",
"gists_url": "https://api.github.com/users/fxb392/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fxb392/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fxb392/subscriptions",
"organizations_url": "https://api.github.com/users/fxb392/orgs",
"repos_url": "https://api.github.com/users/fxb392/repos",
"events_url": "https://api.github.com/users/fxb392/events{/privacy}",
"received_events_url": "https://api.github.com/users/fxb392/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @fxb392 \r\n\r\nIt's mostly for loading the optimizer's scheduler and other states. But it's also convient if you load a canonical model (say from the Hub) while instantiating a trainer but want to use other checkpoints.\r\n\r\nYou don'",
"OK,thanks for you guidence.Does this only means the trainer can conveniently load any checkpoints for train?",
"Yes, but you have to be careful to load the checkpoint which is saved by a trainer that loaded the same model type and the same model configuration.",
"Okay, I understand, thank you again."
] | 1,688 | 1,688 | 1,688 |
NONE
| null |
When the Trainer is instantiated, isn't the loaded model already passed in? Why can trainer.train(resume_from_checkpoint=checkpoint) still load the model from a saved checkpoint?
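To make the answer above concrete, here is a minimal sketch (the model name, `output_dir`, checkpoint path, and `train_dataset` are illustrative placeholders, not taken from this issue): the model passed when instantiating the `Trainer` defines the architecture and configuration, while `resume_from_checkpoint` additionally restores the saved weights, optimizer, and scheduler states from a previous run.
```python
from transformers import AutoModelForSequenceClassification, Trainer, TrainingArguments

# The model passed here defines the architecture/configuration the Trainer works with.
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")

args = TrainingArguments(output_dir="out", save_strategy="epoch")
# `train_dataset` is assumed to be an already-tokenized dataset prepared elsewhere.
trainer = Trainer(model=model, args=args, train_dataset=train_dataset)

# A first run writes checkpoints such as out/checkpoint-500 into output_dir:
# trainer.train()

# A later run can restore the weights, optimizer, and scheduler states from that
# checkpoint instead of starting from the freshly loaded model:
trainer.train(resume_from_checkpoint="out/checkpoint-500")
```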
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24626/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24626/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24625
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24625/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24625/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24625/events
|
https://github.com/huggingface/transformers/pull/24625
| 1,784,597,615 |
PR_kwDOCUB6oc5UcSPJ
| 24,625 |
🌐 [i18n-KO] Translated `model_summary.md` to Korean
|
{
"login": "0525hhgus",
"id": 47289574,
"node_id": "MDQ6VXNlcjQ3Mjg5NTc0",
"avatar_url": "https://avatars.githubusercontent.com/u/47289574?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/0525hhgus",
"html_url": "https://github.com/0525hhgus",
"followers_url": "https://api.github.com/users/0525hhgus/followers",
"following_url": "https://api.github.com/users/0525hhgus/following{/other_user}",
"gists_url": "https://api.github.com/users/0525hhgus/gists{/gist_id}",
"starred_url": "https://api.github.com/users/0525hhgus/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/0525hhgus/subscriptions",
"organizations_url": "https://api.github.com/users/0525hhgus/orgs",
"repos_url": "https://api.github.com/users/0525hhgus/repos",
"events_url": "https://api.github.com/users/0525hhgus/events{/privacy}",
"received_events_url": "https://api.github.com/users/0525hhgus/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks for translating the `model_summary` file! Feel free to mark as ready for review whenever you're ready 😄 ",
"> Thanks for translating the `model_summary` file! Feel free to mark as ready for review whenever you're ready 😄\r\n\r\nThank you for mentioning it! I changed the status to ready for review. \r\n\r\nMay you please review this PR? 😄 \r\n@sgugger, @ArthurZucker, @eunseojo"
] | 1,688 | 1,691 | 1,691 |
CONTRIBUTOR
| null |
<!-- Please title the PR "🌐 [i18n-KO] Translated `<your_file>.md` to Korean" -->
# What does this PR do?
Translated the `model_summary.md` file of the documentation to Korean 😄
Thank you in advance for your review!
Part of https://github.com/huggingface/transformers/issues/20179
<!-- This leaves a record in the main issue! When practicing with the PseudoLab repo, please remove it! :smile: -->
## Before reviewing
- [x] Check for missing / redundant translations (번역 누락/중복 검사)
- [x] Grammar Check (맞춤법 검사)
- [x] Review or Add new terms to glossary (용어 확인 및 추가)
- [x] Check Inline TOC (e.g. `[[lowercased-header]]`)
- [x] Check live-preview for gotchas (live-preview로 정상작동 확인)
## Who can review? (Initial)
<!-- 1. Please reveal the comment below requesting a review from the PseudoLab team members only after all the checks above are complete! -->
<!-- Team PseudoLab, may you please review this PR? -->
@0525hhgus, @KIHOON71, @sim-so, @gabrielwithappy, @HanNayeoniee, @wonhyeongseo, @jungnerd
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review? (Final)
<!-- 2. Please reveal the comment below requesting a review from Hugging Face staff only after the review with the PseudoLab team members is finished! -->
May you please review this PR?
@sgugger, @ArthurZucker, @eunseojo
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24625/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24625/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24625",
"html_url": "https://github.com/huggingface/transformers/pull/24625",
"diff_url": "https://github.com/huggingface/transformers/pull/24625.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24625.patch",
"merged_at": 1691598447000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24624
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24624/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24624/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24624/events
|
https://github.com/huggingface/transformers/issues/24624
| 1,784,584,099 |
I_kwDOCUB6oc5qXpej
| 24,624 |
LlamaForCausalLM returning prompt without answer
|
{
"login": "leweex95",
"id": 74991597,
"node_id": "MDQ6VXNlcjc0OTkxNTk3",
"avatar_url": "https://avatars.githubusercontent.com/u/74991597?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/leweex95",
"html_url": "https://github.com/leweex95",
"followers_url": "https://api.github.com/users/leweex95/followers",
"following_url": "https://api.github.com/users/leweex95/following{/other_user}",
"gists_url": "https://api.github.com/users/leweex95/gists{/gist_id}",
"starred_url": "https://api.github.com/users/leweex95/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/leweex95/subscriptions",
"organizations_url": "https://api.github.com/users/leweex95/orgs",
"repos_url": "https://api.github.com/users/leweex95/repos",
"events_url": "https://api.github.com/users/leweex95/events{/privacy}",
"received_events_url": "https://api.github.com/users/leweex95/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @leweex95, thanks for raising an issue! \r\n\r\nThis is a question best placed in our [forums](https://discuss.huggingface.co/). We try to reserve the github issues for feature requests and bug reports."
] | 1,688 | 1,688 | 1,688 |
NONE
| null |
### System Info
transformers: 4.30.2
Python: 3.9.17
OS: MacOS 13.3.1 (a)
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I took the code below from the official documentation on the HuggingFace website: https://huggingface.co/openlm-research/open_llama_13b_easylm, and slightly adapted it to match my use case, which is information extraction from unstructured text (named entity recognition) using LLMs.
```python
import torch
from transformers import LlamaTokenizer, LlamaForCausalLM
model_path = 'openlm-research/open_llama_13b'
tokenizer = LlamaTokenizer.from_pretrained(model_path)
model = LlamaForCausalLM.from_pretrained(
model_path,
torch_dtype=torch.float16,
device_map='auto',
)
prompt = "What are the named entities in the following text: 'The Moon revolves around the Earth for over 4 billion years.'"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
generation_output = model.generate(
input_ids=input_ids, max_new_tokens=32
)
answer = tokenizer.decode(generation_output[0])
```
This code returns the following output:
`"<s>What are the named entities in the following text: 'The Moon revolves around the Earth for over 4 billion years.'?\nThe named entities in the following text are:\nThe Moon revolves around the Earth over 4 billion years.\nThe Moon revolves around the"`
which is not what I was hoping for. Interestingly, when I use the default example shown in the documentation, i.e.
`prompt = 'Q: What is the largest animal?\nA:'`
I get the following output:
`'<s>Q: What is the largest animal?\nA: A whale.\nQ: What is the largest animal?\nA: A whale.\nQ: What is the largest animal?\nA: A whale'`
which is slightly better, although I don't quite understand how to keep the model from repeating itself.
### Expected behavior
The expected output would be something like:
`{"ASTRONOMICAL_NAME": "Moon", "ASTRONOMICAL_NAME": "Earth", "PERIOD": "4 billion years"}`
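As a side note on the repetition and prompt echo described above, here is a hedged sketch of common mitigations (the sampling values are illustrative, not a recommendation from the model authors): decoder-only models return the prompt followed by the continuation, so the prompt tokens can be sliced off before decoding, and repetition can be damped with `repetition_penalty` or `no_repeat_ngram_size`.
```python
import torch
from transformers import LlamaTokenizer, LlamaForCausalLM

model_path = "openlm-research/open_llama_13b"
tokenizer = LlamaTokenizer.from_pretrained(model_path)
model = LlamaForCausalLM.from_pretrained(model_path, torch_dtype=torch.float16, device_map="auto")

prompt = "Q: What is the largest animal?\nA:"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(model.device)

generation_output = model.generate(
    input_ids=input_ids,
    max_new_tokens=32,
    do_sample=True,           # sample instead of greedy decoding
    temperature=0.7,          # illustrative value
    repetition_penalty=1.2,   # penalise tokens that were already generated
    no_repeat_ngram_size=3,   # forbid repeating any 3-gram
)

# Decode only the newly generated tokens, not the echoed prompt.
answer = tokenizer.decode(generation_output[0][input_ids.shape[-1]:], skip_special_tokens=True)
print(answer)
```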
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24624/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24624/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24623
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24623/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24623/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24623/events
|
https://github.com/huggingface/transformers/issues/24623
| 1,784,569,744 |
I_kwDOCUB6oc5qXl-Q
| 24,623 |
Hi
|
{
"login": "Justsomeuser88",
"id": 138368056,
"node_id": "U_kgDOCD9UOA",
"avatar_url": "https://avatars.githubusercontent.com/u/138368056?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Justsomeuser88",
"html_url": "https://github.com/Justsomeuser88",
"followers_url": "https://api.github.com/users/Justsomeuser88/followers",
"following_url": "https://api.github.com/users/Justsomeuser88/following{/other_user}",
"gists_url": "https://api.github.com/users/Justsomeuser88/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Justsomeuser88/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Justsomeuser88/subscriptions",
"organizations_url": "https://api.github.com/users/Justsomeuser88/orgs",
"repos_url": "https://api.github.com/users/Justsomeuser88/repos",
"events_url": "https://api.github.com/users/Justsomeuser88/events{/privacy}",
"received_events_url": "https://api.github.com/users/Justsomeuser88/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[] | 1,688 | 1,688 | 1,688 |
NONE
| null |
Hello
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24623/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24623/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24622
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24622/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24622/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24622/events
|
https://github.com/huggingface/transformers/pull/24622
| 1,784,426,378 |
PR_kwDOCUB6oc5UbvNn
| 24,622 |
[Patch-t5-tokenizer] Patches the changes on T5 to make sure previous behaviour is still valid for beginnings of words
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Ran all the tests with `RUN_SLOW`, switch-ci is fixed",
"Until the added tokens are fixed, this will break the slow version that use extra ids, because by default we strip left and right..... So buggy ",
"The bug has always been in T5, but since some models were trained with the bugged T5, we will let the user decide whether or not they incorparate the change"
] | 1,688 | 1,689 | 1,689 |
COLLABORATOR
| null |
# What does this PR do?
There was a small typo that modified the behaviour in #24565; the tests were not able to catch it (#24569).
When a sentence did not start with a space, a space was added.
Before:
```python
>>> tokenizer.tokenize("Hello <extra_id_0>")
['_', '_Hello', '<extra_id_0>']
```
After:
```python
>>> tokenizer.tokenize("Hello <extra_id_0>")
['_Hello', '<extra_id_0>']
```
# Big bug: 35 models involved
Not only punctuation but anything after a special token is basically wrong... Let's ignore the fact that we also split when it's the beginning of a word, which is less important.
<img width="1020" alt="image" src="https://github.com/huggingface/transformers/assets/48595927/d805bd21-4f2a-411b-ad2b-754f4f69517c">
Tests were added as they were green before merging
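For readers who want to compare the two behaviours locally, here is a hedged sketch (the `legacy` flag and the `t5-small` checkpoint are assumptions about how the opt-in mentioned in the comments is exposed, not something stated in this PR):
```python
from transformers import T5Tokenizer

# Keep the pre-fix tokenization (spaces around special tokens handled the old way).
# The `legacy` keyword is assumed to be available in the transformers version in use.
legacy_tok = T5Tokenizer.from_pretrained("t5-small", legacy=True)
print(legacy_tok.tokenize("Hello <extra_id_0>"))

# Opt in to the corrected behaviour patched by this PR.
fixed_tok = T5Tokenizer.from_pretrained("t5-small", legacy=False)
print(fixed_tok.tokenize("Hello <extra_id_0>"))
```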
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24622/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24622/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24622",
"html_url": "https://github.com/huggingface/transformers/pull/24622",
"diff_url": "https://github.com/huggingface/transformers/pull/24622.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24622.patch",
"merged_at": 1689080541000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24621
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24621/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24621/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24621/events
|
https://github.com/huggingface/transformers/pull/24621
| 1,784,359,758 |
PR_kwDOCUB6oc5Ubgg6
| 24,621 |
Pop
|
{
"login": "jamesthesnake",
"id": 8227820,
"node_id": "MDQ6VXNlcjgyMjc4MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/8227820?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jamesthesnake",
"html_url": "https://github.com/jamesthesnake",
"followers_url": "https://api.github.com/users/jamesthesnake/followers",
"following_url": "https://api.github.com/users/jamesthesnake/following{/other_user}",
"gists_url": "https://api.github.com/users/jamesthesnake/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jamesthesnake/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jamesthesnake/subscriptions",
"organizations_url": "https://api.github.com/users/jamesthesnake/orgs",
"repos_url": "https://api.github.com/users/jamesthesnake/repos",
"events_url": "https://api.github.com/users/jamesthesnake/events{/privacy}",
"received_events_url": "https://api.github.com/users/jamesthesnake/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[] | 1,688 | 1,688 | 1,688 |
NONE
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24621/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24621/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24621",
"html_url": "https://github.com/huggingface/transformers/pull/24621",
"diff_url": "https://github.com/huggingface/transformers/pull/24621.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24621.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/24620
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24620/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24620/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24620/events
|
https://github.com/huggingface/transformers/issues/24620
| 1,784,218,346 |
I_kwDOCUB6oc5qWQLq
| 24,620 |
BART is not found 404
|
{
"login": "mahdiabdollahpour",
"id": 27064237,
"node_id": "MDQ6VXNlcjI3MDY0MjM3",
"avatar_url": "https://avatars.githubusercontent.com/u/27064237?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mahdiabdollahpour",
"html_url": "https://github.com/mahdiabdollahpour",
"followers_url": "https://api.github.com/users/mahdiabdollahpour/followers",
"following_url": "https://api.github.com/users/mahdiabdollahpour/following{/other_user}",
"gists_url": "https://api.github.com/users/mahdiabdollahpour/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mahdiabdollahpour/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mahdiabdollahpour/subscriptions",
"organizations_url": "https://api.github.com/users/mahdiabdollahpour/orgs",
"repos_url": "https://api.github.com/users/mahdiabdollahpour/repos",
"events_url": "https://api.github.com/users/mahdiabdollahpour/events{/privacy}",
"received_events_url": "https://api.github.com/users/mahdiabdollahpour/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Back up now! 🤗 "
] | 1,688 | 1,688 | 1,688 |
NONE
| null |
pages for BART models are not responding
e.g:
https://huggingface.co/facebook/bart-large-cnn
https://huggingface.co/facebook/bart-base
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24620/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24620/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24619
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24619/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24619/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24619/events
|
https://github.com/huggingface/transformers/issues/24619
| 1,784,194,190 |
I_kwDOCUB6oc5qWKSO
| 24,619 |
AutoTokenizer always tries to download from the hub even if the model is cached. Thus it fails to run in a Docker environment without SSL.
|
{
"login": "kalpesh22-21",
"id": 61782478,
"node_id": "MDQ6VXNlcjYxNzgyNDc4",
"avatar_url": "https://avatars.githubusercontent.com/u/61782478?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kalpesh22-21",
"html_url": "https://github.com/kalpesh22-21",
"followers_url": "https://api.github.com/users/kalpesh22-21/followers",
"following_url": "https://api.github.com/users/kalpesh22-21/following{/other_user}",
"gists_url": "https://api.github.com/users/kalpesh22-21/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kalpesh22-21/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kalpesh22-21/subscriptions",
"organizations_url": "https://api.github.com/users/kalpesh22-21/orgs",
"repos_url": "https://api.github.com/users/kalpesh22-21/repos",
"events_url": "https://api.github.com/users/kalpesh22-21/events{/privacy}",
"received_events_url": "https://api.github.com/users/kalpesh22-21/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hey! Thanks for reporting. Before diving a bit deeper, there is a `local_files_only` argument that you can set when calling from pretrained, which activated the `offline mode`. You can also set it using `TRANSFORMERS_OFFLINE=1`. Can you try with this? It was designed for specific cases like this one! ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,688 | 1,691 | 1,691 |
NONE
| null |
### System Info
python=3.9
transformers=4.30.2
### Who can help?
@ArthurZucker
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. Run `AutoTokenizer.from_pretrained("path_to_cached_snapshot_directory")`
This will throw an SSL error because there is no internet connection.
Error:
requests.exceptions.SSLError: HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /xlm-roberta-large/resolve/main/tokenizer_config.json (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1129)')))
Problem found in the /transformers/utils/hub.py file, in the `cached_file` function (around line 300). The problem is possibly here:
```python
# line 401
if _commit_hash is not None and not force_download:
    # If the file is cached under that commit hash, we return it directly.
    resolved_file = try_to_load_from_cache(
        path_or_repo_id, full_filename, cache_dir=cache_dir, revision=_commit_hash, repo_type=repo_type
    )
    if resolved_file is not None:
        if resolved_file is not _CACHED_NO_EXIST:
            return resolved_file
        elif not _raise_exceptions_for_missing_entries:
            return None
        else:
            raise EnvironmentError(f"Could not locate {full_filename} inside {path_or_repo_id}.")
```
The script only tries to load from the cache if a `_commit_hash` is provided, which will not be the case in the example above.
I tried to solve this internally; this might help:
```python
# line 401
if not force_download:
    # If the file is cached under that commit hash, we return it directly.
    resolved_file = try_to_load_from_cache(
        path_or_repo_id, full_filename, cache_dir=cache_dir, revision=_commit_hash, repo_type=repo_type
    )
    if resolved_file is not None:
        if resolved_file is not _CACHED_NO_EXIST:
            return resolved_file
        elif not _raise_exceptions_for_missing_entries:
            return None
        elif _commit_hash is not None:
            raise EnvironmentError(f"Could not locate {full_filename} inside {path_or_repo_id}.")
```
### Expected behavior
It should not re-download files that are already cached.
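A hedged sketch of the workaround mentioned in the comments (the model name is illustrative): skip the hub lookup entirely by forcing cache-only loading, either globally through the `TRANSFORMERS_OFFLINE` environment variable or per call with `local_files_only=True`.
```python
import os

# Option 1: switch transformers to offline mode globally, before it is imported.
os.environ["TRANSFORMERS_OFFLINE"] = "1"

from transformers import AutoTokenizer

# Option 2: ask this specific call to use only files already present in the local cache
# (or pass a path to a local snapshot directory as the first argument).
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-large", local_files_only=True)
```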
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24619/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24619/timeline
|
completed
| null | null |