| column | dtype | stats |
|---|---|---|
| url | stringlengths | 62–66 |
| repository_url | stringclasses | 1 value |
| labels_url | stringlengths | 76–80 |
| comments_url | stringlengths | 71–75 |
| events_url | stringlengths | 69–73 |
| html_url | stringlengths | 50–56 |
| id | int64 | 377M–2.15B |
| node_id | stringlengths | 18–32 |
| number | int64 | 1–29.2k |
| title | stringlengths | 1–487 |
| user | dict | |
| labels | list | |
| state | stringclasses | 2 values |
| locked | bool | 2 classes |
| assignee | dict | |
| assignees | list | |
| comments | list | |
| created_at | int64 | 1.54k–1.71k |
| updated_at | int64 | 1.54k–1.71k |
| closed_at | int64 | 1.54k–1.71k |
| author_association | stringclasses | 4 values |
| active_lock_reason | stringclasses | 2 values |
| body | stringlengths | 0–234k |
| reactions | dict | |
| timeline_url | stringlengths | 71–75 |
| state_reason | stringclasses | 3 values |
| draft | bool | 2 classes |
| pull_request | dict | |
https://api.github.com/repos/huggingface/transformers/issues/24315
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24315/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24315/comments
https://api.github.com/repos/huggingface/transformers/issues/24315/events
https://github.com/huggingface/transformers/issues/24315
1,760,134,393
I_kwDOCUB6oc5o6YT5
24,315
CUDA out of memory when using DistilBERT for inference with hidden_state as inputs_embeds
{ "login": "TOP-RX", "id": 103393767, "node_id": "U_kgDOBimp5w", "avatar_url": "https://avatars.githubusercontent.com/u/103393767?v=4", "gravatar_id": "", "url": "https://api.github.com/users/TOP-RX", "html_url": "https://github.com/TOP-RX", "followers_url": "https://api.github.com/users/TOP-RX/followers", "following_url": "https://api.github.com/users/TOP-RX/following{/other_user}", "gists_url": "https://api.github.com/users/TOP-RX/gists{/gist_id}", "starred_url": "https://api.github.com/users/TOP-RX/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/TOP-RX/subscriptions", "organizations_url": "https://api.github.com/users/TOP-RX/orgs", "repos_url": "https://api.github.com/users/TOP-RX/repos", "events_url": "https://api.github.com/users/TOP-RX/events{/privacy}", "received_events_url": "https://api.github.com/users/TOP-RX/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @TOP-RX, \r\n\r\nA few things from first looking at your script: \r\n\r\n* tokenizers already work on batches. There's no need to pass line by line and then concatenate\r\n\r\n```python\r\nencoder_inputs = tokenizer(node_text_list, max_length=32, truncation=True, return_tensors=\"pt\")\r\ninput_ids = encoder_inputs[\"input_ids\"]\r\nattention_mask = encoder_inputs[\"attention_mask\"]\r\n```\r\n\r\n* Creating the dataset like this means that ALL of your data is read into memory (although not GPU) and converted to pytorch arrays at once. Consider loading using `datasets` some [info here](https://huggingface.co/docs/datasets/nlp_load) and tokenizing as a map function applied to the dataset. e.g. [like here](https://huggingface.co/docs/datasets/nlp_process). I highly recommend looking at the [scripts in `examples`](https://github.com/huggingface/transformers/tree/main/examples/pytorch) to see how best to structure training pipelines. \r\n\r\n* Batch size\r\nYou mention it fails at any batch size - is this true for batch_size=1? The batch size in this script (128) is quite large\r\n\r\n* Increasing memory usage\r\nDoes this happen if you just do one forward pass i.e. passing a single batch with no for loop? \r\nI'd guess some of the memory issues are coming from `hidden_states_list`, which increases in size and the tensors I believe are still on the cuda device. ", "Hi @amyeroberts ,\r\n\r\nThanks so much for your reply and advices! I was following your suggestion and found the issue happened with `hidden_states_list.append(hidden_states)` even I used batch size =1, the GPU memory accumulates for each batch to cause the out of memory problem after several batches.\r\n\r\nI am also wondering if I could have some suggestions from you about another issue(ignore some unrelated parts):\r\n```\r\nclass Net(nn.Module):\r\n def forward(hidden_state, mask):\r\n distilbert_output = self.distilbert(inputs_embeds=hidden_state, attention_mask=mask, return_dict=False)\r\n hidden_state = distilbert_output[0] \r\n pooled_output = hidden_state[:, 0] \r\n x = pooled_output\r\n ### classification layers\r\n\r\n# Remove unnecessary layers from BERT\r\nmodel = Net()\r\nnum_removed_layers = 1 # Specify the number of layers to remove\r\nencoder_layers = model.distilbert.transformer.layer[-num_removed_layers:]\r\nmodel.distilbert.transformer.layer = nn.ModuleList(encoder_layers)\r\n```\r\n1. Since I just want to use previous layers as encoder not tuning, and only tune the last layer (as code shown, I remove the layers which were used to generate the hidden states)plus my own model to save memory, Is this a correct way to directly use hidden_state I got as inputs_embeds for a Bert/DistillBert? And it seems if I use input_ids is fine but when I use `inputs_embeds=hidden_state` with the same settings, it will be out of memory. \r\n2. I found no matter how many layers I removed, the GPU memory usage is almost same, it this normal?\r\n\r\nThanks so much!", "@TOP-RX \r\n\r\n> I was following your suggestion and found the issue happened with hidden_states_list.append(hidden_states) even I used batch size =1\r\n\r\nOK, this indicates the issue is a result of the script and not code relating to transformers. \r\nQuestions about debugging custom training objectives or scripts are best placed in our [forums](https://discuss.huggingface.co/). We try to reserve the github issues for feature requests and bug reports.\r\n\r\nRegarding your questions: \r\n\r\n1. Same as above - this is a question for our forums :) \r\n2. 
Without more specifics about the model, GPU utilisation vs. layers and what \"almost the same\" means, it's not possible to help. If you suspect this is a bug, then please open another separate issue giving information about how to reproduce and expected behaviour. ", "You are collecting the outputs (here `hidden_states_list`) for a whole dataset (e.g. `for input_id, mask in pre_loader:`). This is not going to work well. You should find a way to save it to some storage like disk with some tools (probably there is some in torch).", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,686
1,690
1,690
NONE
null
### System Info transformer: 4.24.0 python: 3.8.13 ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Hello, since the memory cost of tuning the whole distillbert is still large for me, I am trying to get the last hidden states of a modified distillbert which only has first 5 layers, then use it as the input for a distillbert which only has the last 1 transformer layers and fine tune it (only fine tune the last transformer layer to save memory). **But when I try to get the hidden_states, it will always be out of memory whatever batch size I use.** Since I can use my current GPU to run even Bertseqclassfiication, I think the GPU memory is sufficient. Could I have some help about this? Besides, **I am not sure if I can use hidden_state directly as input_embeds for another distillbert (in my case, it's a one transformer layer distillbert)** I want to fine tune, here is my current code, please let me know if I am not correct: ``` with open('/X.all.txt', "r") as fin: node_text_list = fin.readlines() model_path = r'/distilbert-base-uncased' tokenizer = DistilBertTokenizer.from_pretrained(model_path) X = [tokenizer(text, padding='max_length', max_length=32, truncation=True, return_tensors="pt") for text in node_text_list]` input_ids = [] attention_mask = [] for i in range(len(X)): input_ids.append(X[i]['input_ids']) attention_mask.append(X[i]['attention_mask']) input_ids = torch.stack(input_ids).squeeze(1) attention_mask = torch.stack(attention_mask).squeeze(1) data_set = torch.utils.data.TensorDataset( input_ids, attention_mask ) # Load the first BERT model pre_loader = torch.utils.data.DataLoader( data_set, batch_size=128, num_workers=0, pin_memory=True ) model_pretrain = DistilBertModel.from_pretrained(args.pretrain) model_pretrain.transformer.layer = model_pretrain.transformer.layer[:5] hidden_states_list = [] model_pretrain = model_pretrain.to(device) for param in model_pretrain.parameters(): param.requires_grad = False # Pass the input through the first BERT model for input_id, mask in pre_loader: with torch.no_grad(): input_id = input_id.to(device) mask = mask.to(device) outputs = model_pretrain(input_id, mask, return_dict=False) ###out of memory here hidden_states = outputs[0] # Remove the memory usage of the first BERT model del outputs # Append the hidden states to the list hidden_states_list.append(hidden_states) # Concatenate the hidden states along the batch dimension pretrain_hidden_states = torch.cat(hidden_states_list, dim=0) model = DistilBertModel.from_pretrained(pretrain) # Remove unnecessary layers from BERT num_removed_layers = 1 # Specify the number of layers to remove encoder_layers = model.transformer.layer[-num_removed_layers:] model.transformer.layer = nn.ModuleList(encoder_layers) ### new sampling train_set = torch.utils.data.TensorDataset( pretrain_hidden_states, data.attention_mask ) train_loader = torch.utils.data.DataLoader( train_set, batch_size=128, num_workers=0, pin_memory=True ) for hidden_state, mask in train_loader: distilbert_output = model(inputs_embeds=hidden_state, attention_mask=mask, return_dict=False) hidden_state = distilbert_output[0] pooled_output = hidden_state[:, 0] x = pooled_output ### classification layers ``` ### Expected behavior CUDA should not be out of memory
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24315/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24315/timeline
completed
null
null
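A minimal sketch of the approach suggested in the comments on the issue above: tokenize the whole list in one call and move each batch's hidden states to the CPU before collecting them, so GPU memory does not grow with the number of batches. The file path and checkpoint name are placeholders, the layer-truncation step from the original script is omitted, and this is not the reporter's exact code.

```python
# Sketch only: batch tokenization + detaching hidden states to the CPU so the
# collected list does not keep GPU tensors alive across batches.
import torch
from transformers import DistilBertTokenizer, DistilBertModel

device = "cuda" if torch.cuda.is_available() else "cpu"
tokenizer = DistilBertTokenizer.from_pretrained("distilbert-base-uncased")
model = DistilBertModel.from_pretrained("distilbert-base-uncased").to(device).eval()

with open("X.all.txt") as fin:  # placeholder path
    texts = fin.readlines()

# Tokenizers accept the whole list at once; no per-line loop and torch.stack needed.
enc = tokenizer(texts, padding="max_length", max_length=32, truncation=True, return_tensors="pt")
loader = torch.utils.data.DataLoader(
    torch.utils.data.TensorDataset(enc["input_ids"], enc["attention_mask"]), batch_size=32
)

hidden_states = []
with torch.no_grad():
    for input_ids, mask in loader:
        out = model(input_ids.to(device), attention_mask=mask.to(device), return_dict=False)
        # .cpu() is the key step: keep only a CPU copy so GPU memory is freed each batch.
        hidden_states.append(out[0].cpu())
hidden_states = torch.cat(hidden_states, dim=0)
```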
https://api.github.com/repos/huggingface/transformers/issues/24314
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24314/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24314/comments
https://api.github.com/repos/huggingface/transformers/issues/24314/events
https://github.com/huggingface/transformers/issues/24314
1,759,975,827
I_kwDOCUB6oc5o5xmT
24,314
Device control characters lead to an error in average NER aggregation
{ "login": "edloginova", "id": 8640374, "node_id": "MDQ6VXNlcjg2NDAzNzQ=", "avatar_url": "https://avatars.githubusercontent.com/u/8640374?v=4", "gravatar_id": "", "url": "https://api.github.com/users/edloginova", "html_url": "https://github.com/edloginova", "followers_url": "https://api.github.com/users/edloginova/followers", "following_url": "https://api.github.com/users/edloginova/following{/other_user}", "gists_url": "https://api.github.com/users/edloginova/gists{/gist_id}", "starred_url": "https://api.github.com/users/edloginova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/edloginova/subscriptions", "organizations_url": "https://api.github.com/users/edloginova/orgs", "repos_url": "https://api.github.com/users/edloginova/repos", "events_url": "https://api.github.com/users/edloginova/events{/privacy}", "received_events_url": "https://api.github.com/users/edloginova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @Narsil @ArthurZucker ", "Created a fix PR." ]
1,686
1,687
1,687
NONE
null
### System Info Python 3.8.10, transformers versions tried: 4.30.1 and 4.30.2. Tried in a Docker container with Linux Ubuntu 20.04 and on Google Colab. ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction 1. Load a pretrained language model as NER pipeline with average aggregation 2. Run it on a sequence `"\x11\x11"` Colab: https://colab.research.google.com/drive/1Xm2kFAIsb1vt8R8JvdiLitDZ2m6WszcP?usp=sharing The error happens on line 336 of `transformers\pipelines\token_classification.py`, in `aggregate_word()` function: `word = self.tokenizer.convert_tokens_to_string([entity["word"] for entity in entities])`. There, entities remains `None` while it should be `[]`; it seems to happen because in `aggregate()` function (same file), when `aggregation_strategy` is set to average, we call `aggregate_words()` and there `entities` is `[]`, so we end up with `word_group` as still `None`, and that gets passed to `aggregate_word`. ### Expected behavior The aggregations runs without any errors and produces an empty list.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24314/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24314/timeline
completed
null
null
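A minimal reproduction sketch of the failure reported above, assuming an arbitrary public token-classification checkpoint (the model name below is only an example, not one named in the report). With `aggregation_strategy="average"`, a string of device control characters reaches `aggregate_word()` with no entities and raises instead of returning an empty list.

```python
from transformers import pipeline

# Example checkpoint; any NER model should exercise the same aggregation code path.
ner = pipeline(
    "token-classification",
    model="dslim/bert-base-NER",
    aggregation_strategy="average",
)

# Expected: an empty list. Reported: an error in aggregate_word() because the
# word group passed to it is still None rather than [].
print(ner("\x11\x11"))
```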
https://api.github.com/repos/huggingface/transformers/issues/24313
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24313/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24313/comments
https://api.github.com/repos/huggingface/transformers/issues/24313/events
https://github.com/huggingface/transformers/issues/24313
1,759,958,299
I_kwDOCUB6oc5o5tUb
24,313
Deepspeed ZeRO2 + Trainer does not resume training after evaluation
{ "login": "larrylawl", "id": 40198156, "node_id": "MDQ6VXNlcjQwMTk4MTU2", "avatar_url": "https://avatars.githubusercontent.com/u/40198156?v=4", "gravatar_id": "", "url": "https://api.github.com/users/larrylawl", "html_url": "https://github.com/larrylawl", "followers_url": "https://api.github.com/users/larrylawl/followers", "following_url": "https://api.github.com/users/larrylawl/following{/other_user}", "gists_url": "https://api.github.com/users/larrylawl/gists{/gist_id}", "starred_url": "https://api.github.com/users/larrylawl/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/larrylawl/subscriptions", "organizations_url": "https://api.github.com/users/larrylawl/orgs", "repos_url": "https://api.github.com/users/larrylawl/repos", "events_url": "https://api.github.com/users/larrylawl/events{/privacy}", "received_events_url": "https://api.github.com/users/larrylawl/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "When I killed the process, it gives the log \r\n\r\n```\r\n^[[A^[[A^[[A^[[A^[[A^C[2023-06-16 08:32:00,972] [INFO] [launch.py:318:sigkill_handler] Killing subprocess 1947071\r\nTraceback (most recent call last):\r\n File \"/home/users/industry/dso/lannliat/.local/bin/deepspeed\", line 6, in <module>\r\n[2023-06-16 08:32:01,072] [INFO] [launch.py:318:sigkill_handler] Killing subprocess 1947071\r\n main() \r\n File \"/home/users/industry/dso/lannliat/.local/lib/python3.10/site-packages/deepspeed/launcher/runner.py\", line 565, in main\r\n result.wait()\r\n File \"/usr/lib/python3.10/subprocess.py\", line 1207, in wait\r\n return self._wait(timeout=timeout)\r\n File \"/usr/lib/python3.10/subprocess.py\", line 1941, in _wait\r\n (pid, sts) = self._try_wait(0)\r\n File \"/usr/lib/python3.10/subprocess.py\", line 1899, in _try_wait\r\n (pid, sts) = os.waitpid(self.pid, wait_flags)\r\n```\r\n\r\nSeems like the process is waiting.", "cc @pacman100 ", "Hello @larrylawl, can you provide a minimal reproducible example? The above example is very involved with a lot of dependencies like flash-attn ... \r\n\r\nWhen I run the following official example, everything is working fine:\r\n\r\n```\r\ncd transformers\r\nexport TASK_NAME=mrpc\r\nexport CUDA_VISIBLE_DEVICES=\"0,1\"\r\n\r\ntorchrun --nnodes 1 --nproc-per-node 2 ./examples/pytorch/text-classification/run_glue.py --model_name_or_path bert-base-cased --task_name $TASK_NAME --do_train --do_eval --max_seq_length 128 --per_device_train_batch_size 16 --learning_rate 5e-5 --num_train_epochs 3 --output_dir /tmp/$TASK_NAME/ --overwrite_output_dir --deepspeed ~/transformers/tests/deepspeed/ds_config_zero2.json --save_steps 10 --evaluation_strategy \"steps\" --eval_steps 10\r\n```\r\n\r\noutput:\r\n```\r\n[INFO|trainer.py:1682] 2023-06-16 13:14:12,536 >> ***** Running training *****\r\n[INFO|trainer.py:1683] 2023-06-16 13:14:12,536 >> Num examples = 3,668\r\n[INFO|trainer.py:1684] 2023-06-16 13:14:12,536 >> Num Epochs = 3\r\n[INFO|trainer.py:1685] 2023-06-16 13:14:12,536 >> Instantaneous batch size per device = 16\r\n[INFO|trainer.py:1686] 2023-06-16 13:14:12,536 >> Total train batch size (w. parallel, distributed & accumulation) = 32\r\n[INFO|trainer.py:1687] 2023-06-16 13:14:12,536 >> Gradient Accumulation steps = 1\r\n[INFO|trainer.py:1688] 2023-06-16 13:14:12,536 >> Total optimization steps = 345\r\n[INFO|trainer.py:1689] 2023-06-16 13:14:12,537 >> Number of trainable parameters = 108,311,810\r\n[INFO|integrations.py:727] 2023-06-16 13:14:12,540 >> Automatic Weights & Biases logging enabled, to disable set os.environ[\"WANDB_DISABLED\"] = \"true\"\r\nwandb: Currently logged in as: smangrul. Use `wandb login --relogin` to force relogin\r\nwandb: wandb version 0.15.4 is available! To upgrade, please run:\r\nwandb: $ pip install wandb --upgrade\r\nwandb: Tracking run with wandb version 0.13.3\r\nwandb: Run data is saved locally in /home/sourab/transformers/wandb/run-20230616_131413-2fg1dtqg\r\nwandb: Run `wandb offline` to turn off syncing.\r\nwandb: Syncing run distinctive-puddle-305\r\nwandb: ⭐️ View project at https://wandb.ai/smangrul/huggingface\r\nwandb: 🚀 View run at https://wandb.ai/smangrul/huggingface/runs/2fg1dtqg\r\n 3%|█▊ | 10/345 [00:01<01:02, 5.40it/s][INFO|trainer.py:773] 2023-06-16 13:14:21,035 >> The following columns in the evaluation set don't have a corresponding argument in `BertForSequenceClassification.forward` and have been ignored: idx, sentence1, sentence2. 
If idx, sentence1, sentence2 are not expected by `BertForSequenceClassification.forward`, you can safely ignore this message.\r\n[INFO|trainer.py:3079] 2023-06-16 13:14:21,037 >> ***** Running Evaluation *****\r\n[INFO|trainer.py:3081] 2023-06-16 13:14:21,037 >> Num examples = 408\r\n[INFO|trainer.py:3084] 2023-06-16 13:14:21,037 >> Batch size = 8\r\n{'eval_loss': 0.6776408553123474, 'eval_accuracy': 0.6838235294117647, 'eval_f1': 0.8122270742358079, 'eval_combined_score': 0.7480253018237863, 'eval_runtime': 0.5202, 'eval_samples_per_second': 784.331, 'eval_steps_per_second': 49.982, 'epoch': 0.09}\r\n 3%|█▊ | 10/345 [00:02<01:02, 5.40it/s[INFO|trainer.py:2805] 2023-06-16 13:14:21,560 >> Saving model checkpoint to /tmp/mrpc/checkpoint-10 \r\n[INFO|configuration_utils.py:458] 2023-06-16 13:14:21,561 >> Configuration saved in /tmp/mrpc/checkpoint-10/config.json\r\n[INFO|modeling_utils.py:1844] 2023-06-16 13:14:22,280 >> Model weights saved in /tmp/mrpc/checkpoint-10/pytorch_model.bin\r\n[INFO|tokenization_utils_base.py:2194] 2023-06-16 13:14:22,280 >> tokenizer config file saved in /tmp/mrpc/checkpoint-10/tokenizer_config.json\r\n[INFO|tokenization_utils_base.py:2201] 2023-06-16 13:14:22,281 >> Special tokens file saved in /tmp/mrpc/checkpoint-10/special_tokens_map.json\r\n[2023-06-16 13:14:22,308] [INFO] [logging.py:96:log_dist] [Rank 0] [Torch] Checkpoint global_step10 is about to be saved!\r\n/home/sourab/miniconda3/envs/ml/lib/python3.10/site-packages/torch/nn/modules/module.py:1802: UserWarning: Positional args are being deprecated, use kwargs instead. Refer to https://pytorch.org/docs/master/generated/torch.nn.Module.html#torch.nn.Module.state_dict for details.\r\n warnings.warn(\r\n/home/sourab/miniconda3/envs/ml/lib/python3.10/site-packages/torch/nn/modules/module.py:1802: UserWarning: Positional args are being deprecated, use kwargs instead. Refer to https://pytorch.org/docs/master/generated/torch.nn.Module.html#torch.nn.Module.state_dict for details.\r\n warnings.warn(\r\n[2023-06-16 13:14:22,314] [INFO] [logging.py:96:log_dist] [Rank 0] Saving model checkpoint: /tmp/mrpc/checkpoint-10/global_step10/mp_rank_00_model_states.pt\r\n[2023-06-16 13:14:22,314] [INFO] [torch_checkpoint_engine.py:21:save] [Torch] Saving /tmp/mrpc/checkpoint-10/global_step10/mp_rank_00_model_states.pt...\r\n[2023-06-16 13:14:23,319] [INFO] [torch_checkpoint_engine.py:23:save] [Torch] Saved /tmp/mrpc/checkpoint-10/global_step10/mp_rank_00_model_states.pt.\r\n[2023-06-16 13:14:23,320] [INFO] [torch_checkpoint_engine.py:21:save] [Torch] Saving /tmp/mrpc/checkpoint-10/global_step10/zero_pp_rank_0_mp_rank_00_optim_states.pt...\r\n[2023-06-16 13:14:26,180] [INFO] [torch_checkpoint_engine.py:23:save] [Torch] Saved /tmp/mrpc/checkpoint-10/global_step10/zero_pp_rank_0_mp_rank_00_optim_states.pt.\r\n[2023-06-16 13:14:26,181] [INFO] [engine.py:3228:_save_zero_checkpoint] zero checkpoint saved /tmp/mrpc/checkpoint-10/global_step10/zero_pp_rank_0_mp_rank_00_optim_states.pt\r\n[2023-06-16 13:14:26,181] [INFO] [torch_checkpoint_engine.py:33:commit] [Torch] Checkpoint global_step10 is ready now!\r\n 6%|███▌ | 20/345 [00:08<01:20, 4.06it/s][INFO|trainer.py:773] 2023-06-16 13:14:28,075 >> The following columns in the evaluation set don't have a corresponding argument in `BertForSequenceClassification.forward` and have been ignored: idx, sentence1, sentence2. 
If idx, sentence1, sentence2 are not expected by `BertForSequenceClassification.forward`, you can safely ignore this message.\r\n[INFO|trainer.py:3079] 2023-06-16 13:14:28,077 >> ***** Running Evaluation *****\r\n[INFO|trainer.py:3081] 2023-06-16 13:14:28,077 >> Num examples = 408\r\n[INFO|trainer.py:3084] 2023-06-16 13:14:28,077 >> Batch size = 8\r\n{'eval_loss': 0.6481188535690308, 'eval_accuracy': 0.6838235294117647, 'eval_f1': 0.8122270742358079, 'eval_combined_score': 0.7480253018237863, 'eval_runtime': 0.5179, 'eval_samples_per_second': 787.721, 'eval_steps_per_second': 50.198, 'epoch': 0.17}\r\n \r\n...\r\n\r\n\r\n52%|███████████████████████████████▊ | 180/345 [02:11<00:44, 3.72it/s][INFO|trainer.py:773] 2023-06-16 13:16:30,237 >> The following columns in the evaluation set don't have a corresponding argument in `BertForSequenceClassification.forward` and have been ignored: idx, sentence1, sentence2. If idx, sentence1, sentence2 are not expected by `BertForSequenceClassification.forward`, you can safely ignore this message.\r\n[INFO|trainer.py:3079] 2023-06-16 13:16:30,239 >> ***** Running Evaluation *****\r\n[INFO|trainer.py:3081] 2023-06-16 13:16:30,239 >> Num examples = 408\r\n[INFO|trainer.py:3084] 2023-06-16 13:16:30,239 >> Batch size = 8\r\n{'eval_loss': 0.36363303661346436, 'eval_accuracy': 0.8186274509803921, 'eval_f1': 0.8673835125448028, 'eval_combined_score': 0.8430054817625975, 'eval_runtime': 0.5171, 'eval_samples_per_second': 789.05, 'eval_steps_per_second': 50.283, 'epoch': 1.57}\r\n\r\n```\r\n\r\n\r\n", "I encouter the same problem. Even with deepspeed and FSDP.\r\nIt feels like when model save its weights and stuck.", "Hello @Ricardokevins, please provide a minimal reproducible example. I clearly show above that things are working fine\r\n", "@Ricardokevins I hypothesise that it's a flash attention issue. It works fine with deepspeed only (for my case) and fsdp only (for @pacman100 )", "> @Ricardokevins I hypothesise that it's a flash attention issue. It works fine with deepspeed only (for my case) and fsdp only (for @pacman100 )\r\n\r\nThis issue may be quite complex and unusual. Initially, when I was training the code in Alpaca-Lora using DeepSpeed, I encountered this problem (training got stuck, and the GPU utilization of some GPUs remained at 0%). This was before using Flash-attn.\r\n\r\nSubsequently, I started training the code in FastChat using FSDP (which includes Flash-attn), and encountered similar issues.\r\n\r\nYesterday, I reinstalled all the environments, replaced the code with flash-attn from the FastChat issue, and started training using DeepSpeed. So far, I haven't encountered any problems.\r\nCurrently, I'm still unsure about the root cause of the issue, as I haven't faced it recently. If I encounter it again in the future, I will continue the discussion and seek your assistance. Thank you.", "@Ricardokevins Oh nice that you fixed it! Can I ask for some advice since I'm still facing the issue:\r\n- What do you mean by \"replaced the code with flash-attn from the FastChat issue\"? Did you mean [this patch from FastChat](https://github.com/lm-sys/FastChat/blob/main/fastchat/train/llama_flash_attn_monkey_patch.py)?\r\n- What's the cuda version of your system and environment? If you used a docker image, can you share which one worked for you?\r\n- Do you mind sharing your training loss curves? 
I'm facing a strange issue where my deepspeed + flash attention setting yielded very volatile curves...\r\n\r\n![image](https://github.com/huggingface/transformers/assets/40198156/e86acd43-4a4b-4587-9a5d-b56a3487ad04)\r\n\r\nBut FSDP + flash attention yielded smoother curves\r\n\r\n![image](https://github.com/huggingface/transformers/assets/40198156/772f8857-bdd3-4821-b5c4-6e1e73826858)\r\n", "> @Ricardokevins Oh nice that you fixed it! Can I ask for some advice since I'm still facing the issue:\r\n> \r\n> * What do you mean by \"replaced the code with flash-attn from the FastChat issue\"? Did you mean [this patch from FastChat](https://github.com/lm-sys/FastChat/blob/main/fastchat/train/llama_flash_attn_monkey_patch.py)?\r\n> * What's the cuda version of your system and environment? If you used a docker image, can you share which one worked for you?\r\n> * Do you mind sharing your training loss curves? I'm facing a strange issue where my deepspeed + flash attention setting yielded very volatile curves...\r\n> \r\n> ![image](https://user-images.githubusercontent.com/40198156/249803109-e86acd43-4a4b-4587-9a5d-b56a3487ad04.png)\r\n> \r\n> But FSDP + flash attention yielded smoother curves\r\n> \r\n> ![image](https://user-images.githubusercontent.com/40198156/249803545-772f8857-bdd3-4821-b5c4-6e1e73826858.png)\r\n\r\n1. i use the code from here: https://github.com/lm-sys/FastChat/commit/3adc92d405038d316a3cb908886261231b058590?diff=split\r\n2. cuda version 11.7\r\n<img width=\"862\" alt=\"image\" src=\"https://github.com/huggingface/transformers/assets/43642508/7b9bf3a7-98ea-4972-9142-ee25325575c2\">\r\n", "Thanks @Ricardokevins ! Btw I've fixed the issue by setting number of threads used for intraop parallelism to 1\r\n```\r\ntorch.set_num_threads(1) \r\n```\r\n\r\nThis [thread](https://discuss.pytorch.org/t/cpu-usage-far-too-high-and-training-inefficient/57228) explains why the above works.\r\n\r\nAlso, the Vicuna repo now supports [xformers](https://github.com/lm-sys/FastChat/pull/1255). FYI", "> Thanks @Ricardokevins ! Btw I've fixed the issue by setting number of threads used for intraop parallelism to 1\r\n> \r\n> ```\r\n> torch.set_num_threads(1) \r\n> ```\r\n> \r\n> This [thread](https://discuss.pytorch.org/t/cpu-usage-far-too-high-and-training-inefficient/57228) explains why the above works.\r\n> \r\n> Also, the Vicuna repo now supports [xformers](https://github.com/lm-sys/FastChat/pull/1255). FYI\r\n\r\nwow, i will try this if I encounter the problem again!", "> Thanks @Ricardokevins ! Btw I've fixed the issue by setting number of threads used for intraop parallelism to 1\r\n> \r\n> ```\r\n> torch.set_num_threads(1) \r\n> ```\r\n> \r\n> This [thread](https://discuss.pytorch.org/t/cpu-usage-far-too-high-and-training-inefficient/57228) explains why the above works.\r\n> \r\n> Also, the Vicuna repo now supports [xformers](https://github.com/lm-sys/FastChat/pull/1255). FYI\r\n\r\nHey,i try to use it,but it doesn't work" ]
1,686
1,688
1,688
NONE
null
### System Info - `transformers` version: 4.30.2 - Platform: Linux-4.18.0-305.25.1.el8_4.x86_64-x86_64-with-glibc2.35 - Python version: 3.10.6 - Huggingface_hub version: 0.15.1 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 2.0.0 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: Parallel ### Who can help? @pacman, @sgugger ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Steps to reproduce: 1. Use the [pytorch container from Nvidia](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/pytorch). 2. Pip install missing dependencies (IIRC: flash-attn, deepspeed, einops, transformers, accelerate). Note that for deepspeed, I had to use this [specific PR](https://github.com/microsoft/DeepSpeed/issues/3678) 3. Download ShareGPT dataset from huggingface [here](https://huggingface.co/datasets/Aeala/ShareGPT_Vicuna_unfiltered) 4. Run the script of my codebase [here](https://github.com/larrylawl/FastChat/blob/main/scripts/train_vicuna_13b_ds_debug.sh). You'll need to edit the filepaths. My codebase is a fork of FastChat. ### Expected behavior As you can see from my [log file](https://github.com/huggingface/transformers/files/11766324/train.log), the training gets stuck after coming out of evaluation. I expected the training script to continue.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24313/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24313/timeline
completed
null
null
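The workaround the reporter above eventually landed on was capping intra-op CPU parallelism before training starts. A small sketch; only the `torch.set_num_threads(1)` call comes from the discussion, the rest is illustrative.

```python
import torch

# Limit intra-op CPU threads before building the Trainer / launching training;
# the PyTorch forum thread linked in the comments explains how CPU
# over-subscription can stall distributed runs.
torch.set_num_threads(1)
print(torch.get_num_threads())  # 1
```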
https://api.github.com/repos/huggingface/transformers/issues/24312
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24312/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24312/comments
https://api.github.com/repos/huggingface/transformers/issues/24312/events
https://github.com/huggingface/transformers/pull/24312
1,759,703,885
PR_kwDOCUB6oc5TJH9T
24,312
Add stride/chunking to `TextClassificationPipeline`
{ "login": "boyleconnor", "id": 6520892, "node_id": "MDQ6VXNlcjY1MjA4OTI=", "avatar_url": "https://avatars.githubusercontent.com/u/6520892?v=4", "gravatar_id": "", "url": "https://api.github.com/users/boyleconnor", "html_url": "https://github.com/boyleconnor", "followers_url": "https://api.github.com/users/boyleconnor/followers", "following_url": "https://api.github.com/users/boyleconnor/following{/other_user}", "gists_url": "https://api.github.com/users/boyleconnor/gists{/gist_id}", "starred_url": "https://api.github.com/users/boyleconnor/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/boyleconnor/subscriptions", "organizations_url": "https://api.github.com/users/boyleconnor/orgs", "repos_url": "https://api.github.com/users/boyleconnor/repos", "events_url": "https://api.github.com/users/boyleconnor/events{/privacy}", "received_events_url": "https://api.github.com/users/boyleconnor/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I happened to implement this for another project, so I went straight to implementing a PR here (skipping opening an issue) to see if this would be a desirable feature for HF. I realized after opening this PR that I haven't finished updating the docstrings where appropriate.\r\n\r\nI could also use some guidance on when & where to return the result as a list vs. a list-of-lists; I'll admit I worked out [this section of code](https://github.com/huggingface/transformers/blob/52253ed1b1722a8e60e2df29ce0b1339dce07d9f/src/transformers/pipelines/text_classification.py#L222) mainly through trial-and-error with the existing test suite--not an ideal way to program.\r\n\r\nMuch of this code is copied/adapted from `TokenClassificationPipeline`. It looks like the changes also broke the legacy functionality of text pairs as shown in tests such as [`tests/models/bert/test_modeling_tf_bert.py::TFBertModelTest::test_pipeline_text_classification`](https://app.circleci.com/pipelines/github/huggingface/transformers/66591/workflows/85501b27-19f8-4dfe-a24e-242592463a89/jobs/829459); I think I can fix that if this new functionality turns out to be something the HuggingFace team wants to add.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24312). All of your documentation changes will be reflected on that endpoint.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,686
1,690
1,690
CONTRIBUTOR
null
# What does this PR do? Adds sliding window/chunking functionality to `TextClassificationPipeline` (similar to what #21771 did for `TokenClassificationPipeline`). ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @Narsil
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24312/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24312/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24312", "html_url": "https://github.com/huggingface/transformers/pull/24312", "diff_url": "https://github.com/huggingface/transformers/pull/24312.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24312.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/24311
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24311/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24311/comments
https://api.github.com/repos/huggingface/transformers/issues/24311/events
https://github.com/huggingface/transformers/pull/24311
1,759,679,692
PR_kwDOCUB6oc5TJC1q
24,311
Update MMS integration docs
{ "login": "vineelpratap", "id": 5282102, "node_id": "MDQ6VXNlcjUyODIxMDI=", "avatar_url": "https://avatars.githubusercontent.com/u/5282102?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vineelpratap", "html_url": "https://github.com/vineelpratap", "followers_url": "https://api.github.com/users/vineelpratap/followers", "following_url": "https://api.github.com/users/vineelpratap/following{/other_user}", "gists_url": "https://api.github.com/users/vineelpratap/gists{/gist_id}", "starred_url": "https://api.github.com/users/vineelpratap/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vineelpratap/subscriptions", "organizations_url": "https://api.github.com/users/vineelpratap/orgs", "repos_url": "https://api.github.com/users/vineelpratap/repos", "events_url": "https://api.github.com/users/vineelpratap/events{/privacy}", "received_events_url": "https://api.github.com/users/vineelpratap/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Thanks for the doc updates @vineelpratap!" ]
1,686
1,687
1,687
CONTRIBUTOR
null
# What does this PR do? Current MMS documentation is only focused on ASR. Update the doc to show examples for TTS, LID. cc. @patrickvonplaten @sanchit-gandhi
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24311/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24311/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24311", "html_url": "https://github.com/huggingface/transformers/pull/24311", "diff_url": "https://github.com/huggingface/transformers/pull/24311.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24311.patch", "merged_at": 1687182541000 }
https://api.github.com/repos/huggingface/transformers/issues/24310
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24310/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24310/comments
https://api.github.com/repos/huggingface/transformers/issues/24310/events
https://github.com/huggingface/transformers/pull/24310
1,759,509,793
PR_kwDOCUB6oc5TIebP
24,310
Tied weights load
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,686
1,687
1,686
COLLABORATOR
null
# What does this PR do? This continue cleaning up a bit the model loading by: 1. Using the new `_tied_weight_keys` class variable when deleting weights without warning for safetensors serialization 2. Fix the logic that deletes tied params in missing keys and add a test (which fails on main) 3. As discussed internally, use a logger.info for the unexepected keys warning when the class used to load the model does not match the class in the config.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24310/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24310/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24310", "html_url": "https://github.com/huggingface/transformers/pull/24310", "diff_url": "https://github.com/huggingface/transformers/pull/24310.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24310.patch", "merged_at": 1686927343000 }
https://api.github.com/repos/huggingface/transformers/issues/24309
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24309/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24309/comments
https://api.github.com/repos/huggingface/transformers/issues/24309/events
https://github.com/huggingface/transformers/issues/24309
1,759,338,302
I_kwDOCUB6oc5o3V8-
24,309
saving model fails with deepspeed
{ "login": "shahules786", "id": 25312635, "node_id": "MDQ6VXNlcjI1MzEyNjM1", "avatar_url": "https://avatars.githubusercontent.com/u/25312635?v=4", "gravatar_id": "", "url": "https://api.github.com/users/shahules786", "html_url": "https://github.com/shahules786", "followers_url": "https://api.github.com/users/shahules786/followers", "following_url": "https://api.github.com/users/shahules786/following{/other_user}", "gists_url": "https://api.github.com/users/shahules786/gists{/gist_id}", "starred_url": "https://api.github.com/users/shahules786/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shahules786/subscriptions", "organizations_url": "https://api.github.com/users/shahules786/orgs", "repos_url": "https://api.github.com/users/shahules786/repos", "events_url": "https://api.github.com/users/shahules786/events{/privacy}", "received_events_url": "https://api.github.com/users/shahules786/received_events", "type": "User", "site_admin": false }
[ { "id": 5616426447, "node_id": "LA_kwDOCUB6oc8AAAABTsPdzw", "url": "https://api.github.com/repos/huggingface/transformers/labels/solved", "name": "solved", "color": "B1D6DC", "default": false, "description": "" } ]
closed
false
null
[]
[ "Run it on 2 or more GPUs and it is working as expected.\r\n<img width=\"1439\" alt=\"Screenshot 2023-06-16 at 7 08 58 AM\" src=\"https://github.com/huggingface/transformers/assets/13534540/d2ed6d49-886a-4798-a493-81d1984b1f39\">\r\n", "So, the issue is that the model isn't getting wrapped in DeepSpeedEngine when run on a single GPU. Running on single GPU makes little sense to me with stage 2 because even with offloading, you don't get any considerable vram savings as the optimizer states and gradients in your case will be tiny as you are using PEFT.\r\n\r\nAs seen above optimizer state is 30MB compared to 11.2GB of Model", "It is working for single GPU for the official example scripts. So, some issue with your codebase.\r\n\r\n```\r\ncd transformers\r\nexport TASK_NAME=mrpc\r\nexport CUDA_VISIBLE_DEVICES=\"0,1\"\r\n\r\ntorchrun --nnodes 1 --nproc-per-node 1 ./examples/pytorch/text-classification/run_glue.py --model_name_or_path bert-base-cased --task_name $TASK_NAME --do_train --do_eval --max_seq_length 128 --per_device_train_batch_size 16 --learning_rate 5e-5 --num_train_epochs 3 --output_dir /tmp/$TASK_NAME/ --overwrite_output_dir --deepspeed ~/transformers/tests/deepspeed/ds_config_zero2.json --save_steps 10 --evaluation_strategy \"epoch\"\r\n```", "Okay, it is also working with your code but you need to use distributed launcher `torchrun`/`accelerate launch`/`deepspeed` instead of just `python`\r\n\r\ncommand I am running:\r\n```\r\ntorchrun --nproc-per-node 1 funtuner/trainer.py\r\n```", "\r\n<img width=\"1430\" alt=\"Screenshot 2023-06-16 at 7 18 46 AM\" src=\"https://github.com/huggingface/transformers/assets/13534540/91994c8c-4980-4155-97c6-17e05e04aff0\">\r\n", "Marking this as solved. Feel free to close this. Thank you for giving a clear reproducer with correct steps detailed avoiding a lot of back and forth; helping us resolve the issue faster.", "Thanks for your reply @pacman100 . But I'm still facing the same issue even with multiple GPUs. Doesn't Deepspeed automatically use all the available GPUs? I even tried with `torchrun` it gets errors on the same line. ", "No, it doesn't automatically use all the available GPUs", "As seen above, I'm able to save ckpts properly even with a single GPU when launching via torchrun. ", "Hey @pacman100, It was a mistake from my side (disk space was full), but the error didn't show up properly. It works fine now. You're the best :) " ]
1,686
1,686
1,686
NONE
null
### System Info System Info transformers v4.30.0 python 3.8 There is a bug [here](https://github.com/huggingface/transformers/blob/0b7b4429c78de68acaf72224eb6dae43616d820c/src/transformers/trainer.py#LL2257C59-L2257C59), No `PretrainedModel` does not have `save_checkpoint` method. Error trace ``` Traceback (most recent call last): File "funtuner/trainer.py", line 98, in train trainer.train() File "/nfshome/store03/users/c.scmse/venv/lib/python3.8/site-packages/transformers/trainer.py", line 1540, in train return inner_training_loop( File "/nfshome/store03/users/c.scmse/venv/lib/python3.8/site-packages/transformers/trainer.py", line 1884, in _inner_training_loop self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval) File "/nfshome/store03/users/c.scmse/venv/lib/python3.8/site-packages/transformers/trainer.py", line 2196, in _maybe_log_save_evaluate self._save_checkpoint(model, trial, metrics=metrics) File "/nfshome/store03/users/c.scmse/venv/lib/python3.8/site-packages/transformers/trainer.py", line 2257, in _save_checkpoint self.model_wrapped.save_checkpoint(output_dir) File "/nfshome/store03/users/c.scmse/venv/lib/python3.8/site-packages/peft/peft_model.py", line 289, in __getattr__ return getattr(self.base_model, name) File "/nfshome/store03/users/c.scmse/venv/lib/python3.8/site-packages/peft/tuners/lora.py", line 206, in __getattr__ return getattr(self.model, name) File "/nfshome/store03/users/c.scmse/venv/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1614, in __getattr__ raise AttributeError("'{}' object has no attribute '{}'".format( AttributeError: 'GPTNeoXForCausalLM' object has no attribute 'save_checkpoint' ``` ### Who can help? @pacman100 ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction My code is [here](https://github.com/explodinggradients/Funtuner/blob/main/funtuner/trainer.py) Run python3 funtuner/trainer.py - export PYTHONPATH="${PYTHONPATH}:/your-path/Funtuner" - please change the log_dir to your folder [here](https://github.com/explodinggradients/Funtuner/blob/c4e66209d5ee276a7eb8caf582435f1eaafbf18f/funtuner/config/config.yaml#L4) also you might want to set log_wandb=False - `dev-train` branch ### Expected behavior Please ensure that model training is running atleast 1000 steps without any errors.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24309/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24309/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24308
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24308/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24308/comments
https://api.github.com/repos/huggingface/transformers/issues/24308/events
https://github.com/huggingface/transformers/issues/24308
1,759,177,733
I_kwDOCUB6oc5o2uwF
24,308
TypeError: cannot pickle 'module' object
{ "login": "pathikg", "id": 55437218, "node_id": "MDQ6VXNlcjU1NDM3MjE4", "avatar_url": "https://avatars.githubusercontent.com/u/55437218?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pathikg", "html_url": "https://github.com/pathikg", "followers_url": "https://api.github.com/users/pathikg/followers", "following_url": "https://api.github.com/users/pathikg/following{/other_user}", "gists_url": "https://api.github.com/users/pathikg/gists{/gist_id}", "starred_url": "https://api.github.com/users/pathikg/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pathikg/subscriptions", "organizations_url": "https://api.github.com/users/pathikg/orgs", "repos_url": "https://api.github.com/users/pathikg/repos", "events_url": "https://api.github.com/users/pathikg/events{/privacy}", "received_events_url": "https://api.github.com/users/pathikg/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @sgugger ", "It is possible that either of those callbacks (MlFlow or Azure) is inserting something in the state that cannot be serialized with `pickle`. We do not maintain those integrations ourselves, so I suggest you ping the author of the callback making your code fail (after trying to remove one or the other) :-)", "Thanks @sgugger \r\nI commented `MLFlowCallBack()` added by @noise-field in [#8016](https://github.com/huggingface/transformers/pull/8016)\r\nand the code worked fine till 350 steps but I received a new error at the end due to `AzureMLCallback()` added by @davidefiocco in [#8062](https://github.com/huggingface/transformers/pull/8062#issue-729809630)\r\n\r\n```\r\n╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮\r\n│ in <module>:12 │\r\n│ │\r\n│ 9 │ tokenizer=image_processor, │\r\n│ 10 ) │\r\n│ 11 │\r\n│ ❱ 12 trainer.train() │\r\n│ 13 │\r\n│ │\r\n│ /anaconda/envs/azureml_py38/lib/python3.8/site-packages/transformers/trainer.py:1645 in train │\r\n│ │\r\n│ 1642 │ │ inner_training_loop = find_executable_batch_size( │\r\n│ 1643 │ │ │ self._inner_training_loop, self._train_batch_size, args.auto_find_batch_size │\r\n│ 1644 │ │ ) │\r\n│ ❱ 1645 │ │ return inner_training_loop( │\r\n│ 1646 │ │ │ args=args, │\r\n│ 1647 │ │ │ resume_from_checkpoint=resume_from_checkpoint, │\r\n│ 1648 │ │ │ trial=trial, │\r\n│ │\r\n│ /anaconda/envs/azureml_py38/lib/python3.8/site-packages/transformers/trainer.py:2011 in │\r\n│ _inner_training_loop │\r\n│ │\r\n│ 2008 │ │ │ │ │ self.state.epoch = epoch + (step + 1 + steps_skipped) / steps_in_epo │\r\n│ 2009 │ │ │ │ │ self.control = self.callback_handler.on_step_end(args, self.state, s │\r\n│ 2010 │ │ │ │ │ │\r\n│ ❱ 2011 │ │ │ │ │ self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_k │\r\n│ 2012 │ │ │ │ else: │\r\n│ 2013 │ │ │ │ │ self.control = self.callback_handler.on_substep_end(args, self.state │\r\n│ 2014 │\r\n│ │\r\n│ /anaconda/envs/azureml_py38/lib/python3.8/site-packages/transformers/trainer.py:2324 in │\r\n│ _maybe_log_save_evaluate │\r\n│ │\r\n│ 2321 │ │ │\r\n│ 2322 │ │ if self.control.should_save: │\r\n│ 2323 │ │ │ self._save_checkpoint(model, trial, metrics=metrics) │\r\n│ ❱ 2324 │ │ │ self.control = self.callback_handler.on_save(self.args, self.state, self.con │\r\n│ 2325 │ │\r\n│ 2326 │ def _load_rng_state(self, checkpoint): │\r\n│ 2327 │ │ # Load RNG states from `checkpoint` │\r\n│ │\r\n│ /anaconda/envs/azureml_py38/lib/python3.8/site-packages/transformers/trainer_callback.py:386 in │\r\n│ on_save │\r\n│ │\r\n│ 383 │ │\r\n│ 384 │ def on_save(self, args: TrainingArguments, state: TrainerState, control: TrainerCont │\r\n│ 385 │ │ control.should_save = False │\r\n│ ❱ 386 │ │ return self.call_event(\"on_save\", args, state, control) │\r\n│ 387 │ │\r\n│ 388 │ def on_log(self, args: TrainingArguments, state: TrainerState, control: TrainerContr │\r\n│ 389 │ │ control.should_log = False │\r\n│ │\r\n│ /anaconda/envs/azureml_py38/lib/python3.8/site-packages/transformers/trainer_callback.py:397 in │\r\n│ call_event │\r\n│ │\r\n│ 394 │ │\r\n│ 395 │ def call_event(self, event, args, state, control, **kwargs): │\r\n│ 396 │ │ for callback in self.callbacks: │\r\n│ ❱ 397 │ │ │ result = getattr(callback, event)( │\r\n│ 398 │ │ │ │ args, │\r\n│ 399 │ │ │ │ state, │\r\n│ 400 │ │ │ │ control, │\r\n│ │\r\n│ /anaconda/envs/azureml_py38/lib/python3.8/site-packages/transformers/integrations.py:1055 in │\r\n│ on_save │\r\n│ │\r\n│ 1052 │ │ │ ckpt_dir = f\"checkpoint-{state.global_step}\" │\r\n│ 
1053 │ │ │ artifact_path = os.path.join(args.output_dir, ckpt_dir) │\r\n│ 1054 │ │ │ logger.info(f\"Logging checkpoint artifacts in {ckpt_dir}. This may take time │\r\n│ ❱ 1055 │ │ │ self._ml_flow.pyfunc.log_model( │\r\n│ 1056 │ │ │ │ ckpt_dir, │\r\n│ 1057 │ │ │ │ artifacts={\"model_path\": artifact_path}, │\r\n│ 1058 │ │ │ │ python_model=self._ml_flow.pyfunc.PythonModel(), │\r\n│ │\r\n│ /anaconda/envs/azureml_py38/lib/python3.8/site-packages/mlflow/pyfunc/__init__.py:1578 in │\r\n│ log_model │\r\n│ │\r\n│ 1575 │ :return: A :py:class:`ModelInfo <mlflow.models.model.ModelInfo>` instance that conta │\r\n│ 1576 │ │ │ metadata of the logged model. │\r\n│ 1577 │ \"\"\" │\r\n│ ❱ 1578 │ return Model.log( │\r\n│ 1579 │ │ artifact_path=artifact_path, │\r\n│ 1580 │ │ flavor=mlflow.pyfunc, │\r\n│ 1581 │ │ loader_module=loader_module, │\r\n│ │\r\n│ /anaconda/envs/azureml_py38/lib/python3.8/site-packages/mlflow/models/model.py:487 in log │\r\n│ │\r\n│ 484 │ │ │ run_id = mlflow.tracking.fluent._get_or_start_run().info.run_id │\r\n│ 485 │ │ │ mlflow_model = cls(artifact_path=artifact_path, run_id=run_id, metadata=meta │\r\n│ 486 │ │ │ flavor.save_model(path=local_path, mlflow_model=mlflow_model, **kwargs) │\r\n│ ❱ 487 │ │ │ mlflow.tracking.fluent.log_artifacts(local_path, mlflow_model.artifact_path) │\r\n│ 488 │ │ │ try: │\r\n│ 489 │ │ │ │ mlflow.tracking.fluent._record_logged_model(mlflow_model) │\r\n│ 490 │ │ │ except MlflowException: │\r\n│ │\r\n│ /anaconda/envs/azureml_py38/lib/python3.8/site-packages/mlflow/tracking/fluent.py:810 in │\r\n│ log_artifacts │\r\n│ │\r\n│ 807 │ │ │ mlflow.log_artifacts(\"data\", artifact_path=\"states\") │\r\n│ 808 │ \"\"\" │\r\n│ 809 │ run_id = _get_or_start_run().info.run_id │\r\n│ ❱ 810 │ MlflowClient().log_artifacts(run_id, local_dir, artifact_path) │\r\n│ 811 │\r\n│ 812 │\r\n│ 813 def log_text(text: str, artifact_file: str) -> None: │\r\n│ │\r\n│ /anaconda/envs/azureml_py38/lib/python3.8/site-packages/mlflow/tracking/client.py:1048 in │\r\n│ log_artifacts │\r\n│ │\r\n│ 1045 │ │ │ artifact: states │\r\n│ 1046 │ │ │ is_dir: True │\r\n│ 1047 │ │ \"\"\" │\r\n│ ❱ 1048 │ │ self._tracking_client.log_artifacts(run_id, local_dir, artifact_path) │\r\n│ 1049 │ │\r\n│ 1050 │ @contextlib.contextmanager │\r\n│ 1051 │ def _log_artifact_helper(self, run_id, artifact_file): │\r\n│ │\r\n│ /anaconda/envs/azureml_py38/lib/python3.8/site-packages/mlflow/tracking/_tracking_service/client │\r\n│ .py:448 in log_artifacts │\r\n│ │\r\n│ 445 │ │ :param local_dir: Path to the directory of files to write. 
│\r\n│ 446 │ │ :param artifact_path: If provided, the directory in ``artifact_uri`` to write to │\r\n│ 447 │ │ \"\"\" │\r\n│ ❱ 448 │ │ self._get_artifact_repo(run_id).log_artifacts(local_dir, artifact_path) │\r\n│ 449 │ │\r\n│ 450 │ def list_artifacts(self, run_id, path=None): │\r\n│ 451 │ │ \"\"\" │\r\n│ │\r\n│ /anaconda/envs/azureml_py38/lib/python3.8/site-packages/azureml/mlflow/_store/artifact/artifact_ │\r\n│ repo.py:88 in log_artifacts │\r\n│ │\r\n│ 85 │ │ if artifact_path is None: │\r\n│ 86 │ │ │ dest_path = \"\" │\r\n│ 87 │ │ │\r\n│ ❱ 88 │ │ self.artifacts.upload_dir(local_dir, dest_path) │\r\n│ 89 │ │\r\n│ 90 │ def list_artifacts(self, path): │\r\n│ 91 │ │ \"\"\" │\r\n│ │\r\n│ /anaconda/envs/azureml_py38/lib/python3.8/site-packages/azureml/mlflow/_client/artifact/run_arti │\r\n│ fact_client.py:90 in upload_dir │\r\n│ │\r\n│ 87 │ │ │ │ local_paths.append(local_file_path) │\r\n│ 88 │ │ │\r\n│ 89 │ │ # Make batch request to create empty artifacts │\r\n│ ❱ 90 │ │ empty_artifact_res = self._create_empty_artifacts(paths=remote_paths, run_id=sel │\r\n│ 91 │ │ │\r\n│ 92 │ │ result = self._upload_files( │\r\n│ 93 │ │ │ local_paths=local_paths, remote_paths=remote_paths, empty_artifact_content=e │\r\n│ │\r\n│ /anaconda/envs/azureml_py38/lib/python3.8/site-packages/azureml/mlflow/_client/artifact/run_arti │\r\n│ fact_client.py:146 in _create_empty_artifacts │\r\n│ │\r\n│ 143 │ │ │\r\n│ 144 │ │ artifacts = [ArtifactPath(path=path) for path in paths] │\r\n│ 145 │ │ │\r\n│ ❱ 146 │ │ response = self._client.run_artifacts.batch_create_empty_artifacts( │\r\n│ 147 │ │ │ subscription_id=self._service_context.subscription_id, │\r\n│ 148 │ │ │ resource_group_name=self._service_context.resource_group_name, │\r\n│ 149 │ │ │ workspace_name=self._service_context.workspace_name, │\r\n│ │\r\n│ /anaconda/envs/azureml_py38/lib/python3.8/site-packages/azureml/mlflow/_restclient/run_artifact/ │\r\n│ operations/_run_artifacts_operations.py:1116 in batch_create_empty_artifacts │\r\n│ │\r\n│ 1113 │ │ │ body_content = None │\r\n│ 1114 │ │ body_content_kwargs['content'] = body_content │\r\n│ 1115 │ │ request = self._client.post(url, query_parameters, header_parameters, **body_con │\r\n│ ❱ 1116 │ │ pipeline_response = self._client._pipeline.run(request, stream=False, **kwargs) │\r\n│ 1117 │ │ response = pipeline_response.http_response │\r\n│ 1118 │ │ │\r\n│ 1119 │ │ if response.status_code not in [200]: │\r\n│ │\r\n│ /anaconda/envs/azureml_py38/lib/python3.8/site-packages/azure/core/pipeline/_base.py:205 in run │\r\n│ │\r\n│ 202 │ │ │ if self._impl_policies │\r\n│ 203 │ │ │ else _TransportRunner(self._transport) │\r\n│ 204 │ │ ) │\r\n│ ❱ 205 │ │ return first_node.send(pipeline_request) # type: ignore │\r\n│ 206 │\r\n│ │\r\n│ /anaconda/envs/azureml_py38/lib/python3.8/site-packages/azure/core/pipeline/_base.py:69 in send │\r\n│ │\r\n│ 66 │ │ \"\"\" │\r\n│ 67 │ │ _await_result(self._policy.on_request, request) │\r\n│ 68 │ │ try: │\r\n│ ❱ 69 │ │ │ response = self.next.send(request) │\r\n│ 70 │ │ except Exception: # pylint: disable=broad-except │\r\n│ 71 │ │ │ _await_result(self._policy.on_exception, request) │\r\n│ 72 │ │ │ raise │\r\n│ │\r\n│ /anaconda/envs/azureml_py38/lib/python3.8/site-packages/azure/core/pipeline/_base.py:69 in send │\r\n│ │\r\n│ 66 │ │ \"\"\" │\r\n│ 67 │ │ _await_result(self._policy.on_request, request) │\r\n│ 68 │ │ try: │\r\n│ ❱ 69 │ │ │ response = self.next.send(request) │\r\n│ 70 │ │ except Exception: # pylint: disable=broad-except │\r\n│ 71 │ │ │ _await_result(self._policy.on_exception, 
request) │\r\n│ 72 │ │ │ raise │\r\n│ │\r\n│ /anaconda/envs/azureml_py38/lib/python3.8/site-packages/azure/core/pipeline/_base.py:69 in send │\r\n│ │\r\n│ 66 │ │ \"\"\" │\r\n│ 67 │ │ _await_result(self._policy.on_request, request) │\r\n│ 68 │ │ try: │\r\n│ ❱ 69 │ │ │ response = self.next.send(request) │\r\n│ 70 │ │ except Exception: # pylint: disable=broad-except │\r\n│ 71 │ │ │ _await_result(self._policy.on_exception, request) │\r\n│ 72 │ │ │ raise │\r\n│ │\r\n│ /anaconda/envs/azureml_py38/lib/python3.8/site-packages/azure/core/pipeline/_base.py:69 in send │\r\n│ │\r\n│ 66 │ │ \"\"\" │\r\n│ 67 │ │ _await_result(self._policy.on_request, request) │\r\n│ 68 │ │ try: │\r\n│ ❱ 69 │ │ │ response = self.next.send(request) │\r\n│ 70 │ │ except Exception: # pylint: disable=broad-except │\r\n│ 71 │ │ │ _await_result(self._policy.on_exception, request) │\r\n│ 72 │ │ │ raise │\r\n│ │\r\n│ /anaconda/envs/azureml_py38/lib/python3.8/site-packages/azure/core/pipeline/_base.py:69 in send │\r\n│ │\r\n│ 66 │ │ \"\"\" │\r\n│ 67 │ │ _await_result(self._policy.on_request, request) │\r\n│ 68 │ │ try: │\r\n│ ❱ 69 │ │ │ response = self.next.send(request) │\r\n│ 70 │ │ except Exception: # pylint: disable=broad-except │\r\n│ 71 │ │ │ _await_result(self._policy.on_exception, request) │\r\n│ 72 │ │ │ raise │\r\n│ │\r\n│ /anaconda/envs/azureml_py38/lib/python3.8/site-packages/azure/mgmt/core/policies/_base.py:47 in │\r\n│ send │\r\n│ │\r\n│ 44 │ def send(self, request): │\r\n│ 45 │ │ # type: (PipelineRequest[HTTPRequestType], Any) -> PipelineResponse[HTTPRequestT │\r\n│ 46 │ │ http_request = request.http_request │\r\n│ ❱ 47 │ │ response = self.next.send(request) │\r\n│ 48 │ │ if response.http_response.status_code == 409: │\r\n│ 49 │ │ │ rp_name = self._check_rp_not_registered_err(response) │\r\n│ 50 │ │ │ if rp_name: │\r\n│ │\r\n│ /anaconda/envs/azureml_py38/lib/python3.8/site-packages/azure/core/pipeline/policies/_redirect.p │\r\n│ y:160 in send │\r\n│ │\r\n│ 157 │ │ retryable = True │\r\n│ 158 │ │ redirect_settings = self.configure_redirects(request.context.options) │\r\n│ 159 │ │ while retryable: │\r\n│ ❱ 160 │ │ │ response = self.next.send(request) │\r\n│ 161 │ │ │ redirect_location = self.get_redirect_location(response) │\r\n│ 162 │ │ │ if redirect_location and redirect_settings[\"allow\"]: │\r\n│ 163 │ │ │ │ retryable = self.increment( │\r\n│ │\r\n│ /anaconda/envs/azureml_py38/lib/python3.8/site-packages/azure/core/pipeline/policies/_retry.py:5 │\r\n│ 02 in send │\r\n│ │\r\n│ 499 │ │ │ │ │ │ else: │\r\n│ 500 │ │ │ │ │ │ │ is_response_error = True │\r\n│ 501 │ │ │ │ │ │ continue │\r\n│ ❱ 502 │ │ │ │ raise err │\r\n│ 503 │ │ │ finally: │\r\n│ 504 │ │ │ │ end_time = time.time() │\r\n│ 505 │ │ │ │ if absolute_timeout: │\r\n│ │\r\n│ /anaconda/envs/azureml_py38/lib/python3.8/site-packages/azure/core/pipeline/policies/_retry.py:4 │\r\n│ 74 in send │\r\n│ │\r\n│ 471 │ │ │ try: │\r\n│ 472 │ │ │ │ start_time = time.time() │\r\n│ 473 │ │ │ │ self._configure_timeout(request, absolute_timeout, is_response_error) │\r\n│ ❱ 474 │ │ │ │ response = self.next.send(request) │\r\n│ 475 │ │ │ │ if self.is_retry(retry_settings, response): │\r\n│ 476 │ │ │ │ │ retry_active = self.increment(retry_settings, response=response) │\r\n│ 477 │ │ │ │ │ if retry_active: │\r\n│ │\r\n│ /anaconda/envs/azureml_py38/lib/python3.8/site-packages/azure/core/pipeline/policies/_authentica │\r\n│ tion.py:117 in send │\r\n│ │\r\n│ 114 │ │ \"\"\" │\r\n│ 115 │ │ self.on_request(request) │\r\n│ 116 │ │ try: │\r\n│ ❱ 117 │ │ │ response = 
self.next.send(request) │\r\n│ 118 │ │ │ self.on_response(request, response) │\r\n│ 119 │ │ except Exception: # pylint:disable=broad-except │\r\n│ 120 │ │ │ self.on_exception(request) │\r\n│ │\r\n│ /anaconda/envs/azureml_py38/lib/python3.8/site-packages/azure/core/pipeline/_base.py:69 in send │\r\n│ │\r\n│ 66 │ │ \"\"\" │\r\n│ 67 │ │ _await_result(self._policy.on_request, request) │\r\n│ 68 │ │ try: │\r\n│ ❱ 69 │ │ │ response = self.next.send(request) │\r\n│ 70 │ │ except Exception: # pylint: disable=broad-except │\r\n│ 71 │ │ │ _await_result(self._policy.on_exception, request) │\r\n│ 72 │ │ │ raise │\r\n│ │\r\n│ /anaconda/envs/azureml_py38/lib/python3.8/site-packages/azure/core/pipeline/_base.py:69 in send │\r\n│ │\r\n│ 66 │ │ \"\"\" │\r\n│ 67 │ │ _await_result(self._policy.on_request, request) │\r\n│ 68 │ │ try: │\r\n│ ❱ 69 │ │ │ response = self.next.send(request) │\r\n│ 70 │ │ except Exception: # pylint: disable=broad-except │\r\n│ 71 │ │ │ _await_result(self._policy.on_exception, request) │\r\n│ 72 │ │ │ raise │\r\n│ │\r\n│ /anaconda/envs/azureml_py38/lib/python3.8/site-packages/azure/core/pipeline/_base.py:69 in send │\r\n│ │\r\n│ 66 │ │ \"\"\" │\r\n│ 67 │ │ _await_result(self._policy.on_request, request) │\r\n│ 68 │ │ try: │\r\n│ ❱ 69 │ │ │ response = self.next.send(request) │\r\n│ 70 │ │ except Exception: # pylint: disable=broad-except │\r\n│ 71 │ │ │ _await_result(self._policy.on_exception, request) │\r\n│ 72 │ │ │ raise │\r\n│ │\r\n│ /anaconda/envs/azureml_py38/lib/python3.8/site-packages/azure/core/pipeline/_base.py:69 in send │\r\n│ │\r\n│ 66 │ │ \"\"\" │\r\n│ 67 │ │ _await_result(self._policy.on_request, request) │\r\n│ 68 │ │ try: │\r\n│ ❱ 69 │ │ │ response = self.next.send(request) │\r\n│ 70 │ │ except Exception: # pylint: disable=broad-except │\r\n│ 71 │ │ │ _await_result(self._policy.on_exception, request) │\r\n│ 72 │ │ │ raise │\r\n│ │\r\n│ /anaconda/envs/azureml_py38/lib/python3.8/site-packages/azure/core/pipeline/_base.py:100 in send │\r\n│ │\r\n│ 97 │ │ \"\"\" │\r\n│ 98 │ │ return PipelineResponse( │\r\n│ 99 │ │ │ request.http_request, │\r\n│ ❱ 100 │ │ │ self._sender.send(request.http_request, **request.context.options), │\r\n│ 101 │ │ │ context=request.context, │\r\n│ 102 │ │ ) │\r\n│ 103 │\r\n│ │\r\n│ /anaconda/envs/azureml_py38/lib/python3.8/site-packages/azure/core/pipeline/transport/_requests_ │\r\n│ basic.py:376 in send │\r\n│ │\r\n│ 373 │ │ │ error = ServiceRequestError(err, error=err) │\r\n│ 374 │ │ │\r\n│ 375 │ │ if error: │\r\n│ ❱ 376 │ │ │ raise error │\r\n│ 377 │ │ if _is_rest(request): │\r\n│ 378 │ │ │ from azure.core.rest._requests_basic import RestRequestsTransportResponse │\r\n│ 379 │\r\n╰──────────────────────────────────────────────────────────────────────────────────────────────────╯\r\nServiceResponseError: HTTPSConnectionPool(host='centralindia.api.azureml.ms', port=443): Read timed out. (read \r\ntimeout=300)\r\n```\r\n\r\nI think this has something to do with the azureml not registering the request in given time. In past, I had faced a similar issue in yolov5 while logging these artifacts to azureml I used to get this similar error so I had added another a retry loop with some wait time and it worked\r\nSo please let me know if you found any resolution for the same @davidefiocco :)", "This issue has been automatically marked as stale because it has not had recent activity. 
If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,686
1,690
1,690
NONE
null
### System Info transformers: 4.30.0 platform: Ubuntu 20.04.6 LTS x86_64 python: 3.8.5 ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Hello team, I was following this [tutorial](https://huggingface.co/docs/transformers/tasks/object_detection) on huggingface for object detection using DETR model on my custom dataset that has same dataset structure as `cpp5` (one used in tutorial) I've used slightly different training arguments: ``` from transformers import TrainingArguments from transformers.integrations import MLflowCallback, AzureMLCallback training_args = TrainingArguments( output_dir="detr-resnet-50_finetuned_loss-run", per_device_train_batch_size=4, num_train_epochs=30, fp16=True, save_steps=200, logging_steps=50, learning_rate=1e-5, weight_decay=1e-4, save_total_limit=2, remove_unused_columns=False, push_to_hub=False, # dataloader_num_workers=4, # Adjust the number of dataloader workers according to your system logging_dir="logs", report_to="mlflow", # Report metrics to MLflow # load_best_model_at_end=True, metric_for_best_model="loss", greater_is_better=False, ) # Create the MLflow callback mlflow_callback = MLflowCallback() azureml_callback = AzureMLCallback() # Integrate the MLflow callback in TrainingArguments training_args.callbacks = [mlflow_callback, azureml_callback] from transformers import Trainer trainer = Trainer( model=model, args=training_args, data_collator=collate_fn, train_dataset=dataset["train"], eval_dataset=dataset["valid"], tokenizer=image_processor, ) trainer.train() ``` but I am receiving at 200th step ``` ╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮ │ in <module>:12 │ │ │ │ 9 │ tokenizer=image_processor, │ │ 10 ) │ │ 11 │ │ ❱ 12 trainer.train() │ │ 13 │ │ │ │ /anaconda/envs/azureml_py38/lib/python3.8/site-packages/transformers/trainer.py:1645 in train │ │ │ │ 1642 │ │ inner_training_loop = find_executable_batch_size( │ │ 1643 │ │ │ self._inner_training_loop, self._train_batch_size, args.auto_find_batch_size │ │ 1644 │ │ ) │ │ ❱ 1645 │ │ return inner_training_loop( │ │ 1646 │ │ │ args=args, │ │ 1647 │ │ │ resume_from_checkpoint=resume_from_checkpoint, │ │ 1648 │ │ │ trial=trial, │ │ │ │ /anaconda/envs/azureml_py38/lib/python3.8/site-packages/transformers/trainer.py:2011 in │ │ _inner_training_loop │ │ │ │ 2008 │ │ │ │ │ self.state.epoch = epoch + (step + 1 + steps_skipped) / steps_in_epo │ │ 2009 │ │ │ │ │ self.control = self.callback_handler.on_step_end(args, self.state, s │ │ 2010 │ │ │ │ │ │ │ ❱ 2011 │ │ │ │ │ self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_k │ │ 2012 │ │ │ │ else: │ │ 2013 │ │ │ │ │ self.control = self.callback_handler.on_substep_end(args, self.state │ │ 2014 │ │ │ │ /anaconda/envs/azureml_py38/lib/python3.8/site-packages/transformers/trainer.py:2323 in │ │ _maybe_log_save_evaluate │ │ │ │ 2320 │ │ │ │ self.lr_scheduler.step(metrics[metric_to_check]) │ │ 2321 │ │ │ │ 2322 │ │ if self.control.should_save: │ │ ❱ 2323 │ │ │ self._save_checkpoint(model, trial, metrics=metrics) │ │ 2324 │ │ │ self.control = self.callback_handler.on_save(self.args, self.state, self.con │ │ 2325 │ │ │ 2326 │ def _load_rng_state(self, checkpoint): │ │ │ │ /anaconda/envs/azureml_py38/lib/python3.8/site-packages/transformers/trainer.py:2380 in │ │ _save_checkpoint │ │ 
│ │ 2377 │ │ │ │ 2378 │ │ run_dir = self._get_output_dir(trial=trial) │ │ 2379 │ │ output_dir = os.path.join(run_dir, checkpoint_folder) │ │ ❱ 2380 │ │ self.save_model(output_dir, _internal_call=True) │ │ 2381 │ │ if self.is_deepspeed_enabled: │ │ 2382 │ │ │ # under zero3 model file itself doesn't get saved since it's bogus! Unless d │ │ 2383 │ │ │ # config `stage3_gather_16bit_weights_on_model_save` is True │ │ │ │ /anaconda/envs/azureml_py38/lib/python3.8/site-packages/transformers/trainer.py:2878 in │ │ save_model │ │ │ │ 2875 │ │ │ │ │ self.model_wrapped.save_checkpoint(output_dir) │ │ 2876 │ │ │ │ 2877 │ │ elif self.args.should_save: │ │ ❱ 2878 │ │ │ self._save(output_dir) │ │ 2879 │ │ │ │ 2880 │ │ # Push to the Hub when `save_model` is called by the user. │ │ 2881 │ │ if self.args.push_to_hub and not _internal_call: │ │ │ │ /anaconda/envs/azureml_py38/lib/python3.8/site-packages/transformers/trainer.py:2944 in _save │ │ │ │ 2941 │ │ │ self.tokenizer.save_pretrained(output_dir) │ │ 2942 │ │ │ │ 2943 │ │ # Good practice: save your training arguments together with the trained model │ │ ❱ 2944 │ │ torch.save(self.args, os.path.join(output_dir, TRAINING_ARGS_NAME)) │ │ 2945 │ │ │ 2946 │ def store_flos(self): │ │ 2947 │ │ # Storing the number of floating-point operations that went into the model │ │ │ │ /anaconda/envs/azureml_py38/lib/python3.8/site-packages/torch/serialization.py:441 in save │ │ │ │ 438 │ │ │ 439 │ if _use_new_zipfile_serialization: │ │ 440 │ │ with _open_zipfile_writer(f) as opened_zipfile: │ │ ❱ 441 │ │ │ _save(obj, opened_zipfile, pickle_module, pickle_protocol) │ │ 442 │ │ │ return │ │ 443 │ else: │ │ 444 │ │ with _open_file_like(f, 'wb') as opened_file: │ │ │ │ /anaconda/envs/azureml_py38/lib/python3.8/site-packages/torch/serialization.py:653 in _save │ │ │ │ 650 │ data_buf = io.BytesIO() │ │ 651 │ pickler = pickle_module.Pickler(data_buf, protocol=pickle_protocol) │ │ 652 │ pickler.persistent_id = persistent_id │ │ ❱ 653 │ pickler.dump(obj) │ │ 654 │ data_value = data_buf.getvalue() │ │ 655 │ zip_file.write_record('data.pkl', data_value, len(data_value)) │ │ 656 │ ╰──────────────────────────────────────────────────────────────────────────────────────────────────╯ TypeError: cannot pickle 'module' object ``` ### Expected behavior Artifact should've been saved at 200th step
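A plausible workaround for the `TypeError: cannot pickle 'module' object` above, sketched here as an assumption rather than a confirmed fix: keep callback instances off `TrainingArguments` (which the Trainer pickles with `torch.save` at every checkpoint) and pass them to the `Trainer` constructor instead.

```python
# Hedged sketch: pass callbacks to Trainer rather than attaching them to
# TrainingArguments. `model`, `collate_fn`, `dataset` and `image_processor`
# are assumed to be defined exactly as in the issue body above.
from transformers import Trainer, TrainingArguments
from transformers.integrations import MLflowCallback

training_args = TrainingArguments(
    output_dir="detr-resnet-50_finetuned_loss-run",
    per_device_train_batch_size=4,
    num_train_epochs=30,
    fp16=True,
    save_steps=200,
    logging_steps=50,
    report_to="mlflow",
    remove_unused_columns=False,
)

trainer = Trainer(
    model=model,
    args=training_args,
    data_collator=collate_fn,
    train_dataset=dataset["train"],
    eval_dataset=dataset["valid"],
    tokenizer=image_processor,
    callbacks=[MLflowCallback()],  # callbacks live on the Trainer, not on training_args
)
```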
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24308/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24308/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24307
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24307/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24307/comments
https://api.github.com/repos/huggingface/transformers/issues/24307/events
https://github.com/huggingface/transformers/pull/24307
1,759,156,623
PR_kwDOCUB6oc5THSk6
24,307
Update test versions on README.md
{ "login": "sqali", "id": 66676360, "node_id": "MDQ6VXNlcjY2Njc2MzYw", "avatar_url": "https://avatars.githubusercontent.com/u/66676360?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sqali", "html_url": "https://github.com/sqali", "followers_url": "https://api.github.com/users/sqali/followers", "following_url": "https://api.github.com/users/sqali/following{/other_user}", "gists_url": "https://api.github.com/users/sqali/gists{/gist_id}", "starred_url": "https://api.github.com/users/sqali/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sqali/subscriptions", "organizations_url": "https://api.github.com/users/sqali/orgs", "repos_url": "https://api.github.com/users/sqali/repos", "events_url": "https://api.github.com/users/sqali/events{/privacy}", "received_events_url": "https://api.github.com/users/sqali/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Thanks, @sgugger. This is my first successful contribution to an open-source project." ]
1,686
1,687
1,686
CONTRIBUTOR
null
# What does this PR do? Hi @sgugger @amyeroberts, I have raised this PR to improve the docs by updating the test versions mentioned in the README.md file. I referred to the setup.py file to update them. It is related to issue #24263. I have followed the recommended documentation format as advised. Kindly check and advise. Fix #24263
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24307/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24307/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24307", "html_url": "https://github.com/huggingface/transformers/pull/24307", "diff_url": "https://github.com/huggingface/transformers/pull/24307.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24307.patch", "merged_at": 1686848472000 }
https://api.github.com/repos/huggingface/transformers/issues/24306
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24306/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24306/comments
https://api.github.com/repos/huggingface/transformers/issues/24306/events
https://github.com/huggingface/transformers/pull/24306
1,759,002,041
PR_kwDOCUB6oc5TGxyb
24,306
Explicit arguments in `from_pretrained`
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "TODO:\r\n - for TF/Flax model `from_pretrained`\r\n - for tokenizer/processors\r\n - for auto", "@sgugger Would be nice if you can take a quick look 🙏 . And do you want me to deal with all framework (TF/Flax), tokenizer/processor, and also `auto` in this PR, or I am allowed to separate them ..?", "You have a lot of tests failing to fix 😅 , sure you want a review yet?", "@sgugger No, I didn't request a new review since last time you have a look. But the changes pushed triggered you 😆 " ]
1,686
1,687
1,687
COLLABORATOR
null
# What does this PR do? [still incomplete] Need to apply the same changes to other files containing `from_pretrained` (other frameworks, other classes like config, processor, auto, etc.), but @sgugger, let me know if I am already lost at this early stage.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24306/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24306/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24306", "html_url": "https://github.com/huggingface/transformers/pull/24306", "diff_url": "https://github.com/huggingface/transformers/pull/24306.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24306.patch", "merged_at": 1687368252000 }
https://api.github.com/repos/huggingface/transformers/issues/24305
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24305/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24305/comments
https://api.github.com/repos/huggingface/transformers/issues/24305/events
https://github.com/huggingface/transformers/pull/24305
1,758,842,619
PR_kwDOCUB6oc5TGPaF
24,305
[AutoModel] Add AutoModelForTextEncoding
{ "login": "sanchit-gandhi", "id": 93869735, "node_id": "U_kgDOBZhWpw", "avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sanchit-gandhi", "html_url": "https://github.com/sanchit-gandhi", "followers_url": "https://api.github.com/users/sanchit-gandhi/followers", "following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}", "gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}", "starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions", "organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs", "repos_url": "https://api.github.com/users/sanchit-gandhi/repos", "events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}", "received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,686
1,687
1,687
CONTRIBUTOR
null
# What does this PR do? Adds an AutoModel class for text encoding (used when you want to extract the text encoder from an encoder-decoder architecture). This facilitates loading a T5 encoder from T5 encoder-decoder model weights (as is done in MusicGen in #24109)
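A minimal usage sketch of the class this PR describes; the checkpoint name is only an example, and the exact API is assumed from the PR description rather than verified here.

```python
# Hedged sketch: load only the text encoder from an encoder-decoder checkpoint.
from transformers import AutoModelForTextEncoding

text_encoder = AutoModelForTextEncoding.from_pretrained("t5-small")  # example checkpoint
```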
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24305/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24305/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24305", "html_url": "https://github.com/huggingface/transformers/pull/24305", "diff_url": "https://github.com/huggingface/transformers/pull/24305.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24305.patch", "merged_at": 1687510897000 }
https://api.github.com/repos/huggingface/transformers/issues/24304
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24304/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24304/comments
https://api.github.com/repos/huggingface/transformers/issues/24304/events
https://github.com/huggingface/transformers/issues/24304
1,758,840,352
I_kwDOCUB6oc5o1cYg
24,304
SpikeGPT
{ "login": "thistleknot", "id": 5154106, "node_id": "MDQ6VXNlcjUxNTQxMDY=", "avatar_url": "https://avatars.githubusercontent.com/u/5154106?v=4", "gravatar_id": "", "url": "https://api.github.com/users/thistleknot", "html_url": "https://github.com/thistleknot", "followers_url": "https://api.github.com/users/thistleknot/followers", "following_url": "https://api.github.com/users/thistleknot/following{/other_user}", "gists_url": "https://api.github.com/users/thistleknot/gists{/gist_id}", "starred_url": "https://api.github.com/users/thistleknot/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thistleknot/subscriptions", "organizations_url": "https://api.github.com/users/thistleknot/orgs", "repos_url": "https://api.github.com/users/thistleknot/repos", "events_url": "https://api.github.com/users/thistleknot/events{/privacy}", "received_events_url": "https://api.github.com/users/thistleknot/received_events", "type": "User", "site_admin": false }
[ { "id": 1843244711, "node_id": "MDU6TGFiZWwxODQzMjQ0NzEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model", "name": "New model", "color": "fbca04", "default": false, "description": "" } ]
open
false
null
[]
[ "Hi @thistleknot, thanks for opening this feature request. \r\n\r\nJust skimming the repo, my understanding is that SpikeGPT already has a set of pretrained weights available. \r\n\r\nIf you (or someone else) would like to make this model available through the transformers API, the easiest and fastest way is to add it directly on the hub - here's a guide: https://huggingface.co/docs/transformers/custom_models.", "They have a 200m model on the repo. Maybe I'm mistaken and there is nothing\r\nthat needs to be done. Wasn't sure if it's integrated in the eco system but\r\nI'll double back and check\r\n\r\nOn Thu, Jun 15, 2023, 11:55 AM amyeroberts ***@***.***> wrote:\r\n\r\n> Hi @thistleknot <https://github.com/thistleknot>, thanks for opening this\r\n> feature request.\r\n>\r\n> Just skimming the repo, my understanding is that SpikeGPT already has a\r\n> set of pretrained weights available.\r\n>\r\n> If you (or someone else) would like to make this model available through\r\n> the transformers API, the easiest and fastest way is to add it directly on\r\n> the hub - here's a guide:\r\n> https://huggingface.co/docs/transformers/custom_models.\r\n>\r\n> —\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/huggingface/transformers/issues/24304#issuecomment-1593571452>,\r\n> or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/ABHKKOQDGGW32HFT2WGA2A3XLNLBNANCNFSM6AAAAAAZH3LM3A>\r\n> .\r\n> You are receiving this because you were mentioned.Message ID:\r\n> ***@***.***>\r\n>\r\n", "The some weights have already been uploaded on to the hub: \r\n* https://huggingface.co/ridger/SpikeGPT-OpenWebText-216M\r\n* https://huggingface.co/ridger/SpikeGPT-BookCorpus\r\n\r\nHowever, to be able to use them with the transformers API e.g. `AutoModel.from_pretrained(checkpoint)`, then a modeling file would also need to be created and added to the hub e.g. like [this one for falcon](https://huggingface.co/tiiuae/falcon-7b/blob/main/modelling_RW.py). ", "Hi! If there is no API yet for this model may I work on it? \r\nIf yes, is there a timeline for how soon one has to ship it, making it available through `transformers` API? ", "This model is available online without need for an api\r\n\r\nOn Mon, Jul 24, 2023, 12:18 PM Abhipsha Das ***@***.***>\r\nwrote:\r\n\r\n> Hi! If there is no API yet for this model may I work on it?\r\n> If yes, is there a timeline for how soon one has to ship it, making it\r\n> available through transformers API?\r\n>\r\n> —\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/huggingface/transformers/issues/24304#issuecomment-1648477014>,\r\n> or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/ABHKKOXNLBHMUCP2GQTUK3DXR3DA7ANCNFSM6AAAAAAZH3LM3A>\r\n> .\r\n> You are receiving this because you were mentioned.Message ID:\r\n> ***@***.***>\r\n>\r\n" ]
1,686
1,690
null
NONE
null
### Feature request Extract the spiking nature of the LLM and port that set of features over for training/inference: https://github.com/ridgerchu/SpikeGPT ### Motivation The benefits would be more efficient computation (roughly a 22x reduction in cost). ### Your contribution I am willing to test, track down bugs, and push changes. I'm still new to the world of LLM backend coding.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24304/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24304/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/24303
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24303/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24303/comments
https://api.github.com/repos/huggingface/transformers/issues/24303/events
https://github.com/huggingface/transformers/issues/24303
1,758,751,755
I_kwDOCUB6oc5o1GwL
24,303
RuntimeError: You must initialize the accelerate state by calling either `PartialState()` or `Accelerator()` before using the logging utility.
{ "login": "amarv3142", "id": 23461332, "node_id": "MDQ6VXNlcjIzNDYxMzMy", "avatar_url": "https://avatars.githubusercontent.com/u/23461332?v=4", "gravatar_id": "", "url": "https://api.github.com/users/amarv3142", "html_url": "https://github.com/amarv3142", "followers_url": "https://api.github.com/users/amarv3142/followers", "following_url": "https://api.github.com/users/amarv3142/following{/other_user}", "gists_url": "https://api.github.com/users/amarv3142/gists{/gist_id}", "starred_url": "https://api.github.com/users/amarv3142/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amarv3142/subscriptions", "organizations_url": "https://api.github.com/users/amarv3142/orgs", "repos_url": "https://api.github.com/users/amarv3142/repos", "events_url": "https://api.github.com/users/amarv3142/events{/privacy}", "received_events_url": "https://api.github.com/users/amarv3142/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "muellerzr", "id": 7831895, "node_id": "MDQ6VXNlcjc4MzE4OTU=", "avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4", "gravatar_id": "", "url": "https://api.github.com/users/muellerzr", "html_url": "https://github.com/muellerzr", "followers_url": "https://api.github.com/users/muellerzr/followers", "following_url": "https://api.github.com/users/muellerzr/following{/other_user}", "gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}", "starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions", "organizations_url": "https://api.github.com/users/muellerzr/orgs", "repos_url": "https://api.github.com/users/muellerzr/repos", "events_url": "https://api.github.com/users/muellerzr/events{/privacy}", "received_events_url": "https://api.github.com/users/muellerzr/received_events", "type": "User", "site_admin": false }
[ { "login": "muellerzr", "id": 7831895, "node_id": "MDQ6VXNlcjc4MzE4OTU=", "avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4", "gravatar_id": "", "url": "https://api.github.com/users/muellerzr", "html_url": "https://github.com/muellerzr", "followers_url": "https://api.github.com/users/muellerzr/followers", "following_url": "https://api.github.com/users/muellerzr/following{/other_user}", "gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}", "starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions", "organizations_url": "https://api.github.com/users/muellerzr/orgs", "repos_url": "https://api.github.com/users/muellerzr/repos", "events_url": "https://api.github.com/users/muellerzr/events{/privacy}", "received_events_url": "https://api.github.com/users/muellerzr/received_events", "type": "User", "site_admin": false } ]
[ "Could you share your accelerate version ? Might be outdated: https://github.com/huggingface/accelerate/issues/835", "> Could you share your accelerate version ? Might be outdated: [huggingface/accelerate#835](https://github.com/huggingface/accelerate/issues/835)\r\n\r\n0.20.3", "@sgugger maybe for input ? Seems like `accelerate` issue but I cannot find anything relevant.", "Could you provide us with the whole traceback? Also cc @muellerzr ", "A full traceback is definitely needed here to know what logger is being init'd wrong ", "> Could you provide us with the whole traceback? Also cc @muellerzr\r\n\r\n```\r\n in <cell line: 2>:2 │\r\n│ │\r\n│ /usr/local/lib/python3.10/dist-packages/transformers/pipelines/text_generation.py:201 in │\r\n│ __call__ │\r\n│ │\r\n│ 198 │ │ │ - **generated_token_ids** (`torch.Tensor` or `tf.Tensor`, present when `retu │\r\n│ 199 │ │ │ ids of the generated text. │\r\n│ 200 │ │ \"\"\" │\r\n│ ❱ 201 │ │ return super().__call__(text_inputs, **kwargs) │\r\n│ 202 │ │\r\n│ 203 │ def preprocess(self, prompt_text, prefix=\"\", handle_long_generation=None, **generate │\r\n│ 204 │ │ inputs = self.tokenizer( │\r\n│ │\r\n│ /usr/local/lib/python3.10/dist-packages/transformers/pipelines/base.py:1120 in __call__ │\r\n│ │\r\n│ 1117 │ │ │ │ ) │\r\n│ 1118 │ │ │ ) │\r\n│ 1119 │ │ else: │\r\n│ ❱ 1120 │ │ │ return self.run_single(inputs, preprocess_params, forward_params, postproces │\r\n│ 1121 │ │\r\n│ 1122 │ def run_multi(self, inputs, preprocess_params, forward_params, postprocess_params): │\r\n│ 1123 │ │ return [self.run_single(item, preprocess_params, forward_params, postprocess_par │\r\n│ │\r\n│ /usr/local/lib/python3.10/dist-packages/transformers/pipelines/base.py:1127 in run_single │\r\n│ │\r\n│ 1124 │ │\r\n│ 1125 │ def run_single(self, inputs, preprocess_params, forward_params, postprocess_params): │\r\n│ 1126 │ │ model_inputs = self.preprocess(inputs, **preprocess_params) │\r\n│ ❱ 1127 │ │ model_outputs = self.forward(model_inputs, **forward_params) │\r\n│ 1128 │ │ outputs = self.postprocess(model_outputs, **postprocess_params) │\r\n│ 1129 │ │ return outputs │\r\n│ 1130 │\r\n│ │\r\n│ /usr/local/lib/python3.10/dist-packages/transformers/pipelines/base.py:1026 in forward │\r\n│ │\r\n│ 1023 │ │ │ │ inference_context = self.get_inference_context() │\r\n│ 1024 │ │ │ │ with inference_context(): │\r\n│ 1025 │ │ │ │ │ model_inputs = self._ensure_tensor_on_device(model_inputs, device=se │\r\n│ ❱ 1026 │ │ │ │ │ model_outputs = self._forward(model_inputs, **forward_params) │\r\n│ 1027 │ │ │ │ │ model_outputs = self._ensure_tensor_on_device(model_outputs, device= │\r\n│ 1028 │ │ │ else: │\r\n│ 1029 │ │ │ │ raise ValueError(f\"Framework {self.framework} is not supported\") │\r\n│ │\r\n│ /usr/local/lib/python3.10/dist-packages/transformers/pipelines/text_generation.py:263 in │\r\n│ _forward │\r\n│ │\r\n│ 260 │ │ │ │ generate_kwargs[\"min_length\"] += prefix_length │\r\n│ 261 │ │ │\r\n│ 262 │ │ # BS x SL │\r\n│ ❱ 263 │ │ generated_sequence = self.model.generate(input_ids=input_ids, attention_mask=att │\r\n│ 264 │ │ out_b = generated_sequence.shape[0] │\r\n│ 265 │ │ if self.framework == \"pt\": │\r\n│ 266 │ │ │ generated_sequence = generated_sequence.reshape(in_b, out_b // in_b, *genera │\r\n│ │\r\n│ /usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py:115 in decorate_context │\r\n│ │\r\n│ 112 │ @functools.wraps(func) │\r\n│ 113 │ def decorate_context(*args, **kwargs): │\r\n│ 114 │ │ with ctx_factory(): │\r\n│ ❱ 115 │ │ │ return func(*args, **kwargs) │\r\n│ 116 │ │\r\n│ 
117 │ return decorate_context │\r\n│ 118 │\r\n│ │\r\n│ /usr/local/lib/python3.10/dist-packages/transformers/generation/utils.py:1572 in generate │\r\n│ │\r\n│ 1569 │ │ │ ) │\r\n│ 1570 │ │ │ │\r\n│ 1571 │ │ │ # 13. run sample │\r\n│ ❱ 1572 │ │ │ return self.sample( │\r\n│ 1573 │ │ │ │ input_ids, │\r\n│ 1574 │ │ │ │ logits_processor=logits_processor, │\r\n│ 1575 │ │ │ │ logits_warper=logits_warper, │\r\n│ │\r\n│ /usr/local/lib/python3.10/dist-packages/transformers/generation/utils.py:2619 in sample │\r\n│ │\r\n│ 2616 │ │ │ model_inputs = self.prepare_inputs_for_generation(input_ids, **model_kwargs) │\r\n│ 2617 │ │ │ │\r\n│ 2618 │ │ │ # forward pass to get next token │\r\n│ ❱ 2619 │ │ │ outputs = self( │\r\n│ 2620 │ │ │ │ **model_inputs, │\r\n│ 2621 │ │ │ │ return_dict=True, │\r\n│ 2622 │ │ │ │ output_attentions=output_attentions, │\r\n│ │\r\n│ /usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py:1501 in _call_impl │\r\n│ │\r\n│ 1498 │ │ if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks │\r\n│ 1499 │ │ │ │ or _global_backward_pre_hooks or _global_backward_hooks │\r\n│ 1500 │ │ │ │ or _global_forward_hooks or _global_forward_pre_hooks): │\r\n│ ❱ 1501 │ │ │ return forward_call(*args, **kwargs) │\r\n│ 1502 │ │ # Do not call functions when jit is used │\r\n│ 1503 │ │ full_backward_hooks, non_full_backward_hooks = [], [] │\r\n│ 1504 │ │ backward_pre_hooks = [] │\r\n│ │\r\n│ /usr/local/lib/python3.10/dist-packages/accelerate/hooks.py:165 in new_forward │\r\n│ │\r\n│ 162 │ │ │ with torch.no_grad(): │\r\n│ 163 │ │ │ │ output = old_forward(*args, **kwargs) │\r\n│ 164 │ │ else: │\r\n│ ❱ 165 │ │ │ output = old_forward(*args, **kwargs) │\r\n│ 166 │ │ return module._hf_hook.post_forward(module, output) │\r\n│ 167 │ │\r\n│ 168 │ module.forward = new_forward │\r\n│ │\r\n│ /usr/local/lib/python3.10/dist-packages/transformers/models/gpt_bigcode/modeling_gpt_bigcode.py: │\r\n│ 809 in forward │\r\n│ │\r\n│ 806 │ │ \"\"\" │\r\n│ 807 │ │ return_dict = return_dict if return_dict is not None else self.config.use_return │\r\n│ 808 │ │ │\r\n│ ❱ 809 │ │ transformer_outputs = self.transformer( │\r\n│ 810 │ │ │ input_ids, │\r\n│ 811 │ │ │ past_key_values=past_key_values, │\r\n│ 812 │ │ │ attention_mask=attention_mask, │\r\n│ │\r\n│ /usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py:1501 in _call_impl │\r\n│ │\r\n│ 1498 │ │ if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks │\r\n│ 1499 │ │ │ │ or _global_backward_pre_hooks or _global_backward_hooks │\r\n│ 1500 │ │ │ │ or _global_forward_hooks or _global_forward_pre_hooks): │\r\n│ ❱ 1501 │ │ │ return forward_call(*args, **kwargs) │\r\n│ 1502 │ │ # Do not call functions when jit is used │\r\n│ 1503 │ │ full_backward_hooks, non_full_backward_hooks = [], [] │\r\n│ 1504 │ │ backward_pre_hooks = [] │\r\n│ │\r\n│ /usr/local/lib/python3.10/dist-packages/transformers/models/gpt_bigcode/modeling_gpt_bigcode.py: │\r\n│ 674 in forward │\r\n│ │\r\n│ 671 │ │ │ │ │ encoder_attention_mask, │\r\n│ 672 │ │ │ │ ) │\r\n│ 673 │ │ │ else: │\r\n│ ❱ 674 │ │ │ │ outputs = block( │\r\n│ 675 │ │ │ │ │ hidden_states, │\r\n│ 676 │ │ │ │ │ layer_past=layer_past, │\r\n│ 677 │ │ │ │ │ attention_mask=attention_mask, │\r\n│ │\r\n│ /usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py:1501 in _call_impl │\r\n│ │\r\n│ 1498 │ │ if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks │\r\n│ 1499 │ │ │ │ or _global_backward_pre_hooks or _global_backward_hooks │\r\n│ 1500 │ │ │ │ or 
_global_forward_hooks or _global_forward_pre_hooks): │\r\n│ ❱ 1501 │ │ │ return forward_call(*args, **kwargs) │\r\n│ 1502 │ │ # Do not call functions when jit is used │\r\n│ 1503 │ │ full_backward_hooks, non_full_backward_hooks = [], [] │\r\n│ 1504 │ │ backward_pre_hooks = [] │\r\n│ │\r\n│ /usr/local/lib/python3.10/dist-packages/accelerate/hooks.py:165 in new_forward │\r\n│ │\r\n│ 162 │ │ │ with torch.no_grad(): │\r\n│ 163 │ │ │ │ output = old_forward(*args, **kwargs) │\r\n│ 164 │ │ else: │\r\n│ ❱ 165 │ │ │ output = old_forward(*args, **kwargs) │\r\n│ 166 │ │ return module._hf_hook.post_forward(module, output) │\r\n│ 167 │ │\r\n│ 168 │ module.forward = new_forward │\r\n│ │\r\n│ /usr/local/lib/python3.10/dist-packages/transformers/models/gpt_bigcode/modeling_gpt_bigcode.py: │\r\n│ 315 in forward │\r\n│ │\r\n│ 312 │ │ Tuple[torch.Tensor], Tuple[torch.Tensor, torch.Tensor], Tuple[torch.Tensor, torc │\r\n│ 313 │ ]: │\r\n│ 314 │ │ residual = hidden_states │\r\n│ ❱ 315 │ │ hidden_states = self.ln_1(hidden_states) │\r\n│ 316 │ │ attn_outputs = self.attn( │\r\n│ 317 │ │ │ hidden_states, │\r\n│ 318 │ │ │ layer_past=layer_past, │\r\n│ │\r\n│ /usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py:1501 in _call_impl │\r\n│ │\r\n│ 1498 │ │ if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks │\r\n│ 1499 │ │ │ │ or _global_backward_pre_hooks or _global_backward_hooks │\r\n│ 1500 │ │ │ │ or _global_forward_hooks or _global_forward_pre_hooks): │\r\n│ ❱ 1501 │ │ │ return forward_call(*args, **kwargs) │\r\n│ 1502 │ │ # Do not call functions when jit is used │\r\n│ 1503 │ │ full_backward_hooks, non_full_backward_hooks = [], [] │\r\n│ 1504 │ │ backward_pre_hooks = [] │\r\n│ │\r\n│ /usr/local/lib/python3.10/dist-packages/accelerate/hooks.py:160 in new_forward │\r\n│ │\r\n│ 157 │ │\r\n│ 158 │ @functools.wraps(old_forward) │\r\n│ 159 │ def new_forward(*args, **kwargs): │\r\n│ ❱ 160 │ │ args, kwargs = module._hf_hook.pre_forward(module, *args, **kwargs) │\r\n│ 161 │ │ if module._hf_hook.no_grad: │\r\n│ 162 │ │ │ with torch.no_grad(): │\r\n│ 163 │ │ │ │ output = old_forward(*args, **kwargs) │\r\n│ │\r\n│ /usr/local/lib/python3.10/dist-packages/accelerate/hooks.py:282 in pre_forward │\r\n│ │\r\n│ 279 │ │ │ for name, _ in named_module_tensors( │\r\n│ 280 │ │ │ │ module, include_buffers=self.offload_buffers, recurse=self.place_submodu │\r\n│ 281 │ │ │ ): │\r\n│ ❱ 282 │ │ │ │ set_module_tensor_to_device(module, name, self.execution_device, value=s │\r\n│ 283 │ │ │\r\n│ 284 │ │ return send_to_device(args, self.execution_device), send_to_device( │\r\n│ 285 │ │ │ kwargs, self.execution_device, skip_keys=self.skip_keys │\r\n│ │\r\n│ /usr/local/lib/python3.10/dist-packages/accelerate/utils/offload.py:123 in __getitem__ │\r\n│ │\r\n│ 120 │ │ self.prefix = prefix │\r\n│ 121 │ │\r\n│ 122 │ def __getitem__(self, key): │\r\n│ ❱ 123 │ │ return self.dataset[f\"{self.prefix}{key}\"] │\r\n│ 124 │ │\r\n│ 125 │ def __iter__(self): │\r\n│ 126 │ │ return iter([key for key in self.dataset if key.startswith(self.prefix)]) │\r\n│ │\r\n│ /usr/local/lib/python3.10/dist-packages/accelerate/utils/offload.py:176 in __getitem__ │\r\n│ │\r\n│ 173 │ │ │ │ raise ImportError(\"These offloaded weights require the use of safetensor │\r\n│ 174 │ │ │ │\r\n│ 175 │ │ │ if \"SAFETENSORS_FAST_GPU\" not in os.environ: │\r\n│ ❱ 176 │ │ │ │ logger.info(\"Enabling fast loading with safetensors by setting `SAFETENS │\r\n│ 177 │ │ │ │ os.environ[\"SAFETENSORS_FAST_GPU\"] = \"1\" │\r\n│ 178 │ │ │ │\r\n│ 179 │ │ │ from 
safetensors import safe_open │\r\n│ │\r\n│ /usr/lib/python3.10/logging/__init__.py:1841 in info │\r\n│ │\r\n│ 1838 │ │ \"\"\" │\r\n│ 1839 │ │ Delegate an info call to the underlying logger. │\r\n│ 1840 │ │ \"\"\" │\r\n│ ❱ 1841 │ │ self.log(INFO, msg, *args, **kwargs) │\r\n│ 1842 │ │\r\n│ 1843 │ def warning(self, msg, *args, **kwargs): │\r\n│ 1844 │ │ \"\"\" │\r\n│ │\r\n│ /usr/local/lib/python3.10/dist-packages/accelerate/logging.py:51 in log │\r\n│ │\r\n│ 48 │ │ `in_order` is ignored if `main_process_only` is passed. │\r\n│ 49 │ │ \"\"\" │\r\n│ 50 │ │ if PartialState._shared_state == {}: │\r\n│ ❱ 51 │ │ │ raise RuntimeError( │\r\n│ 52 │ │ │ │ \"You must initialize the accelerate state by calling either `PartialStat │\r\n│ 53 │ │ │ ) │\r\n│ 54 │ │ main_process_only = kwargs.pop(\"main_process_only\", True) │\r\n╰──────────────────────────────────────────────────────────────────────────────────────────────────╯\r\nRuntimeError: You must initialize the accelerate state by calling either `PartialState()` or `Accelerator()` before\r\nusing the logging utility.\r\n```", "Thanks, I can see where this stems from. As a temporary workaround while we fix the issue, your can set `SAFETENSORS_FAST_GPU=1` in your environment to avoid this error.", "@sgugger we can disable that flag for more recent versions, it's not used anymore btw. (safetensors>0.3.0)", "> Thanks, I can see where this stems from. As a temporary workaround while we fix the issue, your can set `SAFETENSORS_FAST_GPU=1` in your environment to avoid this error.\r\n\r\nI tried this and I am no longer getting the error. However, the below line from the sample code shared earlier takes forever to execute. (waited for 16 minutes before interrupting it)\r\n`outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.2, top_k=50, top_p=0.95, eos_token_id=49155)`\r\n\r\nAny suggesstions on how I can debug it.", "If you're trying to run this super large model on a small maching, everything will end up being offloading to CPU + DISK, making everything EXTREMELY slow. There's no easy solution on small hardware\r\n\r\nTry setting `max_new_tokens=1` and letting it run several minutes It should give you the next token.\r\n\r\nSubsequent tokens are faster to get than the first one, however it should be of the same order of latency.", "> If you're trying to run this super large model on a small maching, everything will end up being offloading to CPU + DISK, making everything EXTREMELY slow. There's no easy solution on small hardware\r\n> \r\n> Try setting `max_new_tokens=1` and letting it run several minutes It should give you the next token.\r\n> \r\n> Subsequent tokens are faster to get than the first one, however it should be of the same order of latency.\r\n\r\nThis worked and also helped understand the problem. Thank you so much." ]
1,686
1,687
1,687
NONE
null
### System Info - `transformers` version: 4.30.2 - Platform: Linux-5.15.107+-x86_64-with-glibc2.31 - Python version: 3.10.12 - Huggingface_hub version: 0.15.1 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 2.0.1+cu118 (True) - Tensorflow version (GPU?): 2.12.0 (True) - Flax version (CPU?/GPU?/TPU?): 0.6.9 (gpu) - Jax version: 0.4.10 - JaxLib version: 0.4.10 - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help? @Narsil @ArthurZucker ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction I am using Google colab to run starchat-beta model from here https://huggingface.co/HuggingFaceH4/starchat-beta Google Colab Link: https://colab.research.google.com/drive/1I1-zAY3AYNEiZ9Lk-35yqNrOncLFUsSo#scrollTo=-vH0ityRx9u4 **Step1**: Installed required library on colab ``` !pip install transformers !pip install accelerate !pip install xformers ``` **Step2:** Run below sample code from the model card of starchat-beta model ``` import torch from transformers import pipeline pipe = pipeline("text-generation", model="HuggingFaceH4/starchat-beta", torch_dtype=torch.bfloat16, device_map="auto") prompt_template = "<|system|>\n<|end|>\n<|user|>\n{query}<|end|>\n<|assistant|>" prompt = prompt_template.format(query="How do I sort a list in Python?") outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.2, top_k=50, top_p=0.95, eos_token_id=49155) ``` **Issue:** The last line is producing the following error > RuntimeError: You must initialize the accelerate state by calling either `PartialState()` or `Accelerator()` before using the logging utility. ### Expected behavior Text output similar to the below one (may or may not be exact) ``` # You can sort a list in Python by using the sort() method. Here's an example:\n\n```\nnumbers = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5]\nnumbers.sort()\nprint(numbers)\n```\n\nThis will sort the list in place and print the sorted list. ```
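For reference, the stop-gap mentioned in the maintainer comments above can be applied before constructing the pipeline; this is only the temporary workaround from the thread, not a proper fix.

```python
# Hedged sketch: set the environment variable suggested in the comments before
# loading the model, so the offload path skips the logging call that raises
# the RuntimeError.
import os

os.environ["SAFETENSORS_FAST_GPU"] = "1"
```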
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24303/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24303/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24302
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24302/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24302/comments
https://api.github.com/repos/huggingface/transformers/issues/24302/events
https://github.com/huggingface/transformers/pull/24302
1,758,708,984
PR_kwDOCUB6oc5TFyE8
24,302
[Docs] Fix the paper URL for MMS model
{ "login": "hitchhicker", "id": 7930497, "node_id": "MDQ6VXNlcjc5MzA0OTc=", "avatar_url": "https://avatars.githubusercontent.com/u/7930497?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hitchhicker", "html_url": "https://github.com/hitchhicker", "followers_url": "https://api.github.com/users/hitchhicker/followers", "following_url": "https://api.github.com/users/hitchhicker/following{/other_user}", "gists_url": "https://api.github.com/users/hitchhicker/gists{/gist_id}", "starred_url": "https://api.github.com/users/hitchhicker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hitchhicker/subscriptions", "organizations_url": "https://api.github.com/users/hitchhicker/orgs", "repos_url": "https://api.github.com/users/hitchhicker/repos", "events_url": "https://api.github.com/users/hitchhicker/events{/privacy}", "received_events_url": "https://api.github.com/users/hitchhicker/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Thanks for the quick review! @amyeroberts Would you please merge it?" ]
1,686
1,686
1,686
CONTRIBUTOR
null
# What does this PR do? Fixes the paper link for the MMS model; the wrong URL points to the paper `XLS-R: SELF-SUPERVISED CROSS-LINGUAL SPEECH REPRESENTATION LEARNING AT SCALE` rather than `Scaling Speech Technology to 1,000+ Languages`.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24302/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24302/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24302", "html_url": "https://github.com/huggingface/transformers/pull/24302", "diff_url": "https://github.com/huggingface/transformers/pull/24302.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24302.patch", "merged_at": 1686840350000 }
https://api.github.com/repos/huggingface/transformers/issues/24301
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24301/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24301/comments
https://api.github.com/repos/huggingface/transformers/issues/24301/events
https://github.com/huggingface/transformers/pull/24301
1,758,690,902
PR_kwDOCUB6oc5TFuIL
24,301
Fix functional TF Whisper and modernize tests
{ "login": "Rocketknight1", "id": 12866554, "node_id": "MDQ6VXNlcjEyODY2NTU0", "avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Rocketknight1", "html_url": "https://github.com/Rocketknight1", "followers_url": "https://api.github.com/users/Rocketknight1/followers", "following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}", "gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}", "starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions", "organizations_url": "https://api.github.com/users/Rocketknight1/orgs", "repos_url": "https://api.github.com/users/Rocketknight1/repos", "events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}", "received_events_url": "https://api.github.com/users/Rocketknight1/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Update: The test is still very slow in some cases in the CI. I'm going to mark it as `@slow` before merging this PR after all, but I'll leave it as-is for now so I can fix any issues it raises before merging.", "Quick note: I marked the test as slow (it was slow before too). I ran everything locally and all models passed, so hopefully it should still look good on the nightly CI after this is merged.", "cc @amyeroberts for core maintainer review and we should be good to go! (Also sorry Amy if I'm spamming you with lots of PRs this week)", "Thanks for the ping re-regression. This is way too big of a PR to be included in a patch though so I suggest making a separate small PR for the part that would need to go in a patch, or decide this won't go in a patch and tell users to wait for 4.31.", "Ah, sorry! I didn't mean to imply there'd be a patch release - this doesn't affect too many people (only a very specific subset of Whisper users who are exporting models using the functional API), so it should be fine to wait until 4.31. I can just tell affected people to install from `main` until then." ]
1,686
1,686
1,686
MEMBER
null
There was a regression in 4.30 that affects functional construction of Whisper models in certain cases, my bad! In an attempt to avoid this in future, I modified the `test_compile_tf_model` test. These tests were quite old and weren't that relevant for how we do things now, and were also quite slow. I pared the test down to the actual thing we want to test (functional construction with `tf.keras.Input` and potentially-unknown shape dimensions), which should make it fast enough to run in the live CI, as well as giving us more useful info about regressions like this in future. Fixes #24291
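A minimal sketch of the pattern the reworked test exercises, assuming a Whisper checkpoint purely for illustration; the shapes and output field are illustrative and not taken from the actual test code.

```python
# Hedged sketch: functional Keras construction around a TF Whisper model using
# symbolic inputs, one of which has an unknown (None) sequence dimension.
import tensorflow as tf
from transformers import TFWhisperModel

model = TFWhisperModel.from_pretrained("openai/whisper-tiny")

input_features = tf.keras.Input(shape=(80, 3000), dtype=tf.float32, name="input_features")
decoder_input_ids = tf.keras.Input(shape=(None,), dtype=tf.int32, name="decoder_input_ids")

outputs = model(input_features, decoder_input_ids=decoder_input_ids)
functional_model = tf.keras.Model(
    inputs=[input_features, decoder_input_ids],
    outputs=outputs.last_hidden_state,
)
```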
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24301/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24301/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24301", "html_url": "https://github.com/huggingface/transformers/pull/24301", "diff_url": "https://github.com/huggingface/transformers/pull/24301.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24301.patch", "merged_at": 1686923024000 }
https://api.github.com/repos/huggingface/transformers/issues/24300
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24300/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24300/comments
https://api.github.com/repos/huggingface/transformers/issues/24300/events
https://github.com/huggingface/transformers/pull/24300
1,758,687,692
PR_kwDOCUB6oc5TFtbG
24,300
[`SwitchTransformers`] Fix return values
{ "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,686
1,686
1,686
COLLABORATOR
null
# What does this PR do? The previous version of the code would always return `None` values for the router losses, since the router-loss computation under `if output_router_probs` was tied to the `if labels is not None` check, but router logits can still be computed without labels. This PR also returns tensors instead of `None`; this follows our usual API and is less prone to errors.
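A toy sketch of the control-flow change described above; the function and variable names are made up and are not the actual SwitchTransformers modeling code.

```python
# Hedged toy example: in the old flow the router/auxiliary loss was only
# computed when labels were provided; the fix computes it whenever router
# logits are requested.
from typing import Optional
import torch
import torch.nn.functional as F

def losses_old(logits, router_logits, labels: Optional[torch.Tensor], output_router_logits: bool):
    loss = aux_loss = None
    if labels is not None:                      # router loss gated on labels
        loss = F.cross_entropy(logits, labels)
        if output_router_logits:
            aux_loss = router_logits.softmax(dim=-1).mean()
    return loss, aux_loss                       # aux_loss stays None without labels

def losses_new(logits, router_logits, labels: Optional[torch.Tensor], output_router_logits: bool):
    loss = aux_loss = None
    if output_router_logits:                    # router loss no longer needs labels
        aux_loss = router_logits.softmax(dim=-1).mean()
    if labels is not None:
        loss = F.cross_entropy(logits, labels)
        if aux_loss is not None:
            loss = loss + aux_loss
    return loss, aux_loss
```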
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24300/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24300/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24300", "html_url": "https://github.com/huggingface/transformers/pull/24300", "diff_url": "https://github.com/huggingface/transformers/pull/24300.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24300.patch", "merged_at": 1686922834000 }
https://api.github.com/repos/huggingface/transformers/issues/24299
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24299/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24299/comments
https://api.github.com/repos/huggingface/transformers/issues/24299/events
https://github.com/huggingface/transformers/pull/24299
1,758,680,560
PR_kwDOCUB6oc5TFr3l
24,299
Make `can_generate` as class method
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,686
1,686
1,686
COLLABORATOR
null
# What does this PR do? Make `can_generate` a class method, so we can check whether a model (class) can generate without loading/creating a model instance. (The goal of this PR is not to address the issue regarding how to check `is_encoder`, `is_decoder`, etc. discussed offline).
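An illustrative sketch of the idea, not the actual `transformers` implementation: a classmethod check lets callers ask whether an architecture supports generation without instantiating any weights.

```python
# Hedged sketch with made-up class names.
class BaseModelSketch:
    @classmethod
    def can_generate(cls) -> bool:
        # e.g. generation is possible if the subclass overrides the generation hook
        return cls.prepare_inputs_for_generation is not BaseModelSketch.prepare_inputs_for_generation

    def prepare_inputs_for_generation(self, *args, **kwargs):
        raise NotImplementedError

class ToyCausalLM(BaseModelSketch):
    def prepare_inputs_for_generation(self, input_ids, **kwargs):
        return {"input_ids": input_ids}

assert ToyCausalLM.can_generate()        # no model instance needed
assert not BaseModelSketch.can_generate()
```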
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24299/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24299/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24299", "html_url": "https://github.com/huggingface/transformers/pull/24299", "diff_url": "https://github.com/huggingface/transformers/pull/24299.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24299.patch", "merged_at": 1686846699000 }
https://api.github.com/repos/huggingface/transformers/issues/24298
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24298/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24298/comments
https://api.github.com/repos/huggingface/transformers/issues/24298/events
https://github.com/huggingface/transformers/pull/24298
1,758,645,149
PR_kwDOCUB6oc5TFkNK
24,298
deepspeed init during eval fix
{ "login": "pacman100", "id": 13534540, "node_id": "MDQ6VXNlcjEzNTM0NTQw", "avatar_url": "https://avatars.githubusercontent.com/u/13534540?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pacman100", "html_url": "https://github.com/pacman100", "followers_url": "https://api.github.com/users/pacman100/followers", "following_url": "https://api.github.com/users/pacman100/following{/other_user}", "gists_url": "https://api.github.com/users/pacman100/gists{/gist_id}", "starred_url": "https://api.github.com/users/pacman100/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pacman100/subscriptions", "organizations_url": "https://api.github.com/users/pacman100/orgs", "repos_url": "https://api.github.com/users/pacman100/repos", "events_url": "https://api.github.com/users/pacman100/events{/privacy}", "received_events_url": "https://api.github.com/users/pacman100/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Thanks for working on this. I'm currently running into the same issue with V 4.30.2. Any idea when the new version with this fix will be released?" ]
1,686
1,688
1,686
CONTRIBUTOR
null
# What does this PR do? 1. Fixes #24294: DeepSpeed ZeRO stage 1 and stage 2 don't modify the model, so the existing check was exhibiting the wrong behaviour.
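A hedged sketch of the guard this fix implies (names are made up for illustration and are not the actual Trainer/DeepSpeed integration code): only ZeRO stage 3 shards the model parameters, so only stage 3 needs the special inference-time initialization.

```python
def needs_zero3_inference_init(ds_config: dict) -> bool:
    # Stages 0-2 leave the model parameters intact, so evaluation can reuse
    # the already-initialized engine; stage 3 is the only sharded case.
    return ds_config.get("zero_optimization", {}).get("stage", 0) == 3

print(needs_zero3_inference_init({"zero_optimization": {"stage": 2}}))  # False
print(needs_zero3_inference_init({"zero_optimization": {"stage": 3}}))  # True
```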
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24298/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24298/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24298", "html_url": "https://github.com/huggingface/transformers/pull/24298", "diff_url": "https://github.com/huggingface/transformers/pull/24298.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24298.patch", "merged_at": 1686835029000 }
https://api.github.com/repos/huggingface/transformers/issues/24297
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24297/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24297/comments
https://api.github.com/repos/huggingface/transformers/issues/24297/events
https://github.com/huggingface/transformers/pull/24297
1,758,586,403
PR_kwDOCUB6oc5TFXP3
24,297
Fix 'local_rank' AttributeError in Trainer class
{ "login": "mocobeta", "id": 1825333, "node_id": "MDQ6VXNlcjE4MjUzMzM=", "avatar_url": "https://avatars.githubusercontent.com/u/1825333?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mocobeta", "html_url": "https://github.com/mocobeta", "followers_url": "https://api.github.com/users/mocobeta/followers", "following_url": "https://api.github.com/users/mocobeta/following{/other_user}", "gists_url": "https://api.github.com/users/mocobeta/gists{/gist_id}", "starred_url": "https://api.github.com/users/mocobeta/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mocobeta/subscriptions", "organizations_url": "https://api.github.com/users/mocobeta/orgs", "repos_url": "https://api.github.com/users/mocobeta/repos", "events_url": "https://api.github.com/users/mocobeta/events{/privacy}", "received_events_url": "https://api.github.com/users/mocobeta/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This works for me with python 3.10.12 (Google Colab runtime).", "cc @muellerzr ", "_The documentation is not available anymore as the PR was closed or merged._", "Friendly ping @muellerzr " ]
1,686
1,687
1,687
CONTRIBUTOR
null
# What does this PR do? This PR fixes `AttributeError: 'Trainer' object has no attribute 'local_rank'`. Please see the discussion at https://github.com/huggingface/transformers/pull/23681 for details. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24297/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24297/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24297", "html_url": "https://github.com/huggingface/transformers/pull/24297", "diff_url": "https://github.com/huggingface/transformers/pull/24297.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24297.patch", "merged_at": 1687801109000 }
https://api.github.com/repos/huggingface/transformers/issues/24296
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24296/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24296/comments
https://api.github.com/repos/huggingface/transformers/issues/24296/events
https://github.com/huggingface/transformers/pull/24296
1,758,468,701
PR_kwDOCUB6oc5TE9ZQ
24,296
[EnCodec] Changes for 32kHz ckpt
{ "login": "sanchit-gandhi", "id": 93869735, "node_id": "U_kgDOBZhWpw", "avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sanchit-gandhi", "html_url": "https://github.com/sanchit-gandhi", "followers_url": "https://api.github.com/users/sanchit-gandhi/followers", "following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}", "gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}", "starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions", "organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs", "repos_url": "https://api.github.com/users/sanchit-gandhi/repos", "events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}", "received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,686
1,687
1,686
CONTRIBUTOR
null
# What does this PR do? Updates the EnCodec config and modelling code to allow two options for the residual connection in the Resnet block: 1. Pass the residual through a Conv1d 2. Apply the residual directly (identity) => this change is required to use the latest 32kHz EnCodec model in the Music Gen model (#24109). It is tested for with a fast test, and confirmed to match the original implementation.
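A small PyTorch sketch of the two residual options described above, with made-up module names (it mirrors the idea rather than the exact EnCodec classes):

```python
import torch
import torch.nn as nn

class ResidualUnit(nn.Module):
    def __init__(self, dim: int, use_conv_shortcut: bool):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv1d(dim, dim, kernel_size=3, padding=1),
            nn.ELU(),
            nn.Conv1d(dim, dim, kernel_size=1),
        )
        # Option 1: pass the residual through a Conv1d; option 2: identity.
        self.shortcut = nn.Conv1d(dim, dim, kernel_size=1) if use_conv_shortcut else nn.Identity()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.shortcut(x) + self.block(x)

x = torch.randn(1, 8, 100)
print(ResidualUnit(8, use_conv_shortcut=True)(x).shape)   # torch.Size([1, 8, 100])
print(ResidualUnit(8, use_conv_shortcut=False)(x).shape)  # torch.Size([1, 8, 100])
```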
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24296/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24296/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24296", "html_url": "https://github.com/huggingface/transformers/pull/24296", "diff_url": "https://github.com/huggingface/transformers/pull/24296.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24296.patch", "merged_at": 1686836179000 }
https://api.github.com/repos/huggingface/transformers/issues/24295
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24295/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24295/comments
https://api.github.com/repos/huggingface/transformers/issues/24295/events
https://github.com/huggingface/transformers/issues/24295
1,758,458,320
I_kwDOCUB6oc5oz_HQ
24,295
Add training support for EnCodec
{ "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false }
[ { "id": 2392046359, "node_id": "MDU6TGFiZWwyMzkyMDQ2MzU5", "url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20Second%20Issue", "name": "Good Second Issue", "color": "dd935a", "default": false, "description": "Issues that are more difficult to do than \"Good First\" issues - give it a try if you want!" }, { "id": 2648621985, "node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1", "url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request", "name": "Feature request", "color": "FBCA04", "default": false, "description": "Request for a new feature" } ]
open
false
{ "login": "Swastyy", "id": 64654203, "node_id": "MDQ6VXNlcjY0NjU0MjAz", "avatar_url": "https://avatars.githubusercontent.com/u/64654203?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Swastyy", "html_url": "https://github.com/Swastyy", "followers_url": "https://api.github.com/users/Swastyy/followers", "following_url": "https://api.github.com/users/Swastyy/following{/other_user}", "gists_url": "https://api.github.com/users/Swastyy/gists{/gist_id}", "starred_url": "https://api.github.com/users/Swastyy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Swastyy/subscriptions", "organizations_url": "https://api.github.com/users/Swastyy/orgs", "repos_url": "https://api.github.com/users/Swastyy/repos", "events_url": "https://api.github.com/users/Swastyy/events{/privacy}", "received_events_url": "https://api.github.com/users/Swastyy/received_events", "type": "User", "site_admin": false }
[ { "login": "Swastyy", "id": 64654203, "node_id": "MDQ6VXNlcjY0NjU0MjAz", "avatar_url": "https://avatars.githubusercontent.com/u/64654203?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Swastyy", "html_url": "https://github.com/Swastyy", "followers_url": "https://api.github.com/users/Swastyy/followers", "following_url": "https://api.github.com/users/Swastyy/following{/other_user}", "gists_url": "https://api.github.com/users/Swastyy/gists{/gist_id}", "starred_url": "https://api.github.com/users/Swastyy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Swastyy/subscriptions", "organizations_url": "https://api.github.com/users/Swastyy/orgs", "repos_url": "https://api.github.com/users/Swastyy/repos", "events_url": "https://api.github.com/users/Swastyy/events{/privacy}", "received_events_url": "https://api.github.com/users/Swastyy/received_events", "type": "User", "site_admin": false } ]
[ "Hi @ArthurZucker , I want to try this, please assign it to me. Thanks.", "Sure! Feel free to open a PR and ping me", "@Swastyy @ArthurZucker Let me know if you are looking for any support. I would also like to help with this if possible. Thanks!", "Seems like he did not link a PR, feel free to synch and ping me for any help! Even a draft is good! ", "Hi @ArthurZucker can you let me know of the overall changes that have to be made. I see the EnCodec model already implemented in transformers, so to integrate it with Trainer what are the additional requirements?\r\n", "The idea is mostly to integrate the loss computation for the VQVAE! Trainer might not work as the model does not use attention, but the target should be to have the same loss as the original model ! ", "Thanks Arthur. \r\n\r\nI read through the paper (https://arxiv.org/pdf/2210.13438.pdf) and the existing code, and here is my impression on the work breakdown. Does any of this make sense or am I going in a totally wrong direction?\r\n\r\nThe loss function detailed in the paper (equation 4) is a combination of (1) the reconstruction loss (over frequency and time domains), (2) the discriminative loss (requires the discriminator), and (3) the VQ commitment loss (the quantizer loss).\r\n\r\n(1) The reconstruction loss is computed using the original audio as the label, and we basically need to apply certain time and frequency transformations to the input/output and compute the L1/L2 distances between them.\r\n\r\n(2) The discriminative loss requires a discriminator. As far as I can tell, this hasn't been ported/implemented yet and we'll need to do it if we wanted to compute the loss as stated in the paper (msstftd.py from facebookresearch). We'll need to hook up the discriminator in the training code somewhere (is there any pretrained discriminator here?). Also, it's unclear to me whether we can train the discriminator and the model/generator at the same time (I'm assuming not, and we'll need to train one at a time).\r\n\r\n(3) The VQ commitment loss is from the quantizer. It looks like it's summing up the losses across all the residual steps. Are we supposed to train the quantizer at the same time as the encoder/decoders? Or should we train them at different times?\r\n\r\nIn addition to the general loss function, the paper introduced a balancer (balancer.py) that weighs the reconstruction, discriminative, and commitment losses differently. We would also need to import the balancer code if we want this special balancer.", "Makes sense to me! I think you can focus simply on returning the loss for the modules. The order of training is not that important (when implementing the module wise loss) since you don't need to train (but compare output losses) until you have eveything!\r\n\r\nFor the discriminator, you can live it in the training file! It should be pretty small and that's usually how we do things 🤗 !\r\nThe order of training, on what is frozen when should be in the paper/original codebase, have not looked it up! ", "I'll attempt to code up (3) VQ commitment loss first then. I'll reach out if I get stuck or run into any issues. Thanks!", "I added an initial draft here: https://github.com/huggingface/transformers/commit/4f697be0b62c4f3b0401ccbd00d1d46aac81906d\r\n\r\nCan you take a look and let me know what you think? Thanks", "FYI I will be traveling in July, so won't be as available that month. ", "Sure, would you mind opening a proper PR? Would be easier to test locally and visualize and follow changes! 
", "So cool, I reproduced the code and release the code. If you have any question, we can solve together. https://github.com/NoFish-528/encodec-pytorch @hackyon @ArthurZucker In this work, I haven't add balancer, it's difficult for me... Hope you can successful" ]
1,686
1,693
null
COLLABORATOR
null
### Feature request Would be cool to add training support for the EnCodec model. Not entirely sure if we can easily make it compatible with Trainer, so this can be a good second issue I think. ### Motivation … ### Your contribution …
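The comment thread above sketches the loss terms involved (reconstruction, discriminative, and VQ commitment). As a rough, assumption-laden illustration of just the commitment term, summed over residual quantization steps as described in the thread (this is not the original EnCodec code, and the function name is invented):

```python
import torch
import torch.nn.functional as F

def vq_commitment_loss(encoder_outputs, quantized_outputs):
    # One (encoder output, quantized output) pair per residual VQ step; the
    # stop-gradient on the codebook side is modeled with .detach().
    return sum(F.mse_loss(e, q.detach()) for e, q in zip(encoder_outputs, quantized_outputs))

steps = 3
enc = [torch.randn(2, 16) for _ in range(steps)]
quant = [torch.randn(2, 16) for _ in range(steps)]
print(vq_commitment_loss(enc, quant))
```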
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24295/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24295/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/24294
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24294/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24294/comments
https://api.github.com/repos/huggingface/transformers/issues/24294/events
https://github.com/huggingface/transformers/issues/24294
1,758,420,762
I_kwDOCUB6oc5oz18a
24,294
Error during evaluation using deepspeed zero stage 2
{ "login": "shahules786", "id": 25312635, "node_id": "MDQ6VXNlcjI1MzEyNjM1", "avatar_url": "https://avatars.githubusercontent.com/u/25312635?v=4", "gravatar_id": "", "url": "https://api.github.com/users/shahules786", "html_url": "https://github.com/shahules786", "followers_url": "https://api.github.com/users/shahules786/followers", "following_url": "https://api.github.com/users/shahules786/following{/other_user}", "gists_url": "https://api.github.com/users/shahules786/gists{/gist_id}", "starred_url": "https://api.github.com/users/shahules786/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shahules786/subscriptions", "organizations_url": "https://api.github.com/users/shahules786/orgs", "repos_url": "https://api.github.com/users/shahules786/repos", "events_url": "https://api.github.com/users/shahules786/events{/privacy}", "received_events_url": "https://api.github.com/users/shahules786/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hello, could you try the latest release and let us know if that resolves the issues?", "Getting `ModuleNotFoundError: No module named 'funtuner'` when trying to run `python3 funtuner/trainer.py`", "Hi @pacman100 , can you add the PYTHONPATH and try again? \r\n` export PYTHONPATH=\"${PYTHONPATH}:/your-path/Funtuner\" `\r\nAlso checkout the `dev-train` branch. The issue remains the same with the latest version. I tried that. ", "Also, on how many GPUs are you running this? \r\n", "V 100 16GB - 1. ", "with one GPU, there won't be any sharing of the optim states and gradients, therefore it will be same as DDP. So a bit confused there\r\n", "Also, getting various issues when running with 2 GPUs:\r\n\r\nmain-branch\r\n```\r\nRuntimeError: CUDA error: device-side assert triggered\r\nCUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.\r\nFor debugging consider passing CUDA_LAUNCH_BLOCKING=1.\r\n```\r\n\r\ndev-train branch\r\n```\r\nTraceback (most recent call last):\r\n File \"/home/sourab/Funtuner/funtuner/trainer.py\", line 28, in train\r\n os.mkdir(cfg.log_dir)\r\nFileNotFoundError: [Errno 2] No such file or directory: '/scratch/c.scmse/Funtuner-logs'\r\n```\r\n\r\n", "The main branch is not updated, please stick to dev-train for now. For fixing this error, please change the `log_dir` to your folder [here](https://github.com/explodinggradients/Funtuner/blob/c4e66209d5ee276a7eb8caf582435f1eaafbf18f/funtuner/config/config.yaml#L4) also you might want to set `log_wandb=False` \r\nI have run this branch on single and multi GPU settings. Although now I use only single GPU for redpajama-3B model. ", "> with one GPU, there won't be any sharing of the optim states and gradients, therefore it will be same as DDP. So a bit confused there\r\n\r\nI think in single GPU + Deepspeed zero 2 I can benefit from zero offloading and smart GPU mem management allowing me to fit larger models/batch sizes. ", "above PR should resolve the DS issue", "I'll try it out one merged. " ]
1,686
1,686
1,686
NONE
null
### System Info transformers v4.30.0 python 3.8 Training using `deepspeed stage zero 2` hit an error when in evaluation/prediction loop. Both prediction/evaluation initiate [deepspeed with inference=True] (https://github.com/huggingface/transformers/blob/6793f0cfe0006d7cedfb9b6081f55d9d38eae18a/src/transformers/trainer.py#L3045) and hence now can't run inference for anything other than stage 3 (inference not supported for zero 1/2). So my question is how to run deepspeed zero 2? My code is [here](https://github.com/explodinggradients/Funtuner/blob/main/funtuner/trainer.py) Error stack `Traceback (most recent call last): File "funtuner/trainer.py", line 98, in train trainer.train() File "/nfshome/store03/users/c.scmse/venv/lib/python3.8/site-packages/transformers/trainer.py", line 1645, in train return inner_training_loop( File "/nfshome/store03/users/c.scmse/venv/lib/python3.8/site-packages/transformers/trainer.py", line 2011, in _inner_training_loop self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval) File "/nfshome/store03/users/c.scmse/venv/lib/python3.8/site-packages/transformers/trainer.py", line 2312, in _maybe_log_save_evaluate metrics = self.evaluate(ignore_keys=ignore_keys_for_eval) File "/nfshome/store03/users/c.scmse/venv/lib/python3.8/site-packages/transformers/trainer.py", line 3043, in evaluate output = eval_loop( File "/nfshome/store03/users/c.scmse/venv/lib/python3.8/site-packages/transformers/trainer.py", line 3769, in prediction_loop _, _ = deepspeed_init(self, num_training_steps=0, inference=True) File "/nfshome/store03/users/c.scmse/venv/lib/python3.8/site-packages/transformers/deepspeed.py", line 351, in deepspeed_init raise ValueError("ZeRO inference only makes sense with ZeRO Stage 3 - please adjust your config") ValueError: ZeRO inference only makes sense with ZeRO Stage 3 - please adjust your config` ### Who can help? @pacman100 ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction My code is [here](https://github.com/explodinggradients/Funtuner/blob/main/funtuner/trainer.py) Run `python3 funtuner/trainer.py` ### Expected behavior Run evaluation loop without any error using deepspeed stage 1 and 2.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24294/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24294/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24293
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24293/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24293/comments
https://api.github.com/repos/huggingface/transformers/issues/24293/events
https://github.com/huggingface/transformers/pull/24293
1,758,396,816
PR_kwDOCUB6oc5TEt2B
24,293
[fix] bug in BatchEncoding.__getitem__
{ "login": "flybird11111", "id": 37931082, "node_id": "MDQ6VXNlcjM3OTMxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/37931082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/flybird11111", "html_url": "https://github.com/flybird11111", "followers_url": "https://api.github.com/users/flybird11111/followers", "following_url": "https://api.github.com/users/flybird11111/following{/other_user}", "gists_url": "https://api.github.com/users/flybird11111/gists{/gist_id}", "starred_url": "https://api.github.com/users/flybird11111/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/flybird11111/subscriptions", "organizations_url": "https://api.github.com/users/flybird11111/orgs", "repos_url": "https://api.github.com/users/flybird11111/repos", "events_url": "https://api.github.com/users/flybird11111/events{/privacy}", "received_events_url": "https://api.github.com/users/flybird11111/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,686
1,686
1,686
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24293/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24293/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24293", "html_url": "https://github.com/huggingface/transformers/pull/24293", "diff_url": "https://github.com/huggingface/transformers/pull/24293.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24293.patch", "merged_at": 1686828817000 }
https://api.github.com/repos/huggingface/transformers/issues/24292
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24292/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24292/comments
https://api.github.com/repos/huggingface/transformers/issues/24292/events
https://github.com/huggingface/transformers/pull/24292
1,758,347,978
PR_kwDOCUB6oc5TEjPP
24,292
[Docs] Improve docs for MMS loading of other languages
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,686
1,686
1,686
MEMBER
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Clarifies #24223 and improves docs of MMS.
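For context on what "loading of other languages" means for MMS, a hedged example along the lines of the documented usage (the checkpoint name and language codes below are the commonly used ones and may need adjusting for your case):

```python
from transformers import Wav2Vec2ForCTC, AutoProcessor

processor = AutoProcessor.from_pretrained("facebook/mms-1b-all", target_lang="fra")
model = Wav2Vec2ForCTC.from_pretrained(
    "facebook/mms-1b-all", target_lang="fra", ignore_mismatched_sizes=True
)

# Switching language later reuses the same model and just swaps the adapter.
processor.tokenizer.set_target_lang("eng")
model.load_adapter("eng")
```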
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24292/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24292/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24292", "html_url": "https://github.com/huggingface/transformers/pull/24292", "diff_url": "https://github.com/huggingface/transformers/pull/24292.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24292.patch", "merged_at": 1686832173000 }
https://api.github.com/repos/huggingface/transformers/issues/24291
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24291/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24291/comments
https://api.github.com/repos/huggingface/transformers/issues/24291/events
https://github.com/huggingface/transformers/issues/24291
1,758,320,580
I_kwDOCUB6oc5ozdfE
24,291
Cannot serialize Whisper decoder layer in a keras model
{ "login": "perretv", "id": 7593625, "node_id": "MDQ6VXNlcjc1OTM2MjU=", "avatar_url": "https://avatars.githubusercontent.com/u/7593625?v=4", "gravatar_id": "", "url": "https://api.github.com/users/perretv", "html_url": "https://github.com/perretv", "followers_url": "https://api.github.com/users/perretv/followers", "following_url": "https://api.github.com/users/perretv/following{/other_user}", "gists_url": "https://api.github.com/users/perretv/gists{/gist_id}", "starred_url": "https://api.github.com/users/perretv/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/perretv/subscriptions", "organizations_url": "https://api.github.com/users/perretv/orgs", "repos_url": "https://api.github.com/users/perretv/repos", "events_url": "https://api.github.com/users/perretv/events{/privacy}", "received_events_url": "https://api.github.com/users/perretv/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @perretv yes, this looks like a regression. Investigating now, hopefully we can make a quick patch!", "Hi @perretv, the fix has now been merged. You can `pip install git+https://github.com/huggingface/transformers.git` to install from `main` and use it immediately. It'll be included in the next 4.31 release of `transformers`, after which you can go back to normal pip installs." ]
1,686
1,686
1,686
NONE
null
### System Info - `transformers` version: 4.30.2 - Platform: Linux-4.19.0-24-cloud-amd64-x86_64-with-glibc2.28 - Python version: 3.11.3 - Huggingface_hub version: 0.15.1 - Safetensors version: 0.3.1 - PyTorch version (GPU?): not installed (NA) - Tensorflow version (GPU?): 2.12.0 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no ### Who can help? @Rocketknight1 ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [x] My own task or dataset (give details below) ### Reproduction Changes introduced in #23760 and released in transformers [v4.30.0](https://github.com/huggingface/transformers/releases/tag/v4.30.0) are breaking the ability to serialize a keras model that contains a Whisper decoder layer. Here is a minimal reproducible example: ```python from transformers import TFWhisperModel import tensorflow as tf whisper = TFWhisperModel.from_pretrained("openai/whisper-tiny") inp = tf.keras.Input((80, 3000)) stack = whisper.get_encoder()(inp) decoder_input_ids = tf.ones((tf.shape(inp)[0], 1), dtype=tf.int32)* whisper.config.decoder_start_token_id stack = whisper.get_decoder()(input_ids=decoder_input_ids, encoder_hidden_states=stack.last_hidden_state) model = tf.keras.Model(inp, stack) model.summary() model.save("whisper-tiny-custom") ``` With `transformers>=4.30.0`, this minimal reproducible example will raise the error: ``` OperatorNotAllowedInGraphError: Exception encountered when calling layer 'decoder' (type TFWhisperDecoder). Using a symbolic `tf.Tensor` as a Python `bool` is not allowed: AutoGraph did convert this function. This might indicate you are trying to use an unsupported feature. Call arguments received by layer 'decoder' (type TFWhisperDecoder): • self=tf.Tensor(shape=(1, 1), dtype=int32) • input_ids=None • attention_mask=None • position_ids=None • encoder_hidden_states=tf.Tensor(shape=(None, 1500, 384), dtype=float32) • head_mask=None • cross_attn_head_mask=None • past_key_values=None • inputs_embeds=None • use_cache=None • output_attentions=None • output_hidden_states=None • return_dict=None • training=True ``` ### Expected behavior Keras model serialization should succeed.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24291/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24291/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24290
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24290/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24290/comments
https://api.github.com/repos/huggingface/transformers/issues/24290/events
https://github.com/huggingface/transformers/issues/24290
1,757,956,294
I_kwDOCUB6oc5oyEjG
24,290
[Agents] RuntimeError: Invalid device string: 'hakurei/waifu-diffusion'
{ "login": "simonSlamka", "id": 51794014, "node_id": "MDQ6VXNlcjUxNzk0MDE0", "avatar_url": "https://avatars.githubusercontent.com/u/51794014?v=4", "gravatar_id": "", "url": "https://api.github.com/users/simonSlamka", "html_url": "https://github.com/simonSlamka", "followers_url": "https://api.github.com/users/simonSlamka/followers", "following_url": "https://api.github.com/users/simonSlamka/following{/other_user}", "gists_url": "https://api.github.com/users/simonSlamka/gists{/gist_id}", "starred_url": "https://api.github.com/users/simonSlamka/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/simonSlamka/subscriptions", "organizations_url": "https://api.github.com/users/simonSlamka/orgs", "repos_url": "https://api.github.com/users/simonSlamka/repos", "events_url": "https://api.github.com/users/simonSlamka/events{/privacy}", "received_events_url": "https://api.github.com/users/simonSlamka/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Device_string is not looking good.\r\n\r\n@LysandreJik might know better than me where this is coming from.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "@LysandreJik afaik this still hasn't been fixed", "Hey @simonSlamka sorry for the late response. I think what you're trying to do is out of scope of what we're trying to do, we haven't designed the remote tools to work by specifying specific checkpoints in this way.\r\n\r\nFor this tool in particular (but this will need to be adapted to remote tools as these can be community contributed), here's how you would go about it:\r\n\r\n```py\r\nfrom transformers import load_tool\r\n\r\nimggen = load_tool(task_or_repo_id=\"huggingface-tools/text-to-image\")\r\nimggen.default_checkpoint = \"hakurei/waifu-diffusion\"\r\n\r\nimg = imggen(\"cute anime cat\")\r\n```\r\n\r\nI think the best way to leverage a given checkpoint here would be to clone the existing remote tool, replace the checkpoint, and update the generation settings so that they work best with the checkpoint you have in mind. So you would have a remote tool of your own, for example `simonSlamka/anime-text-to-image` that could be used to replace the existing image generation tool in the toolbox (or provided as an additional tool with anime as its focus).\r\n\r\nHope that helps!", "Hi,\r\n\r\nThanks a lot for your assistance and guidance. I will do what you suggested.\r\n\r\nHave a nice day!" ]
1,686
1,689
1,689
NONE
null
### System Info Hey! I have encountered an issue in the `agents` extra where calling `load_tool()` with `model_repo_id=<sd checkpoint model id>` causes a `RuntimeError` to occur. It would appear that the repo id is being used as a device_id when using the `text-to-image` tool: ``` `text_config_dict` is provided which will be used to initialize `CLIPTextConfig`. The value `text_config["id2label"]` will be overriden. Traceback (most recent call last): File "/home/simtoon/transformers/main.py", line 18, in <module> img = imggen("cute anime cat") File "/home/simtoon/.cache/huggingface/modules/transformers_modules/huggingface-tools/text-to-image/8a3d5357ffa541880148f2425c83ba89f7d56172/text_to_image.py", line 45, in __call__ self.setup() File "/home/simtoon/.cache/huggingface/modules/transformers_modules/huggingface-tools/text-to-image/8a3d5357ffa541880148f2425c83ba89f7d56172/text_to_image.py", line 36, in setup self.pipeline.to(self.device) File "/home/simtoon/transformers/venv/lib/python3.10/site-packages/diffusers/pipelines/pipeline_utils.py", line 682, in to module.to(torch_device, torch_dtype) File "/home/simtoon/transformers/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1126, in to device, dtype, non_blocking, convert_to_format = torch._C._nn._parse_to(*args, **kwargs) RuntimeError: Invalid device string: 'hakurei/waifu-diffusion' ``` ``` - `transformers` version: 4.30.2 - Platform: Linux-5.4.0-139-generic-x86_64-with-glibc2.35 - Python version: 3.10.6 - Huggingface_hub version: 0.15.1 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 2.0.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: N/A - Using distributed or parallel set-up in script?: N/A ``` ### Who can help? cc @Narsil ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction - install `transformers` - install `transformers[agents]` - call `load_tool` with `model_repo_id` set to a valid sd checkpoint repo id: `imggen = load_tool(task_or_repo_id="huggingface-tools/text-to-image", model_repo_id="hakurei/waifu-diffusion")` ... `img = imggen("cute anime cat")` - observe the exception ### Expected behavior Pull model from hf hub or load from local and use in the txt2img task
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24290/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24290/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24289
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24289/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24289/comments
https://api.github.com/repos/huggingface/transformers/issues/24289/events
https://github.com/huggingface/transformers/issues/24289
1,757,853,512
I_kwDOCUB6oc5oxrdI
24,289
XLMProphetNet returning different results when using padding
{ "login": "alexayalamcs", "id": 9994551, "node_id": "MDQ6VXNlcjk5OTQ1NTE=", "avatar_url": "https://avatars.githubusercontent.com/u/9994551?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alexayalamcs", "html_url": "https://github.com/alexayalamcs", "followers_url": "https://api.github.com/users/alexayalamcs/followers", "following_url": "https://api.github.com/users/alexayalamcs/following{/other_user}", "gists_url": "https://api.github.com/users/alexayalamcs/gists{/gist_id}", "starred_url": "https://api.github.com/users/alexayalamcs/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alexayalamcs/subscriptions", "organizations_url": "https://api.github.com/users/alexayalamcs/orgs", "repos_url": "https://api.github.com/users/alexayalamcs/repos", "events_url": "https://api.github.com/users/alexayalamcs/events{/privacy}", "received_events_url": "https://api.github.com/users/alexayalamcs/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey! Thanks for reporting! I think this is probably inherent to the model itself, but will see if theres a bug! Our integration tests don't cover this case, and seems like we don't have our common test for this model! ", "No updates yet! @Rocketknight1 if you have time to look at this", "Sorry for the delay - I'm going to try to take this one this week!", "I spent a while at this - I couldn't track down the exact cause, but I noted the following:\r\n- This issue affects `ProphetNet` as well as `XLMProphetNet`\r\n- The discrepancy occurs in the `decoder` module. There is a very small change in `decoder_hidden_states`, but this is probably a numerical issue, and disappears depending on the precision used. It is not the cause of the problem.\r\n- The real issue is in `decoder_ngram_hidden_states` which is initially identical but diverges wildly after the first layer.\r\n- `decoder_ngram_hidden_states` is unique to `ProphetNet` and gets mixed in unusual ways at each step.\r\n\r\nThe problem is that ProphetNet's n-gram decoder is so odd I'm not even sure there's an error here! I feel like an actual bug that caused cross-timestep mixing would have been extremely apparent when the model was first added to the Hub, and all equivalence tests would have failed. It's possible that because of the way it operates on n-grams, `ProphetNet` can't really guarantee that padding will have no effect?\r\n\r\nMaybe we open this issue to the community and see if anyone has time to read the original papers and step through the code to figure out if the behaviour here is intentional?", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,686
1,700
1,700
NONE
null
### System Info - `transformers` version: 4.30.2 - Platform: Linux-5.10.102.1-microsoft-standard-WSL2-x86_64-with-glibc2.17 - Python version: 3.8.16 - Huggingface_hub version: 0.15.1 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 2.0.1+cu117 (False) ### Who can help? @ArthurZucker @patrickvonplaten ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ``` from transformers import XLMProphetNetTokenizer, XLMProphetNetForConditionalGeneration import torch tokenizer = XLMProphetNetTokenizer.from_pretrained("microsoft/xprophetnet-large-wiki100-cased-xglue-ntg") model = XLMProphetNetForConditionalGeneration.from_pretrained("microsoft/xprophetnet-large-wiki100-cased-xglue-ntg").eval() enc_input = tokenizer("test", return_tensors="pt") input_ids = enc_input.input_ids attention_mask = enc_input.attention_mask dec_input_ids = torch.tensor([[model.config.decoder_start_token_id]], dtype=torch.int64) dec_attention_mask = torch.tensor([[1]], dtype=torch.int64) dec_input_ids_pad = torch.tensor([[model.config.decoder_start_token_id, model.config.pad_token_id]], dtype=torch.int64) dec_attention_mask_pad = torch.tensor([[1, 0]], dtype=torch.int64) out1 = model( input_ids=input_ids, attention_mask=attention_mask, decoder_input_ids=dec_input_ids, decoder_attention_mask=dec_attention_mask ) out2 = model( input_ids=input_ids, attention_mask=attention_mask, decoder_input_ids=dec_input_ids_pad, decoder_attention_mask=dec_attention_mask_pad ) torch.isclose(out1.logits, out2.logits[:, 0], atol=1e-1).all() # false ``` ### Expected behavior XLMProphetNet is not returning the same output when the decoder input ids are padded. While the logits are quite similar (high cosine similarity), they are not the same which results in different losses and, in some cases, different predictions. The expected behavior is that the padded and unpadded version produce the same output.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24289/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24289/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24288
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24288/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24288/comments
https://api.github.com/repos/huggingface/transformers/issues/24288/events
https://github.com/huggingface/transformers/pull/24288
1,757,718,420
PR_kwDOCUB6oc5TCbM_
24,288
Beam search type
{ "login": "jprivera44", "id": 9093934, "node_id": "MDQ6VXNlcjkwOTM5MzQ=", "avatar_url": "https://avatars.githubusercontent.com/u/9093934?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jprivera44", "html_url": "https://github.com/jprivera44", "followers_url": "https://api.github.com/users/jprivera44/followers", "following_url": "https://api.github.com/users/jprivera44/following{/other_user}", "gists_url": "https://api.github.com/users/jprivera44/gists{/gist_id}", "starred_url": "https://api.github.com/users/jprivera44/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jprivera44/subscriptions", "organizations_url": "https://api.github.com/users/jprivera44/orgs", "repos_url": "https://api.github.com/users/jprivera44/repos", "events_url": "https://api.github.com/users/jprivera44/events{/privacy}", "received_events_url": "https://api.github.com/users/jprivera44/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Absolutely :)" ]
1,686
1,686
1,686
CONTRIBUTOR
null
# What does this PR do? This PR fixes issue #22856, which has a type mismatch between the returned value and the type hinting in the BeamSearchScorer process function. Previously the type hint was set to a Tuple, which was inconsistent with the returned Dict value. I've changed them to be consistent and ran the following test cases. 1. Printed the process annotations. - Before the changes the return type was typing.Tuple[torch.Tensor] - After the changes the return type was typing.Dict[str, torch.Tensor] 2. Ran PyTest for test_beam_search.py with all 6 test cases passing. # Motivation and Context To ensure high quality code within the HuggingFace repo. ## Who can review? @gante
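A deliberately tiny, simplified illustration of the annotation change the PR describes (the real `BeamSearchScorer.process` takes many more arguments; the toy class below only shows the return-type hint):

```python
from typing import Dict
import torch

class ToyBeamScorer:
    def process(self, next_scores: torch.FloatTensor) -> Dict[str, torch.Tensor]:
        # The method builds and returns a dict, so the hint should be
        # Dict[str, torch.Tensor] rather than Tuple[torch.Tensor].
        return {"next_beam_scores": next_scores}

print(ToyBeamScorer().process(torch.zeros(2, 3)))
```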
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24288/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24288/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24288", "html_url": "https://github.com/huggingface/transformers/pull/24288", "diff_url": "https://github.com/huggingface/transformers/pull/24288.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24288.patch", "merged_at": 1686844082000 }
https://api.github.com/repos/huggingface/transformers/issues/24287
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24287/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24287/comments
https://api.github.com/repos/huggingface/transformers/issues/24287/events
https://github.com/huggingface/transformers/issues/24287
1,757,575,711
I_kwDOCUB6oc5ownof
24,287
cannot import name 'TextIteratorStreamer' from 'transformers'
{ "login": "benam2", "id": 51168654, "node_id": "MDQ6VXNlcjUxMTY4NjU0", "avatar_url": "https://avatars.githubusercontent.com/u/51168654?v=4", "gravatar_id": "", "url": "https://api.github.com/users/benam2", "html_url": "https://github.com/benam2", "followers_url": "https://api.github.com/users/benam2/followers", "following_url": "https://api.github.com/users/benam2/following{/other_user}", "gists_url": "https://api.github.com/users/benam2/gists{/gist_id}", "starred_url": "https://api.github.com/users/benam2/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/benam2/subscriptions", "organizations_url": "https://api.github.com/users/benam2/orgs", "repos_url": "https://api.github.com/users/benam2/repos", "events_url": "https://api.github.com/users/benam2/events{/privacy}", "received_events_url": "https://api.github.com/users/benam2/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,686
1,686
1,686
NONE
null
### System Info Hi, my code raises an error when I try to import `TextIteratorStreamer`: `from transformers import StoppingCriteria, StoppingCriteriaList, TextIteratorStreamer`. I have transformers installed (`!pip install --upgrade transformers`) in Databricks. If it helps, this code runs successfully: python -c "from transformers import pipeline; print(pipeline('sentiment-analysis')('I love you'))" I appreciate your input and help. ### Who can help? @ArthurZucker and @younesbelkada ### Expected behavior The import should just run without error.
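For reference, a minimal working usage of `TextIteratorStreamer` (the class was added to transformers relatively recently, so an older pinned version is the usual cause of this import error; the gpt2 checkpoint below is just for illustration):

```python
from threading import Thread
from transformers import AutoModelForCausalLM, AutoTokenizer, TextIteratorStreamer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
streamer = TextIteratorStreamer(tok, skip_prompt=True)

inputs = tok("Hello, my name is", return_tensors="pt")
generation_kwargs = dict(**inputs, streamer=streamer, max_new_tokens=20)
Thread(target=model.generate, kwargs=generation_kwargs).start()

# The streamer yields decoded text chunks as generation progresses.
for chunk in streamer:
    print(chunk, end="")
```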
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24287/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24287/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24286
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24286/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24286/comments
https://api.github.com/repos/huggingface/transformers/issues/24286/events
https://github.com/huggingface/transformers/pull/24286
1,757,543,753
PR_kwDOCUB6oc5TB0dz
24,286
Update tokenizer_summary.mdx (grammar)
{ "login": "belladoreai", "id": 135602125, "node_id": "U_kgDOCBUfzQ", "avatar_url": "https://avatars.githubusercontent.com/u/135602125?v=4", "gravatar_id": "", "url": "https://api.github.com/users/belladoreai", "html_url": "https://github.com/belladoreai", "followers_url": "https://api.github.com/users/belladoreai/followers", "following_url": "https://api.github.com/users/belladoreai/following{/other_user}", "gists_url": "https://api.github.com/users/belladoreai/gists{/gist_id}", "starred_url": "https://api.github.com/users/belladoreai/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/belladoreai/subscriptions", "organizations_url": "https://api.github.com/users/belladoreai/orgs", "repos_url": "https://api.github.com/users/belladoreai/repos", "events_url": "https://api.github.com/users/belladoreai/events{/privacy}", "received_events_url": "https://api.github.com/users/belladoreai/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,686
1,686
1,686
CONTRIBUTOR
null
# What does this PR do? Update docs with minor grammar fix ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24286/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24286/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24286", "html_url": "https://github.com/huggingface/transformers/pull/24286", "diff_url": "https://github.com/huggingface/transformers/pull/24286.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24286.patch", "merged_at": 1686843108000 }
https://api.github.com/repos/huggingface/transformers/issues/24285
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24285/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24285/comments
https://api.github.com/repos/huggingface/transformers/issues/24285/events
https://github.com/huggingface/transformers/issues/24285
1,757,534,907
I_kwDOCUB6oc5owdq7
24,285
Missing T5X module
{ "login": "akku779", "id": 103017175, "node_id": "U_kgDOBiPq1w", "avatar_url": "https://avatars.githubusercontent.com/u/103017175?v=4", "gravatar_id": "", "url": "https://api.github.com/users/akku779", "html_url": "https://github.com/akku779", "followers_url": "https://api.github.com/users/akku779/followers", "following_url": "https://api.github.com/users/akku779/following{/other_user}", "gists_url": "https://api.github.com/users/akku779/gists{/gist_id}", "starred_url": "https://api.github.com/users/akku779/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/akku779/subscriptions", "organizations_url": "https://api.github.com/users/akku779/orgs", "repos_url": "https://api.github.com/users/akku779/repos", "events_url": "https://api.github.com/users/akku779/events{/privacy}", "received_events_url": "https://api.github.com/users/akku779/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @akku779, thanks for raising this issue. \r\n\r\nIt seems that this is an issue with the installing of the t5x library, rather than one relating to transformers. Running the installation steps I was able to import `t5x` in a python session. \r\n\r\nGiven the `!` at the start of the pip commands, were these steps being run in a notebook or ipython environment? In which case, it's necessary to restart to environment in order for the updates to take affect. ", "@amyeroberts Thanks for getting back to me. I tried re-running my Colab notebook and I am not receiving the same error. Now, there seems to be some sort of dependency mismatch. \r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/content/convert_t5x_checkpoint_to_pytorch.py\", line 36, in <module>\r\n from t5x import checkpoints\r\n File \"/usr/local/lib/python3.10/dist-packages/t5x/__init__.py\", line 17, in <module>\r\n import t5x.adafactor\r\n File \"/usr/local/lib/python3.10/dist-packages/t5x/adafactor.py\", line 64, in <module>\r\n from t5x import utils\r\n File \"/usr/local/lib/python3.10/dist-packages/t5x/utils.py\", line 43, in <module>\r\n import orbax.checkpoint\r\n File \"/usr/local/lib/python3.10/dist-packages/orbax/checkpoint/__init__.py\", line 20, in <module>\r\n from orbax.checkpoint import checkpoint_utils\r\n File \"/usr/local/lib/python3.10/dist-packages/orbax/checkpoint/checkpoint_utils.py\", line 25, in <module>\r\n from orbax.checkpoint import type_handlers\r\n File \"/usr/local/lib/python3.10/dist-packages/orbax/checkpoint/type_handlers.py\", line 25, in <module>\r\n from jax.experimental.gda_serialization import serialization\r\nModuleNotFoundError: No module named 'jax.experimental.gda_serialization'\r\n```", "@akku779 From the traceback, this error is coming from the t5x module, and so isn't a transformers issue. I looks like there's a mismatch in your environment between the jax version installed and what the t5x library expects. ", "Fixed error by upgrading orbax" ]
1,686
1,686
1,686
NONE
null
### System Info I am using the T5X to Pytorch conversion script located in the transformers library to convert my pre-trained T5X model into a Pytorch model; however, upon running the script I receive the error below. ``` Traceback (most recent call last): File "/usr/local/lib/python3.10/dist-packages/transformers/models/t5/convert_t5x_checkpoint_to_pytorch.py", line 36, in <module> from t5x import checkpoints ModuleNotFoundError: No module named 't5x' ``` I have installed the necessary libraries by executing these statements. ``` !pip install transformers !pip install git+https://github.com/google-research/t5x ``` ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction The official guidelines for the script are located in transformers/models/t5/convert_t5x_checkpoint_to_pytorch.py To execute the script, the command below is run. ``` python3 [path_to_file]/convert_t5x_checkpoint_to_pytorch.py --t5x_checkpoint_path=$HOME/t5_1_1_small --config_file=config.json --pytorch_dump_path=$HOME/t5_1_1_small_pt ``` Where config.json is a config for t5-small (https://huggingface.co/t5-small/blob/main/config.json) ### Expected behavior The script should convert the checkpoint to a Pytorch model.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24285/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24285/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24284
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24284/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24284/comments
https://api.github.com/repos/huggingface/transformers/issues/24284/events
https://github.com/huggingface/transformers/pull/24284
1,757,490,559
PR_kwDOCUB6oc5TBo4S
24,284
Split common test from core tests
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "@ydshieh It's just the addition of new tests for modeling_utils or tokenization_utils which is really painful since those files are ~4k lines." ]
1,686
1,686
1,686
COLLABORATOR
null
# What does this PR do? This PR aims at cleaning up test files like `test_modeling_common.py`, which contain two distinct things: the common tester, which is reused by all model tests, and also some core tests of `modeling_utils.py`. This PR splits this file (and all similar ones) into two: the common tester mixin stays in `test_modeling_common.py` and all other tests go to `test_modeling_utils.py`.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24284/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24284/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24284", "html_url": "https://github.com/huggingface/transformers/pull/24284", "diff_url": "https://github.com/huggingface/transformers/pull/24284.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24284.patch", "merged_at": 1686828624000 }
https://api.github.com/repos/huggingface/transformers/issues/24283
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24283/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24283/comments
https://api.github.com/repos/huggingface/transformers/issues/24283/events
https://github.com/huggingface/transformers/issues/24283
1,757,322,322
I_kwDOCUB6oc5ovpxS
24,283
fp16_full_eval argument/flag in training arguments does not increase runtime or decrease memory footprint
{ "login": "garg-aayush", "id": 17342823, "node_id": "MDQ6VXNlcjE3MzQyODIz", "avatar_url": "https://avatars.githubusercontent.com/u/17342823?v=4", "gravatar_id": "", "url": "https://api.github.com/users/garg-aayush", "html_url": "https://github.com/garg-aayush", "followers_url": "https://api.github.com/users/garg-aayush/followers", "following_url": "https://api.github.com/users/garg-aayush/following{/other_user}", "gists_url": "https://api.github.com/users/garg-aayush/gists{/gist_id}", "starred_url": "https://api.github.com/users/garg-aayush/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/garg-aayush/subscriptions", "organizations_url": "https://api.github.com/users/garg-aayush/orgs", "repos_url": "https://api.github.com/users/garg-aayush/repos", "events_url": "https://api.github.com/users/garg-aayush/events{/privacy}", "received_events_url": "https://api.github.com/users/garg-aayush/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "From the command you are pasting, you are not doing any evaluation (`--do_eval` is not set to True). `--fp16_full_eval` is ignored during training, so your model will still take the same place and the same speed during training. It's just for evaluation.", "@sgugger Thanks for replying. Is there any way to run training in full `float16` rather than mixed-precision `float16`? \r\nSomething similar to lightning `Fabric` where you have two separate options of running training, for fp16 use `precision=\"16\"` and for mixed-fp16 use `precision=\"mixed-16\"`", "No there is none in the `Trainer`, since training in float16 does not converge in most cases.", "Thanks " ]
1,686
1,686
1,686
NONE
null
### System Info I am running a simple benchmarking test to see the speedup and accuracy changes that we see when we switch to `float16` (full float16, not mixed precision). For this purpose, I am using the example [semantic segmentation example script](https://github.com/huggingface/transformers/blob/main/examples/pytorch/semantic-segmentation/run_semantic_segmentation.py) This is how I am running code for float32 ```bash python run_semantic_segmentation.py --model_name_or_path nvidia/mit-b0 --dataset_name segments/sidewalk-semantic --output_dir ./segformer_outputs/ --remove_unused_columns False --do_train --evaluation_strategy steps --push_to_hub --push_to_hub_model_id segformer-finetuned-sidewalk-10k-steps --max_steps 10000 --learning_rate 0.00006 --lr_scheduler_type polynomial --per_device_train_batch_size 8 --per_device_eval_batch_size 8 --logging_strategy steps --logging_steps 100 --evaluation_strategy epoch --save_strategy epoch --seed 1337 ``` for full float16 ```bash python run_semantic_segmentation.py --model_name_or_path nvidia/mit-b0 --dataset_name segments/sidewalk-semantic --output_dir ./segformer_outputs/ --remove_unused_columns False --do_train --evaluation_strategy steps --push_to_hub --push_to_hub_model_id segformer-finetuned-sidewalk-10k-steps --max_steps 10000 --learning_rate 0.00006 --lr_scheduler_type polynomial --per_device_train_batch_size 8 --per_device_eval_batch_size 8 --logging_strategy steps --logging_steps 100 --evaluation_strategy epoch --save_strategy epoch --seed 1337 --fp16_full_eval ``` For both I am getting, `~6.9 GB` gpu memory and `~6it/s` I don't think that should be case. For `fp16_full_eval`, there should be some speedup. System info: ``` transformers version: 4.31.0.dev0 python: 3.8.16 GPU: RTX4090 ``` ### Who can help? @sgugger @amyeroberts ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction for float16 full run ```bash python run_semantic_segmentation.py --model_name_or_path nvidia/mit-b0 --dataset_name segments/sidewalk-semantic --output_dir ./segformer_outputs/ --remove_unused_columns False --do_train --evaluation_strategy steps --push_to_hub --push_to_hub_model_id segformer-finetuned-sidewalk-10k-steps --max_steps 10000 --learning_rate 0.00006 --lr_scheduler_type polynomial --per_device_train_batch_size 8 --per_device_eval_batch_size 8 --logging_strategy steps --logging_steps 100 --evaluation_strategy epoch --save_strategy epoch --seed 1337 --fp16_full_eval ``` ### Expected behavior I would have expected some speed up in terms of more iterations/s for `fp16_full_eval`
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24283/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24283/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24282
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24282/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24282/comments
https://api.github.com/repos/huggingface/transformers/issues/24282/events
https://github.com/huggingface/transformers/pull/24282
1,757,311,897
PR_kwDOCUB6oc5TBB8i
24,282
Big TF test cleanup
{ "login": "Rocketknight1", "id": 12866554, "node_id": "MDQ6VXNlcjEyODY2NTU0", "avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Rocketknight1", "html_url": "https://github.com/Rocketknight1", "followers_url": "https://api.github.com/users/Rocketknight1/followers", "following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}", "gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}", "starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions", "organizations_url": "https://api.github.com/users/Rocketknight1/orgs", "repos_url": "https://api.github.com/users/Rocketknight1/repos", "events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}", "received_events_url": "https://api.github.com/users/Rocketknight1/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "(@Rocketknight1 ping me if the gen tests are not sorted after the latest push)", "I think everything has been addressed now, but I'm not going to merge this one today because there's another PR affecting our tests (#24301) and ideally I'd like to be able to separately view their impact on the CI!", "> I think everything has been addressed now, but I'm not going to merge this one today\r\n\r\nNice 👍 .\r\n\r\nI never merge PRs on Firday evening or early afternoon. I don't want to get a ☎️ ⚡ !", "Wait, you merged ...!? (but you said you are not going to merge 🤔 )" ]
1,686
1,686
1,686
MEMBER
null
Now we've done a big overhaul of the TF model internals, a lot of tests can be fixed. Several tests were disabled for being buggy or too slow - these are almost all performant now, so I re-enabled them. Runtime for the re-enabled tests was 15-20 seconds on my local machine. Also, we had a number of TF test failures in the daily CI. I think this PR should fix all of them, except for two cases: Firstly, some models have issues with `resize_token_embeddings`. These failures are caused by the transition to `TFSharedEmbedding` that @gante is currently working on, and I didn't want to interfere! The usual cause is that `resize_token_embeddings` replaces the new-style `TFSharedEmbedding` with an old `tf.Variable`. Secondly, there are a couple of failures in generate tests. I'm also leaving this to @gante because he knows much more about that code than me :sweat_smile:
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24282/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24282/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24282", "html_url": "https://github.com/huggingface/transformers/pull/24282", "diff_url": "https://github.com/huggingface/transformers/pull/24282.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24282.patch", "merged_at": 1686926449000 }
https://api.github.com/repos/huggingface/transformers/issues/24281
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24281/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24281/comments
https://api.github.com/repos/huggingface/transformers/issues/24281/events
https://github.com/huggingface/transformers/pull/24281
1,757,215,389
PR_kwDOCUB6oc5TAs9S
24,281
Add MMS CTC Fine-Tuning
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "> I think this should go in its own example instead of adding some more code to the (already complex) ctc example. It's preferable to have multiple examples focused on one thing than one big multi-purpose example.\r\n\r\nOk for me", "Added a test. Moved the code into a new example file. Added an extensive README. WER for a quick 10min run can be as low as 23% WER! ", "Demo training run: https://huggingface.co/patrickvonplaten/wav2vec2-common_voice-tr-mms-demo", "In which release will this be available in?", "You can find the examples scripts here: https://github.com/huggingface/transformers/tree/main/examples/pytorch/speech-recognition#connectionist-temporal-classification-with-adapters\r\n\r\nThey assume that you are running from the latest dev version:\r\nhttps://github.com/huggingface/transformers/blob/f10452271802573fe6e19442631113c4c23a2c70/examples/pytorch/speech-recognition/run_speech_recognition_ctc_adapter.py#L55-L56\r\n\r\nWhich you can do by following the instructions for installing from source or editable install here: https://huggingface.co/docs/transformers/installation#install-from-source\r\n\r\nAlthough for MMS ASR fine-tuning, you can safely run the script using the latest PyPi release version (4.31.0)." ]
1,686
1,690
1,686
MEMBER
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> This PR adds language adapter fine-tuning for MMS. Still playing around with good hyper-parameters but script is functional. Getting some very nice results now for: ```py export CUDA_VISIBLE_DEVICES="0" LEARNING_RATE="1e-3" python run_speech_recognition_ctc.py \ --dataset_name="common_voice" \ --model_name_or_path="facebook/mms-1b-all" \ --dataset_config_name="tr" \ --output_dir="./wav2vec2-common_voice-tr-mms-demo" \ --overwrite_output_dir \ --num_train_epochs="15" \ --per_device_train_batch_size="32" \ --learning_rate="${LEARNING_RATE}" \ --warmup_steps="400" \ --evaluation_strategy="steps" \ --text_column_name="sentence" \ --length_column_name="input_length" \ --save_steps="400" \ --eval_steps="200" \ --layerdrop="0.0" \ --save_total_limit="3" \ --adapter_attn_dim="16" \ --adapter_language="tur" \ --gradient_checkpointing \ --chars_to_ignore , ? . ! - \; \: \" “ % ‘ ” � \ --fp16 \ --group_by_length \ --do_train --do_eval ``` WER drops to 25% just after 200 steps. See: https://wandb.ai/patrickvonplaten/huggingface/runs/6f5cx5gg?workspace=user-patrickvonplaten @sgugger @amyeroberts @sanchit-gandhi it'd be super nice to get a quick review here whether the code changes are generally fine with you. I'll only have to fill out the TODOs in the README with a nice example code and some description. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24281/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24281/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24281", "html_url": "https://github.com/huggingface/transformers/pull/24281", "diff_url": "https://github.com/huggingface/transformers/pull/24281.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24281.patch", "merged_at": 1686784227000 }
https://api.github.com/repos/huggingface/transformers/issues/24280
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24280/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24280/comments
https://api.github.com/repos/huggingface/transformers/issues/24280/events
https://github.com/huggingface/transformers/issues/24280
1,757,048,966
I_kwDOCUB6oc5ounCG
24,280
Loading fp16 model as fp32 when using .from_pretrained()
{ "login": "EdanZizo", "id": 128597952, "node_id": "U_kgDOB6o_wA", "avatar_url": "https://avatars.githubusercontent.com/u/128597952?v=4", "gravatar_id": "", "url": "https://api.github.com/users/EdanZizo", "html_url": "https://github.com/EdanZizo", "followers_url": "https://api.github.com/users/EdanZizo/followers", "following_url": "https://api.github.com/users/EdanZizo/following{/other_user}", "gists_url": "https://api.github.com/users/EdanZizo/gists{/gist_id}", "starred_url": "https://api.github.com/users/EdanZizo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/EdanZizo/subscriptions", "organizations_url": "https://api.github.com/users/EdanZizo/orgs", "repos_url": "https://api.github.com/users/EdanZizo/repos", "events_url": "https://api.github.com/users/EdanZizo/events{/privacy}", "received_events_url": "https://api.github.com/users/EdanZizo/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @younesbelkada ", "Hi @EdanZizo \r\nto load the model with the desired dtype directly from the config I believe you should use `torch_dtype=\"auto\"` in `GPTJForCausalLM.from_pretrained(\"ThisIsMyUsername69/gpt-j-6B-16bit\", config=config)`. But note that the canonical way to load any model in half precision is: \r\n```python\r\nGPTJForCausalLM.from_pretrained(\"ThisIsMyUsername69/gpt-j-6B-16bit\", torch_dtype=torch.float16)\r\n```", "It is working now, thanks for your help." ]
1,686
1,686
1,686
NONE
null
### System Info When loading GPT-J with GPTJForCausalLM.from_pretrained() from a 16-bit checkpoint, which should be approximately 12GB, the model instead has a size of ~23GB, which corresponds to the full 32-bit weights. The code: ``` device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu') tokenizer = AutoTokenizer.from_pretrained("ThisIsMyUsername69/gpt-j-6B-16bit") config = AutoConfig.from_pretrained("ThisIsMyUsername69/gpt-j-6B-16bit", torch_dtype=torch.float16) model = GPTJForCausalLM.from_pretrained("ThisIsMyUsername69/gpt-j-6B-16bit", config=config) ``` I've tried multiple ways of loading in 16 bit (from_config, with or without AutoConfig); regardless, it always seems to use 23GB of VRAM, except with EleutherAI/gpt-j-6B using revision float16. The model has a memory footprint of 23194MiB. "ThisIsMyUsername69/gpt-j-6B-16bit" and "nlpcloud/instruct-gpt-j-fp16" both give the larger model size. I have tried passing the parameter ' revision="float16" ' and it gives the same result. ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction 1. run the code above 2. observe model size ### Expected behavior The model size should be approximately 11GB; however, it is giving the full model weight (32-bit float) size.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24280/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24280/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24279
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24279/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24279/comments
https://api.github.com/repos/huggingface/transformers/issues/24279/events
https://github.com/huggingface/transformers/pull/24279
1,757,029,928
PR_kwDOCUB6oc5TAFIi
24,279
Clean up old Accelerate checks
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,686
1,686
1,686
COLLABORATOR
null
# What does this PR do? Since we now enforce at init that the user has at least the minimum pinned version of Accelerate, we can remove a lot of boilerplate code checking whether things are available or not.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24279/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24279/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24279", "html_url": "https://github.com/huggingface/transformers/pull/24279", "diff_url": "https://github.com/huggingface/transformers/pull/24279.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24279.patch", "merged_at": 1686761050000 }
https://api.github.com/repos/huggingface/transformers/issues/24278
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24278/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24278/comments
https://api.github.com/repos/huggingface/transformers/issues/24278/events
https://github.com/huggingface/transformers/issues/24278
1,756,995,985
I_kwDOCUB6oc5ouaGR
24,278
Allowing one to pass run_config to hyperparameter tuning (to allow storing checkpoints on s3)
{ "login": "hugocool", "id": 25592581, "node_id": "MDQ6VXNlcjI1NTkyNTgx", "avatar_url": "https://avatars.githubusercontent.com/u/25592581?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hugocool", "html_url": "https://github.com/hugocool", "followers_url": "https://api.github.com/users/hugocool/followers", "following_url": "https://api.github.com/users/hugocool/following{/other_user}", "gists_url": "https://api.github.com/users/hugocool/gists{/gist_id}", "starred_url": "https://api.github.com/users/hugocool/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hugocool/subscriptions", "organizations_url": "https://api.github.com/users/hugocool/orgs", "repos_url": "https://api.github.com/users/hugocool/repos", "events_url": "https://api.github.com/users/hugocool/events{/privacy}", "received_events_url": "https://api.github.com/users/hugocool/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @hugocool, \r\n\r\nThanks for raising this issue. The integrations are maintained by their authors and not us. You can definitely open a PR, just make sure to ping them to verify the change! \r\n ", "`transformers.integrations.run_hp_search_ray` calls `ray.tune.run` which doesn't accept `run_config` as that is part of the newer `Tuner` API. According to https://discuss.ray.io/t/tune-run-vs-tuner-fit/7041/3 `Tuner` is supposed to replace `run` in the long term, although for now it's still beta. `transformers` should probably migrate to `Tuner` at some point, I don't know if/when that'll be a good idea.\r\n\r\nIn the meantime, `ray.tune.run` directly accepts `callbacks` and `storage_path` arguments, so I think passing them to `hyperparameter_search` without wrapping in a `RunConfig` should just work?", "That’s correct, somewhat hidden in ray.tune one finds that **kwargs are passed, unfortunately it is not documented, nor what settings to use in order to prevent local accumulation of checkpoints to the point your disk fills (which is governed by the `check_point_freq`).\r\n\r\nSo maybe on update to the documentation is in order?\r\nI could provide an example of how to run HF tuning on AWS batch for example?\r\n\r\n\r\nOn 20 Jun 2023 at 18:14 +0200, Alex Hall ***@***.***>, wrote:\r\n> transformers.integrations.run_hp_search_ray calls ray.tune.run which doesn't accept run_config as that is part of the newer Tuner API. According to https://discuss.ray.io/t/tune-run-vs-tuner-fit/7041/3 Tuner is supposed to replace run in the long term, although for now it's still beta. transformers should probably migrate to Tuner at some point, I don't know if/when that'll be a good idea.\r\n> In the meantime, ray.tune.run directly accepts callbacks and storage_path arguments, so I think passing them to hyperparameter_search without wrapping in a RunConfig should just work?\r\n> —\r\n> Reply to this email directly, view it on GitHub, or unsubscribe.\r\n> You are receiving this because you were mentioned.Message ID: ***@***.***>\r\n", "> somewhat hidden in ray.tune one finds that **kwargs are passed\r\n\r\nDo you mean hidden in `transformers.hyperparameter_search`/`run_hp_search_ray`?\r\n\r\n> I could provide an example of how to run HF tuning on AWS batch for example?\r\n\r\nI don't know who you're offering this to. I'm not using this myself, I'm looking for ways to contribute to this repo in general.", "> Do you mean hidden in transformers.hyperparameter_search/run_hp_search_ray?\r\nYes, there and within `ray/tune/tune.py`; `run`\r\n> > I could provide an example of how to run HF tuning on AWS batch for example?\r\n> > I don't know who you're offering this to. I'm not using this myself, I'm looking for ways to contribute to this repo in general.\r\nI’m sorry for being a little vague, basically the documentation is lacking, I had to dig through GitHub issues, forum posts and source code to figure out how to do this.\r\nThe API that hugging face exposes is deceptively simple, it seems on the surface like it will just work while in reality this is not the case.\r\nThis is not helped by the Enum choice connection to the where you provide backend=‘ray’, and now type checking doesn’t help you..\r\nSo, more documentation could definitely help people I think.\r\n\r\nMaybe I could help provide more documentation? 
Based on the examples I worked out?\r\nIdk, id hope that would help others!\r\n\r\n\r\n___\r\n\r\nHugo Evers\r\nOn 20 Jun 2023 at 18:42 +0200, Alex Hall ***@***.***>, wrote:\r\n> > somewhat hidden in ray.tune one finds that **kwargs are passed\r\n> Do you mean hidden in transformers.hyperparameter_search/run_hp_search_ray?\r\n> > I could provide an example of how to run HF tuning on AWS batch for example?\r\n> I don't know who you're offering this to. I'm not using this myself, I'm looking for ways to contribute to this repo in general.\r\n> —\r\n> Reply to this email directly, view it on GitHub, or unsubscribe.\r\n> You are receiving this because you were mentioned.Message ID: ***@***.***>\r\n", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,686
1,690
1,690
NONE
null
### Feature request [In Ray 2.5.1 it becomes possible to set the intermediate storage of checkpoints to S3](https://docs.ray.io/en/latest/tune/tutorials/tune-storage.html#configuring-tune-with-cloud-storage-aws-s3-google-cloud-storage). ``` from ray.air.config import RunConfig from ray.air.integrations.mlflow import MLflowLoggerCallback run_config = RunConfig( storage_path="s3://.....", callbacks=[MLflowLoggerCallback], ) ``` However, in the huggingface integration it is not possible to pass this kwarg to the tuner, like so: ``` best_run = trainer.hyperparameter_search( direction="maximize", backend="ray", run_config=run_config, ) ``` ### Motivation To tune a transformer without clogging local storage ### Your contribution I could submit a proposal
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24278/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24278/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24277
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24277/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24277/comments
https://api.github.com/repos/huggingface/transformers/issues/24277/events
https://github.com/huggingface/transformers/pull/24277
1,756,923,530
PR_kwDOCUB6oc5S_tz1
24,277
Update check of core deps
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24277). All of your documentation changes will be reflected on that endpoint." ]
1,686
1,686
1,686
COLLABORATOR
null
# What does this PR do? This PR updates the check of core dependencies to: - include all of them (huggingface_hub and safetensors were not tested) - add Accelerate since we have a lot of issues with version mismatches, with the work done on the Trainer cc @ydshieh one of the lines removed here concerns Python 3.6, but you should include the equivalent in your PR to drop Python 3.7
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24277/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24277/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24277", "html_url": "https://github.com/huggingface/transformers/pull/24277", "diff_url": "https://github.com/huggingface/transformers/pull/24277.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24277.patch", "merged_at": 1686751592000 }
https://api.github.com/repos/huggingface/transformers/issues/24276
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24276/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24276/comments
https://api.github.com/repos/huggingface/transformers/issues/24276/events
https://github.com/huggingface/transformers/issues/24276
1,756,889,080
I_kwDOCUB6oc5ot__4
24,276
[TokenizerSlow] `replace_additional_special_tokens` is not doing much
{ "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false }
[ { "id": 1834056635, "node_id": "MDU6TGFiZWwxODM0MDU2NjM1", "url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Tokenization", "name": "Core: Tokenization", "color": "FF4446", "default": false, "description": "Internals of the library; Tokenization." } ]
closed
false
{ "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false }
[ { "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false } ]
[ "cc @ydshieh since you added the feature", "~~I don't *fully* understand what the code snippet above try to demonstrate.~~\r\n\r\nBut the origin of `self._additional_special_tokens` is from this issue #20418, where `added_tokens_encoder` will include all the added tokens, but `additional_special_tokens` is being replaced, which is really confusing behavior.\r\n\r\nIf you look the description in #20418, your code snippet does its job (although yes confusing).\r\n\r\nThe `replace_additional_special_tokens` with its default value `True` is just to make the behavior not **too** surprising, but keep the backward compatibility.\r\n\r\n", "> It was confusingly for me that the added tokens encoder is not updated.\r\n\r\nyeah I know, but that's what it has been for years. (and I agree that the name of this introduced argument itself might be confusing too.) \r\n\r\n> That’s because maybe we should have a separate function just to say that’s we don’t want this token to be special anymore\r\n\r\nIf you have good idea to address the issue #20418 while reducing the (naming) confusion added in #20424, go ahead :-)\r\n\r\n(sorry, I accidentally modified your message 😭 )", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Closing as this is deprecated and changing the list of additional special tokens is a lot more involved than this" ]
1,686
1,697
1,697
COLLABORATOR
null
Just flagging this as the `add_special_tokens` method got pretty complicated, adding a kwargs, `replace_additional_special_tokens`, that supposedly can prevent replacing the `self._additional_special_tokens` attribute. For any tokenizer, this will remove it from the list, but will not update the internal `trie` and thus has no effect at all: ```python >>> from transformers import XLMRobertaTokenizer >>> tokenizer_a = XLMRobertaTokenizer.from_pretrained('xlm-roberta-base') >>> tokenizer_a.add_special_tokens({"additional_special_tokens":["<//s>"]}) >>> tokenizer_a.additional_special_tokens ['<//s>'] >>> print(tokenizer_a.tokenize("This is a <//s>")) ['▁This', '▁is', '▁a', '<//s>'] >>> tokenizer_a.add_special_tokens({"additional_special_tokens":["<///s>"]}, replace_additional_special_tokens= True) >>> print(tokenizer_a.tokenize("This is a <//s>")) ['▁This', '▁is', '▁a', '<//s>'] ``` This will be addressed in #23909
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24276/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24276/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24275
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24275/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24275/comments
https://api.github.com/repos/huggingface/transformers/issues/24275/events
https://github.com/huggingface/transformers/issues/24275
1,756,871,301
I_kwDOCUB6oc5ot7qF
24,275
Can we convert dynamic DNN model to TorchScript?
{ "login": "ranggihwang", "id": 50730045, "node_id": "MDQ6VXNlcjUwNzMwMDQ1", "avatar_url": "https://avatars.githubusercontent.com/u/50730045?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ranggihwang", "html_url": "https://github.com/ranggihwang", "followers_url": "https://api.github.com/users/ranggihwang/followers", "following_url": "https://api.github.com/users/ranggihwang/following{/other_user}", "gists_url": "https://api.github.com/users/ranggihwang/gists{/gist_id}", "starred_url": "https://api.github.com/users/ranggihwang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ranggihwang/subscriptions", "organizations_url": "https://api.github.com/users/ranggihwang/orgs", "repos_url": "https://api.github.com/users/ranggihwang/repos", "events_url": "https://api.github.com/users/ranggihwang/events{/privacy}", "received_events_url": "https://api.github.com/users/ranggihwang/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey! Thanks for reporting, I think it might also be because our Switch returns None values sometimes, checking\r\n! ", "Thanks a lot for providing the full reproduction script 🤗 \r\nActually, I don't think that MoE models can be torch scripted as the path taken by the inputs will be different for each tokens (because of the routing mechanism). \r\nHowever, there was a problem of returning `None` values, and not returning the router probs if labels were `None`. Fixing in #24300", "Thanks, so, do you mean that the MoE model cannot be torchscripted as it has a dynamic workflow depending on its input, not because returning `None` value?\r\n\r\nAnd for the `None` value issue, which one of these is the newest correct version? They are slightly different.\r\nhttps://github.com/huggingface/transformers/pull/24300/files/3b899c180d4a38a06d34dcb1687626594f0497a0\r\nhttps://github.com/huggingface/transformers/commit/ba3fb4b8d72b9202423cda01896349a883480d2e#diff-897fe3777ef1c9d71d6268fac217b0e163f2e20a3a5e4fabfe5a3675bc9202c7", "The return value was indeed an issue, which prevent starting the tracing. But now that `None` are not returned anymore, the model still cannot be traced because of the dynamic workflow yes. \r\nThe correct commit is the one that was merged to main! ", "Thank a lot!\r\nBut then can I just use scripting for torchscript?\r\nAs far as I know, there're two optimization schemes for torchscript, tracing and scripting.\r\nSo can I just adopt only scripting selectively?", "Yep! I think scripting is what you should be using for dynamic workflows! ", "Well, I found something interesting.\r\n\r\nAs shown below, scripting for the T5 model also does not work.\r\nBut as shown above, tracing worked.\r\n\r\nHow does this happen?\r\n\r\n```python\r\nfrom transformers import T5Tokenizer, T5ForConditionalGeneration\r\nimport torch\r\n\r\ntokenizer = T5Tokenizer.from_pretrained('t5-small')\r\nmodel = T5ForConditionalGeneration.from_pretrained('t5-small', torchscript = True)\r\ninput_ids = tokenizer('The <extra_id_0> walks in <extra_id_1> park', return_tensors='pt').input_ids\r\nattention_mask = input_ids.ne(model.config.pad_token_id).long()\r\ndecoder_input_ids = tokenizer('<pad> <extra_id_0> cute dog <extra_id_1> the <extra_id_2>', return_tensors='pt').input_ids\r\n\r\n# traced_model = torch.jit.trace(model, (input_ids, attention_mask, decoder_input_ids))\r\nscripted_model = torch.jit.script(model)\r\n# torch.jit.save(traced_model, \"traced_t5.pt\")\r\n```" ]
1,686
1,687
1,687
NONE
null
### System Info - `transformers` version: 4.26.1 - Platform: Linux-5.4.0-42-generic-x86_64-with-glibc2.10 - Python version: 3.8.13 - Huggingface_hub version: 0.13.4 - PyTorch version (GPU?): 2.0.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @ArthurZucker @sgugger @gante ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Hi all. I'm trying to convert SwitchTransformer model to TorchScript. (SwitchTransformer model is MoE DNN based on Google T5 model.) When converting both T5 and SwitchTransforemer, there's no error for T5 but I got following error for SwitchTransformer. ``` /root/HuggingFace/.HF/lib/python3.8/site-packages/transformers/modeling_utils.py:776: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! if causal_mask.shape[1] < attention_mask.shape[1]: Traceback (most recent call last): File "example.py", line 423, in <module> traced_model = torch.jit.trace(model, (input_ids, attention_mask, decoder_input_ids)) File "/root/HuggingFace/.HF/lib/python3.8/site-packages/torch/jit/_trace.py", line 794, in trace return trace_module( File "/root/HuggingFace/.HF/lib/python3.8/site-packages/torch/jit/_trace.py", line 1056, in trace_module module._c._create_method_from_trace( RuntimeError: Only tensors, lists, tuples of tensors, or dictionary of tensors can be output from traced functions ``` I think it is because of the dynamic characteristics of SwitchTransformer. This is the code for T5. ```python from transformers import T5Tokenizer, T5ForConditionalGeneration import torch tokenizer = T5Tokenizer.from_pretrained('t5-small') model = T5ForConditionalGeneration.from_pretrained('t5-small', torchscript = True) input_ids = tokenizer('The <extra_id_0> walks in <extra_id_1> park', return_tensors='pt').input_ids attention_mask = input_ids.ne(model.config.pad_token_id).long() decoder_input_ids = tokenizer('<pad> <extra_id_0> cute dog <extra_id_1> the <extra_id_2>', return_tensors='pt').input_ids traced_model = torch.jit.trace(model, (input_ids, attention_mask, decoder_input_ids)) torch.jit.save(traced_model, "traced_t5.pt") ``` And this is the code for SwitchTransformer. ```python from transformers import AutoTokenizer, SwitchTransformersForConditionalGeneration from transformers import AutoTokenizer, SwitchTransformersConfig import torch # Tokenizer tokenizer = AutoTokenizer.from_pretrained( "google/switch-base-8", resume_download=True) model = SwitchTransformersForConditionalGeneration.from_pretrained( "google/switch-base-8", resume_download=True, torch_dtype=torch.bfloat16, torchscript=True, ) input_text = "A <extra_id_0> walks into a bar a orders a <extra_id_1> with <extra_id_2> pinch of <extra_id_3>." 
output_text = "<pad> <extra_id_0> man<extra_id_1> beer<extra_id_2> a<extra_id_3> salt<extra_id_4>.</s>" input_ids = tokenizer(input_text, return_tensors="pt").input_ids decoder_input_ids = tokenizer(output_text, return_tensors="pt", padding=True).input_ids attention_mask = input_ids.ne(model.config.pad_token_id).long() # model.eval() traced_model = torch.jit.trace(model, (input_ids, attention_mask, decoder_input_ids)) ``` ### Expected behavior TorchScript version of SwitchTransformer.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24275/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24275/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24274
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24274/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24274/comments
https://api.github.com/repos/huggingface/transformers/issues/24274/events
https://github.com/huggingface/transformers/pull/24274
1,756,869,606
PR_kwDOCUB6oc5S_iBH
24,274
Fix resuming PeftModel checkpoints in Trainer
{ "login": "llohann-speranca", "id": 105556006, "node_id": "U_kgDOBkqoJg", "avatar_url": "https://avatars.githubusercontent.com/u/105556006?v=4", "gravatar_id": "", "url": "https://api.github.com/users/llohann-speranca", "html_url": "https://github.com/llohann-speranca", "followers_url": "https://api.github.com/users/llohann-speranca/followers", "following_url": "https://api.github.com/users/llohann-speranca/following{/other_user}", "gists_url": "https://api.github.com/users/llohann-speranca/gists{/gist_id}", "starred_url": "https://api.github.com/users/llohann-speranca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/llohann-speranca/subscriptions", "organizations_url": "https://api.github.com/users/llohann-speranca/orgs", "repos_url": "https://api.github.com/users/llohann-speranca/repos", "events_url": "https://api.github.com/users/llohann-speranca/events{/privacy}", "received_events_url": "https://api.github.com/users/llohann-speranca/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Although this continues training but it doesnt retain the old stuff for me. Can someone look into this?\r\n", "This doesn't seem like older resume from checkpoint that has it for pytorch models. Inside trainer.train we need to pass resume from checkpoint parameters as our last checkpoint path. while passing the path, it shows as can't find a valid checkpoint. Could someone please post some code snippet on how to use resume from checkpoint for PEFT models?", "@pacman100 Requesting to review and merge the changes, thanks!", "Hi @techthiyanes \r\n\r\n> This doesn't seem like older resume from checkpoint that has it for pytorch models. Inside trainer.train we need to pass resume from checkpoint parameters as our last checkpoint path. while passing the path, it shows as can't find a valid checkpoint. Could someone please post some code snippet on how to use resume from checkpoint for PEFT models?\r\n\r\nAs shared in the snippet above, to make `resume_from_checkpoint` work as expected, it assumes that you have previously trained your model using trainer that saves artifacts under `{output_dir}/checkpoint-{i}`, I have \"faked\" that in the example by manually saving a model in a folder called `{output_dir}/checkpoint-1`. Therefore you need to make sure the model weights lives under that folder.", "> This doesn't seem like older resume from checkpoint that has it for pytorch models. Inside trainer.train we need to pass resume from checkpoint parameters as our last checkpoint path. while passing the path, it shows as can't find a valid checkpoint. Could someone please post some code snippet on how to use resume from checkpoint for PEFT models?\r\n\r\nHi @younesbelkada can you look at my issue with the code and please address it?\r\n", "Yes @adityaaryan77 , sure, please have a look at my comment on the PEFT issue and discuss there", "Hi @younesbelkada\r\n\r\n> ```python\r\n> ```python\r\n> trainer.train(resume_from_checkpoint=True)\r\n> ```\r\n> \r\n> \r\n> \r\n> \r\n> \r\n> \r\n> \r\n> \r\n> \r\n> \r\n> \r\n> ```\r\n\r\nThank you for your response.\r\n\r\nPlease look at below code snippet:\r\n\r\n# -*- coding: utf-8 -*-\r\n\"\"\"Untitled345.ipynb\r\n\r\nAutomatically generated by Colaboratory.\r\n\r\nOriginal file is located at\r\n https://colab.research.google.com/drive/1SgzMXUUDK1wDH0M0yQPfWmeNAKyy7EFs\r\n\"\"\"\r\n\r\n! 
pip install datasets transformers peft evaluate\r\n\r\n!git clone https://github.com/llohann-speranca/transformers.git -b fix-resume-checkpoint-for-peftmodel\r\n\r\n!cp -r /content/transformers /usr/local/lib/python3.10/dist-packages/transformers\r\n\r\nimport transformers\r\nimport numpy as np\r\nGLUE_TASKS = [\"cola\", \"mnli\", \"mnli-mm\", \"mrpc\", \"qnli\", \"qqp\", \"rte\", \"sst2\", \"stsb\", \"wnli\"]\r\ntask = \"cola\"\r\nmodel_checkpoint = \"bert-large-uncased\"\r\nbatch_size = 16\r\nfrom datasets import load_dataset, load_metric\r\nactual_task = \"mnli\" if task == \"mnli-mm\" else task\r\ndataset = load_dataset(\"glue\", actual_task)\r\nmetric = load_metric('glue', actual_task)\r\nfrom transformers import AutoTokenizer\r\ntokenizer = AutoTokenizer.from_pretrained(model_checkpoint, use_fast=True)\r\ntask_to_keys = {\r\n \"cola\": (\"sentence\", None),\r\n \"mnli\": (\"premise\", \"hypothesis\"),\r\n \"mnli-mm\": (\"premise\", \"hypothesis\"),\r\n \"mrpc\": (\"sentence1\", \"sentence2\"),\r\n \"qnli\": (\"question\", \"sentence\"),\r\n \"qqp\": (\"question1\", \"question2\"),\r\n \"rte\": (\"sentence1\", \"sentence2\"),\r\n \"sst2\": (\"sentence\", None),\r\n \"stsb\": (\"sentence1\", \"sentence2\"),\r\n \"wnli\": (\"sentence1\", \"sentence2\"),\r\n}\r\nsentence1_key, sentence2_key = task_to_keys[task]\r\nif sentence2_key is None:\r\n print(f\"Sentence: {dataset['train'][0][sentence1_key]}\")\r\nelse:\r\n print(f\"Sentence 1: {dataset['train'][0][sentence1_key]}\")\r\n print(f\"Sentence 2: {dataset['train'][0][sentence2_key]}\")\r\ndef preprocess_function(examples):\r\n if sentence2_key is None:\r\n return tokenizer(examples[sentence1_key], truncation=True)\r\n return tokenizer(examples[sentence1_key], examples[sentence2_key], truncation=True)\r\nencoded_dataset = dataset.map(preprocess_function, batched=True)\r\nfrom transformers import AutoModelForSequenceClassification, TrainingArguments, Trainer\r\nfrom peft import (\r\n get_peft_config,\r\n get_peft_model,\r\n get_peft_model_state_dict,\r\n set_peft_model_state_dict,\r\n LoraConfig,\r\n PeftType,\r\n PrefixTuningConfig,\r\n PromptEncoderConfig,\r\n)\r\npeft_type = PeftType.LORA\r\ndevice = \"cuda\"\r\npeft_config = LoraConfig(task_type=\"SEQ_CLS\", inference_mode=False, r=8, lora_alpha=16, lora_dropout=0.1)\r\nlr = 3e-4\r\n\r\nnum_labels = 3 if task.startswith(\"mnli\") else 1 if task==\"stsb\" else 2\r\nmodel = AutoModelForSequenceClassification.from_pretrained(model_checkpoint, num_labels=num_labels)\r\nmodel = get_peft_model(model, peft_config)\r\nmodel.print_trainable_parameters()\r\nmodel\r\nmetric_name = \"pearson\" if task == \"stsb\" else \"matthews_correlation\" if task == \"cola\" else \"accuracy\"\r\nmodel_name = model_checkpoint.split(\"/\")[-1]\r\n\r\nargs = TrainingArguments(\r\n f\"{model_name}-finetuned1-{task}\",\r\n evaluation_strategy = \"epoch\",\r\n save_strategy = \"epoch\",\r\n learning_rate=2e-5,\r\n per_device_train_batch_size=batch_size,\r\n per_device_eval_batch_size=batch_size,\r\n num_train_epochs=2,\r\n weight_decay=0.01,\r\n # load_best_model_at_end=True,\r\n metric_for_best_model=metric_name,\r\n # push_to_hub=True,\r\n)\r\nfrom transformers import Seq2SeqTrainer, TrainerCallback, TrainingArguments, TrainerState, TrainerControl\r\nfrom transformers.trainer_utils import PREFIX_CHECKPOINT_DIR\r\nimport os\r\n\r\nclass SavePeftModelCallback(TrainerCallback):\r\n def on_save(\r\n self,\r\n args: TrainingArguments,\r\n state: TrainerState,\r\n control: TrainerControl,\r\n **kwargs,\r\n ):\r\n 
checkpoint_folder = os.path.join(args.output_dir, f\"{PREFIX_CHECKPOINT_DIR}-{state.global_step}\")\r\n\r\n peft_model_path = os.path.join(checkpoint_folder, \"adapter_model\")\r\n kwargs[\"model\"].save_pretrained(peft_model_path)\r\n\r\n pytorch_model_path = os.path.join(checkpoint_folder, \"pytorch_model.bin\")\r\n if os.path.exists(pytorch_model_path):\r\n os.remove(pytorch_model_path)\r\n return control\r\ndef compute_metrics(eval_pred):\r\n predictions, labels = eval_pred\r\n if task != \"stsb\":\r\n predictions = np.argmax(predictions, axis=1)\r\n else:\r\n predictions = predictions[:, 0]\r\n return metric.compute(predictions=predictions, references=labels)\r\nvalidation_key = \"validation_mismatched\" if task == \"mnli-mm\" else \"validation_matched\" if task == \"mnli\" else \"validation\"\r\ntrainer = Trainer(\r\n model,\r\n args,\r\n train_dataset=encoded_dataset[\"train\"],\r\n eval_dataset=encoded_dataset[validation_key],\r\n tokenizer=tokenizer,\r\n compute_metrics=compute_metrics,\r\n callbacks=[SavePeftModelCallback],\r\n)\r\ntrainer.train()\r\n\r\ntrainer.save_model(\"/content/bert-large-uncased-finetuned1-cola/checkpoint-1\")\r\n\r\ntrainer.train(resume_from_checkpoint='/content/bert-large-uncased-finetuned1-cola/checkpoint-1070/adapter_model')\r\n\r\n\r\n\r\nInside the resume from checkpoint i have tried with below options\r\n1) resume_from_checkpoint = True\r\n2) resume_from_checkpoint = (Last checkpoint path)\r\n3) resume_from_checkpoint = (trainer.saved model path)\r\n\r\nEverywhere I'm getting the same message of Can't find a valid checkpoint at <Model saved path>.\r\nAt the same time, I'm able to continue my resume from checkpoint in native pytorch code.\r\n", "> Hi @llohann-speranca Again thanks for your great work on this, I think this seems a rather important fix that might unlock a lot of users, if that's ok for you, I can quickly take over the PR and address the last comment so that we can merge the PR. What do you think ?\r\n\r\nHi @younesbelkada. Sure! I have been very busy and have still to learn how to deal with PRs. Sorry about that.", "@llohann-speranca thanks! \r\n@techthiyanes it seems you are using the API the wrong way. `resume_from_checkpoint` will try to retrieve the latest checkpoint from the output directory of the trainer. Therefore make sure you have correct `checkpoints-{i}` folders inside `f\"{model_name}-finetuned1-{task}\"` in your case and use `resume_from_checkpoint=True`", "> @llohann-speranca thanks! @techthiyanes it seems you are using the API the wrong way. `resume_from_checkpoint` will try to retrieve the latest checkpoint from the output directory of the trainer. Therefore make sure you have correct `checkpoints-{i}` folders inside `f\"{model_name}-finetuned1-{task}\"` in your case and use `resume_from_checkpoint=True`\r\n\r\nThanks a lot on your response.\r\nBy default while passing resume from checkpoint then API automatically consumes the recent checkpoint. But this is something not working as expected for PEFT models than torch models. As mentioned, I have pointed out the correct checkpoint and the same folder resides inside alone.\r\n\r\nIf you don't mind, could you please try executing the any of huggingface example code inserting PEFT with the trainer & resume from checkpoint? Then you might be able to replicate. 
", "@techthiyanes I think it works as expected with this PR, as explained in https://github.com/huggingface/transformers/pull/24274#pullrequestreview-1479614483 I have tried the attached snippet that was not working before the PR as mentioned and this PR properly fixes it by loading the checkpoint. If you want you can try to replicate using a smaller example (for example imdb as attached) and let me know if you still face an issue by opening a new ticket", "> @techthiyanes I think it works as expected with this PR, as explained in [#24274 (review)](https://github.com/huggingface/transformers/pull/24274#pullrequestreview-1479614483) I have tried the attached snippet that was not working before the PR as mentioned and this PR properly fixes it by loading the checkpoint. If you want you can try to replicate using a smaller example (for example imdb as attached) and let me know if you still face an issue by opening a new ticket\r\n\r\nSure..Thanks a lot.. Let me try above snippet for classification models then let you know.", "> @techthiyanes I think it works as expected with this PR, as explained in [#24274 (review)](https://github.com/huggingface/transformers/pull/24274#pullrequestreview-1479614483) I have tried the attached snippet that was not working before the PR as mentioned and this PR properly fixes it by loading the checkpoint. If you want you can try to replicate using a smaller example (for example imdb as attached) and let me know if you still face an issue by opening a new ticket\r\n\r\nStill able to replicate the issue. Raised a separate issue on the same.\r\nhttps://github.com/huggingface/transformers/issues/24354", "Hi, nice job! Does this new feature available if I `pip install` the latest peft/transformers packages? Or should I install from source?", "Thank you so much @llohann-speranca and @younesbelkada for adding this 🤗! @beyondguo, please install from source as this isn't yet part of the release.", "Thanks for iterating! Will we perform inference in the same manner? Specifically, `peft` requires us to load `PeftConfig` via `adapter_config.json`. I saw that this is not saved with `trainer.save_model()`. 
Will we need to add `model.save_pretrained()` to use for inference?", "I am trying to resume in this way:\r\n\r\n```python\r\nfrom peft import PeftModel, PeftConfig\r\nfrom transformers import AutoModelForSeq2SeqLM\r\n\r\nconfig = PeftConfig.from_pretrained(\"huggingface_path_TO_MY_PREVIOUSLY_TRAINED_LORA\")\r\nmodel = AutoModelForSeq2SeqLM.from_pretrained(\"google/flan-t5-large\") # underlying model\r\nmodel.enable_input_require_grads() # to make training possible\r\nmodel = PeftModel.from_pretrained(model, \"huggingface_path_TO_MY_PREVIOUSLY_TRAINED_LORA\")\r\n\r\nmodel.print_trainable_parameters()\r\n# trainable params: 0 || all params: 792,587,264 || trainable%: 0.0\r\n```\r\n\r\nThen the usual code:\r\n```python\r\nfrom transformers import DataCollatorForSeq2Seq\r\n\r\n# we want to ignore tokenizer pad token in the loss\r\nlabel_pad_token_id = -100\r\n# Data collator\r\ndata_collator = DataCollatorForSeq2Seq(\r\n tokenizer,\r\n model=model,\r\n label_pad_token_id=label_pad_token_id,\r\n)\r\n\r\nfrom transformers import Seq2SeqTrainer, Seq2SeqTrainingArguments\r\n\r\noutput_dir=\"./t5-large-r32-lora-JOIN-FIX-RESUME\"\r\n#batch_size = 8\r\n\r\n# Define training args\r\ntraining_args = Seq2SeqTrainingArguments(\r\n output_dir=output_dir,\r\n auto_find_batch_size=True,\r\n save_strategy=\"steps\",\r\n save_steps=500,\r\n gradient_accumulation_steps=4,\r\n learning_rate=1e-3, # higher learning rate\r\n weight_decay=0.01,\r\n num_train_epochs=2, \r\n logging_dir=f\"{output_dir}/logs\",\r\n logging_strategy=\"steps\",\r\n logging_steps=100,\r\n push_to_hub=True)\r\n\r\n# Create Trainer instance\r\ntrainer = Seq2SeqTrainer(\r\n model=model,\r\n args=training_args,\r\n data_collator=data_collator,\r\n train_dataset=TRAINING,\r\n)\r\nmodel.config.use_cache = False # silence the warnings. Please re-enable for inference!\r\n\r\ntrainer.train()\r\n```\r\n\r\n> ***The loss starts where I paused the training***. 
Which is so low so that I can tell that it has started training the old model, but I am not sure whether I am doing it right.\r\n\r\nWill @younesbelkada you please make me sure if whatever I am doing is right?\r\nThank you.\r\n\r\n", "> I am trying to resume in this way:\r\n> \r\n> ```python\r\n> from peft import PeftModel, PeftConfig\r\n> from transformers import AutoModelForSeq2SeqLM\r\n> \r\n> config = PeftConfig.from_pretrained(\"huggingface_path_TO_MY_PREVIOUSLY_TRAINED_LORA\")\r\n> model = AutoModelForSeq2SeqLM.from_pretrained(\"google/flan-t5-large\") # underlying model\r\n> model.enable_input_require_grads() # to make training possible\r\n> model = PeftModel.from_pretrained(model, \"huggingface_path_TO_MY_PREVIOUSLY_TRAINED_LORA\")\r\n> \r\n> model.print_trainable_parameters()\r\n> # trainable params: 0 || all params: 792,587,264 || trainable%: 0.0\r\n> ```\r\n> \r\n> Then the usual code:\r\n> \r\n> ```python\r\n> from transformers import DataCollatorForSeq2Seq\r\n> \r\n> # we want to ignore tokenizer pad token in the loss\r\n> label_pad_token_id = -100\r\n> # Data collator\r\n> data_collator = DataCollatorForSeq2Seq(\r\n> tokenizer,\r\n> model=model,\r\n> label_pad_token_id=label_pad_token_id,\r\n> )\r\n> \r\n> from transformers import Seq2SeqTrainer, Seq2SeqTrainingArguments\r\n> \r\n> output_dir=\"./t5-large-r32-lora-JOIN-FIX-RESUME\"\r\n> #batch_size = 8\r\n> \r\n> # Define training args\r\n> training_args = Seq2SeqTrainingArguments(\r\n> output_dir=output_dir,\r\n> auto_find_batch_size=True,\r\n> save_strategy=\"steps\",\r\n> save_steps=500,\r\n> gradient_accumulation_steps=4,\r\n> learning_rate=1e-3, # higher learning rate\r\n> weight_decay=0.01,\r\n> num_train_epochs=2, \r\n> logging_dir=f\"{output_dir}/logs\",\r\n> logging_strategy=\"steps\",\r\n> logging_steps=100,\r\n> push_to_hub=True)\r\n> \r\n> # Create Trainer instance\r\n> trainer = Seq2SeqTrainer(\r\n> model=model,\r\n> args=training_args,\r\n> data_collator=data_collator,\r\n> train_dataset=TRAINING,\r\n> )\r\n> model.config.use_cache = False # silence the warnings. Please re-enable for inference!\r\n> \r\n> trainer.train()\r\n> ```\r\n> \r\n> > _**The loss starts where I paused the training**_. Which is so low so that I can tell that it has started training the old model, but I am not sure whether I am doing it right.\r\n> \r\n> Will @younesbelkada you please make me sure if whatever I am doing is right? Thank you.\r\n\r\nI've encountered the same issue. Have you resolved it? @AayushSameerShah ", "@XM-Dong Nah... it seems like LoRA needs some \"special script\" :(", "Hi, I'm also wondering how can I get these changes? Are they in a new transformers version or PEFT version?\r\n\r\nmy current versions are:\r\n```\r\ntransformers==4.30.1\r\npeft==0.4.0\r\n```" ]
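To summarize the recipe discussed in this thread, here is a minimal, self-contained sketch of training a PEFT model with `Trainer` so that `resume_from_checkpoint=True` can later pick up the latest `checkpoint-{step}` folder in `output_dir`. The toy dataset, `bert-base-uncased` checkpoint, and LoRA hyperparameters are placeholder choices for illustration, not values taken from the reports above:

```python
from datasets import Dataset
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForSequenceClassification, AutoTokenizer, Trainer, TrainingArguments

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
# Tiny toy dataset so the example runs end to end.
raw = Dataset.from_dict({"text": ["good movie", "bad movie"] * 8, "label": [1, 0] * 8})
ds = raw.map(lambda ex: tokenizer(ex["text"], truncation=True, padding="max_length", max_length=16))

base = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
model = get_peft_model(base, LoraConfig(task_type="SEQ_CLS", r=8, lora_alpha=16, lora_dropout=0.1))

args = TrainingArguments(
    output_dir="peft-out",
    save_strategy="epoch",        # writes peft-out/checkpoint-{step} after each epoch
    num_train_epochs=1,
    per_device_train_batch_size=4,
)
trainer = Trainer(model=model, args=args, train_dataset=ds, tokenizer=tokenizer)

trainer.train()
# In practice you would call this after an interrupted run; with the fix in this PR,
# Trainer reloads the adapter weights from the latest checkpoint folder in output_dir.
trainer.train(resume_from_checkpoint=True)
```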
1,686
1,697
1,687
CONTRIBUTOR
null
# What does this PR do? Fix an error that occurred when Trainer tries to resume a PeftModel from a training checkpoint. The error was caused because PeftModel.save_pretrained saves only adapter-related data while _load_from_checkpoint expects a saved torch model. This PR fixes this issue and allows the adapter checkpoint to be loaded. Resolves: #24252 Fixes #24252 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [X] Was this discussed via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. (#24252) - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @younesbelkada
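For illustration, the mismatch described above can be seen by comparing what a PEFT checkpoint folder typically contains against the full `pytorch_model.bin` that the old loading path expected. The checkpoint path below is hypothetical; the file names are PEFT's default adapter artifacts:

```python
import os

ckpt = "output_dir/checkpoint-500"  # hypothetical checkpoint folder written during PEFT training
for name in ("pytorch_model.bin", "adapter_model.bin", "adapter_config.json"):
    print(name, os.path.exists(os.path.join(ckpt, name)))
# For a PeftModel, save_pretrained typically writes only the adapter files, so a loader
# that insists on pytorch_model.bin reports that it "can't find a valid checkpoint".
```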
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24274/reactions", "total_count": 3, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 2, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24274/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24274", "html_url": "https://github.com/huggingface/transformers/pull/24274", "diff_url": "https://github.com/huggingface/transformers/pull/24274.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24274.patch", "merged_at": 1687262228000 }
https://api.github.com/repos/huggingface/transformers/issues/24273
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24273/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24273/comments
https://api.github.com/repos/huggingface/transformers/issues/24273/events
https://github.com/huggingface/transformers/pull/24273
1,756,760,360
PR_kwDOCUB6oc5S_Jul
24,273
Update to transformers==4.30
{ "login": "dbogunowicz", "id": 97082108, "node_id": "U_kgDOBcla_A", "avatar_url": "https://avatars.githubusercontent.com/u/97082108?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dbogunowicz", "html_url": "https://github.com/dbogunowicz", "followers_url": "https://api.github.com/users/dbogunowicz/followers", "following_url": "https://api.github.com/users/dbogunowicz/following{/other_user}", "gists_url": "https://api.github.com/users/dbogunowicz/gists{/gist_id}", "starred_url": "https://api.github.com/users/dbogunowicz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dbogunowicz/subscriptions", "organizations_url": "https://api.github.com/users/dbogunowicz/orgs", "repos_url": "https://api.github.com/users/dbogunowicz/repos", "events_url": "https://api.github.com/users/dbogunowicz/events{/privacy}", "received_events_url": "https://api.github.com/users/dbogunowicz/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @dbogunowicz, thanks for opening a PR.\r\n\r\nCould you add a description detailing what issue the PR address or feature it adds? " ]
1,686
1,686
1,686
NONE
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24273/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24273/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24273", "html_url": "https://github.com/huggingface/transformers/pull/24273", "diff_url": "https://github.com/huggingface/transformers/pull/24273.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24273.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/24272
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24272/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24272/comments
https://api.github.com/repos/huggingface/transformers/issues/24272/events
https://github.com/huggingface/transformers/issues/24272
1,756,577,492
I_kwDOCUB6oc5osz7U
24,272
Finetuning Whisper with prompts
{ "login": "AvivSham", "id": 43371254, "node_id": "MDQ6VXNlcjQzMzcxMjU0", "avatar_url": "https://avatars.githubusercontent.com/u/43371254?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AvivSham", "html_url": "https://github.com/AvivSham", "followers_url": "https://api.github.com/users/AvivSham/followers", "following_url": "https://api.github.com/users/AvivSham/following{/other_user}", "gists_url": "https://api.github.com/users/AvivSham/gists{/gist_id}", "starred_url": "https://api.github.com/users/AvivSham/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AvivSham/subscriptions", "organizations_url": "https://api.github.com/users/AvivSham/orgs", "repos_url": "https://api.github.com/users/AvivSham/repos", "events_url": "https://api.github.com/users/AvivSham/events{/privacy}", "received_events_url": "https://api.github.com/users/AvivSham/received_events", "type": "User", "site_admin": false }
[ { "id": 2648621985, "node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1", "url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request", "name": "Feature request", "color": "FBCA04", "default": false, "description": "Request for a new feature" } ]
open
false
null
[]
[ "Hi, thanks for raising an issue! \r\n\r\nQuestions about custom training and model performance are best placed in our [forums](https://discuss.huggingface.co/). We try to reserve the github issues for feature requests and bug reports.\r\n\r\nIf you believe the behaviour is due to a bug in the model, then please share all the necessary information so that we can reproduce the issue on our side: running environment; minimal code snippet. And full details about the observed behaviour e.g. example outputs and the expected behaviour.", "Hi @amyeroberts,\r\nThank you for your fast response. I have already opened a thread in [the forum](https://discuss.huggingface.co/t/finetuning-whisper-with-prompts/43053). I agree that this is not a direct bug, but also the current behavior of Whisper does not make any sense (blank transcribes + repetitions).\r\nHow should I proceed from here?\r\n\r\nThanks!", "@AvivSham You should wait to see if anyone replies to your post in the forum. I'd also suggest checking out the discord, as it's active and there's lots of people sharing ideas and helping one another with projects. ", "How can I enter the discord server? Can you please share URL / QRcode?\r\nI tried the [following link](https://discord.com/invite/hugging-face-879548962464493619) but it seems to be invalid. \r\nBTW this issue can be marked as a feature request since (as for now) I did not see a relevant code for fine-tuning Whisper with prompts. ", "Hi @AvivSham - the discord link you shared ([here](https://discord.com/invite/hugging-face-879548962464493619)), is the same one I would use, and works for me. What do you mean by it being 'invalid'? \r\n\r\n", "@amyeroberts Please see the attached image.\r\n\r\n<img width=\"514\" alt=\"image\" src=\"https://github.com/huggingface/transformers/assets/43371254/730ae753-a4c3-4efe-a1d2-7b0010855ce0\">\r\n\r\n", "@AvivSham Oh no :/ I've tried making a new account with the previous link and it worked, so I'm not sure what's going on unfortunately. I'll see on our end if there's any known issues / resolutions. \r\n\r\nIn the meantime, do you already have a discord account or are you able to make one independent of this server invite? \r\n\r\n", "@amyeroberts \r\nHi Amye,\r\nI re-opened this issue since I did not get any support over discord and HF forum. I think that this issue is in high priority for DL practitioners! can you please help with it?", "Hi @AvivSham, \r\n\r\nSo that we can figure out whether this is an issue on our side, could you confirm that you have an active discord account or are able to create one (independent of the HF invite link)? ", "Hi @amyeroberts \r\nI feel like you are totally ignoring my questions :/\r\nSee my lastest message, please.", "Hi @AvivSham, \r\n\r\nI'm certainly not ignoring the questions. Please understand the we're all very busy and trying to address as many issues as possible. As the previous thread was discussing technical difficulties in joining the discord server, I'd understood that this was the ongoing issue, my apologies for misunderstanding. \r\n\r\nWith regards to training whisper with prompts, then same case applies as in my first comment. Unless there's a specific behaviour which you believe to be a bug of the model, this is a question for the forums / discord and not github issues. Not having responses isn't justification for posting in github issues as it just isn't a scalable solution. ", "Sorry for reviving this thread. I was going to create my own issue (but saw this one already existed). 
I do actually think this is a legitimate feature request based off of discussions in a pull request that is related to this issue. The original post is however not worded in the best manner to explain what is requested and to demonstrate the general benefit. \r\n\r\nThe relevant PR (https://github.com/huggingface/transformers/pull/22496) added prompt support for Whisper inference. In the PR a user asked whether similar support could be added for finetuning. @hollance and @sanchit-gandhi replied with ideas of how prompting support during training could be implemented and a suggestion to start a new issue (https://github.com/huggingface/transformers/pull/22496#issuecomment-1557501336, https://github.com/huggingface/transformers/pull/22496#issuecomment-1556934882) for the feature. \r\n\r\nMy alternative wording of this feature request:\r\n\r\n## Feature request\r\n\r\nHuggingface recently added support for prompting Whisper with `model.generate()` (see https://github.com/huggingface/transformers/issues/22395, https://github.com/huggingface/transformers/pull/22496). In the PR, there were discussions (https://github.com/huggingface/transformers/pull/22496#issuecomment-1557501336) of adding similar support for including parts of the previous (text) context when training and finetuning the model. It was suggested a new issue be started for the feature request, though no one ended up creating the issue.\r\n\r\nThe Whisper paper seems to suggest the general pretraining process was: \r\n\r\n* Cut audio file in to 30s chunks. \r\n* Pair the audio segments with the subset of transcripts that fall within that time. \r\n* If the \"final transcript segment\" is only partially included within the 30s audio chunk, the model is trained to only predict the start time token for the final segment. (I'm not sure if this implies that the final part of the transcript is passed as the previous context in the decoder for the next training example. I find the wording in the paper vague here.)\r\n\r\nRelevant parts of the paper:\r\n\r\n> Since our decoder is an audio-conditional language model, we also train it to condition on the history of text of the transcript in the hope that it will learn to use longer-range text context to resolve ambiguous audio. Specifically, with some probability we add the transcript text preceding the current audio segment to the decoder’s context. [...] When a final transcript segment is only partially included in the current 30 second audio chunk, we predict only its start time token for the segment when in timestamp mode, to indicate that the subsequent decoding should be performed on an audio window aligned with that time, otherwise we truncate the audio to not include the segment.\r\n\r\nSupport for prompting in training/finetuning has also been requested and discussed on the HF forums:\r\n\r\nhttps://discuss.huggingface.co/t/adding-prompt-context-to-whisper-with-huggingface-transformers/31070\r\nhttps://discuss.huggingface.co/t/finetuning-whisper-with-prompts/43053\r\n\r\nI believe being able to include previous context in finetuning would be a useful feature. It would also enable users to finetune the model in a manner that is consistent with how it was pretrained (i.e. how the final segment is handled when it is only partially included in the audio). This is something that may have an effect on the robustness of finetuned models when it comes to long form transcription and timestamps. 
\r\n\r\nThe reason OpenAI preprocessed data in this manner during finetuning is likely because it would best mimic the kind of data it would see during inference (i.e. audio being chunked where it regularly cuts off in the middle of sentences and/or words). ", "@Lauler OK, I see. Thanks for taking the time to write up such a clear explanation and to link to all the relevant issues, PR and discussions. \r\n\r\nAs this is a feature request I'll re-open and tag it as such :) \r\n\r\ncc @sanchit-gandhi ", "Hey @AvivSham and @Lauler, really cool to see such excitement around developing Whisper fine-tuning further! Thanks both for the motivations for the feature request. \r\n\r\nIn terms of what we have to do to make the fine-tuning script work with prompted fine-tuning, it's super simple. All we have to do is update the `prepare_dataset` function to encode the prompts, the target text, and then combine them to get the labels:\r\n```python\r\ndef prepare_dataset(batch):\r\n # load and resample audio data from 48 to 16kHz\r\n audio = batch[\"audio\"]\r\n\r\n # compute log-Mel input features from input audio array \r\n batch[\"input_features\"] = feature_extractor(audio[\"array\"], sampling_rate=audio[\"sampling_rate\"]).input_features[0]\r\n\r\n # encode prompts to prompt ids - we assume that the dataset has a column `\"prompt\"` that contains the prompt for each example\r\n prompt_ids = tokenizer.get_prompt_ids(batch[\"prompt\"])\r\n\r\n # encode target text to token ids \r\n token_ids = tokenizer(batch[\"sentence\"]).input_ids\r\n\r\n # combine them to get our labels\r\n batch[\"labels\"] = prompt_ids + token_ids\r\n return batch\r\n```\r\n\r\nYou can try this with a toy example:\r\n```python\r\nfrom transformers import WhisperProcessor\r\n\r\nprocessor = WhisperProcessor.from_pretrained(\"openai/whisper-tiny\")\r\n\r\nprompt_ids = processor.get_prompt_ids(\"Nokia\")\r\ntoken_ids = processor.tokenizer(\" No kea phones are great\").input_ids\r\nlabels = prompt_ids + token_ids\r\n\r\n# let's check how the labels are decoded\r\nprint(processor.decode(labels))\r\n```\r\n**Print Output:**\r\n```\r\n'<|startofprev|> Nokia<|startoftranscript|><|notimestamps|> No kea phones are great<|endoftext|>'\r\n```\r\n\r\n-> we see the prompt `Nokia` nestled between the prompt token id and the BOS token id, and the target text nestled between the BOS and EOS token ids, which is the expected behaviour.\r\n\r\nNow the tricky bit about getting this working more generally is how we get the `prompt` column in our dataset - we can't assume that every dataset is going to have examples with a trio of (audio, target text, prompt), most ASR datasets only have (audio, target text).\r\n\r\nMaybe we could start with the LibriSpeech ASR dataset: since the dataset is taken from recorded samples of audio book narration, each sentence can be prompted with the previous one? i.e. if you have a dataset:\r\n```\r\n(audio_1, text_1)\r\n(audio_2, text_2)\r\n...\r\n(audio_n, text_n)\r\n```\r\nYou could augment it as:\r\n```\r\n(audio_2, text_2, prompt=text_1)\r\n(audio_3, text_3, prompt=text_2)\r\n...\r\n(audio_n, text_n, prompt= text_n-1)\r\n```\r\nSince we know the dataset samples are recorded sequentially? 
Here we just need to check that `text_i` follows on from `text_i-1` by making sure it comes from the same speaker\r\n\r\nI think this would be a good starting point for adapting the fine-tuning script, but I don't think there's a way of generalising it to work with all datasets since we don't always have the prompts available?", "For datasets where audio snippets are sequential (audiobooks) that makes sense! \r\n\r\nA complementary solution that could be more general in nature is to perhaps wait for the PR that adds support for encoding timestamp tokens _as is_ (https://github.com/huggingface/transformers/pull/24476). \r\n\r\nA general preprocessing step involving encoding a separate \"timestamp_encoded\"-column could then perhaps work for both datasets with sequential audio snippets (LibriSpeech audiobooks), and those who already have longer audio samples with more granular timestamp information.\r\n\r\nThen in the case of LibriSpeech (and any dataset with sequential audio snippets) the following preprocessing guide would apply:\r\n\r\n1. Extract the length of each audio sample and create a `duration` column.\r\n2. Encode the decoder input as `\"<|startofprev|>\" + text_n-1 + \"<|startoftranscript|><|timestamps|> <|0.00|>\" + text_n + \"<|duration_n|><|endoftext|>\"` as a separate column. \r\n\r\nIf it would be possible to train with timestamps, then a conceptually similar approach would apply to those who already have datasets with more granular timestamps. Their preprocessing would consist of creating a similar suitable column where the prompt and timestamps are already encoded properly. \r\n\r\nI'm aware that existing audio datasets on the Hub are currently mostly composed of single sentences. However, I think this is increasingly going to change with time. The question for these new datasets then becomes:\r\n\r\n* How does a user best add timestamp information to their dataset that has longer audio snippets with granular timestamps?\r\n\r\nRight now it is not obvious how such information should best be included in a HF dataset. As an example, our organization published an audio dataset of [parliamentary recordings](https://huggingface.co/datasets/KBLab/rixvox). In its original form we have sentence aligned these transcripts. However in the published version that is on the Hub, we concatenated sequential sentences and coresponding audio snippets until they fill up as much as possible of a 30s bucket. \r\n\r\nWe have been discussing the most flexible way of adding the more granular sentence-level timestamps to the dataset, with our top two choices being:\r\n\r\n* Add a nested list column with tuples of `[(start_sentence1, end_sentence1), (..., ...)]` for each audio sample.\r\n* Add an already pre-encoded version of the text for the specific model. The problem here was that i) it's a model specific solution, and ii) timestamp tokens were not included as part of the vocab in HF Whisper. \r\n\r\nI think the first option is probably the best, since it's model agnostic, and it will allow us and any user to remix and re-encode the dataset in whatever way they need. \r\n\r\nA separate question:\r\nWould the prompt ids automatically be masked in the loss calculation in the current Whisper implementation?\r\n\r\n* Edit: \r\nOn second thought I think having a separate `prompt` column would work in the more general use case as well. 
However, I think the point in my post about allowing (optional) pre-encoding of the `text` column with timestamps (and if necessary other special tokens) is what would make the solution more general in nature. ", "Hi @sanchit-gandhi,\r\nThank you for your reply! \r\nFollowing your reply:\r\n> https://github.com/huggingface/transformers/issues/24272#issuecomment-1633834410\r\n\r\n\r\nDo we manually need to mask the `prompt_ids` since we do not want to include those when calculating the loss? Is it dealt with internally (by looking inside the code I did not find such masking)? \r\nWhat is the right approach here?\r\n\r\nThanks in advance.\r\n\r\n ", "> \r\n\r\nHi Aviv, in the paper it says:\r\nDuring training it should “mask out the training loss over the previous context text and train the\r\nmodel to predict all other tokens”.\r\n\r\nI'm not sure how to implement it with HF Trainer. But it is an important feature (I posted on it in the forum half a year ago) and I hope you can test some ideas and see what works.", "@samuelazran Thank you for your comment. I'm looking for an official guide here since it is a bit tricky to integrate the implementation with HF.\r\n@sanchit-gandhi @Lauler Maybe you can help us with it? how should we approach this? Do we manually need to mask the prompt_ids since we do not want to include those when calculating the loss? Is it dealt with internally (by looking inside the code I did not find such masking)?\r\nWhat is the right approach here?", "If you're taking multiple audio samples < 30s and combining them to give your prompt and target text, you probably don't need the timing information within each sample. You can get the length of each sample by measuring the length of the audio array, and dividing it by the sampling rate (assuming there's little to no silence):\r\n\r\n```python\r\naudio_length_s = len(audio[\"array\"]) / audio[\"sampling_rate\"]\r\n```\r\n\r\nTiming information would be useful if you had the opposite situation, where you had very long samples that you wanted to split up into smaller ones. Here, you would split on appropriately chosen timestamps.\r\n\r\n> * How does a user best add timestamp information to their dataset that has longer audio snippets with granular timestamps?\r\n\r\nI'm not sure I fully understand this question - you want to take long audio samples and **add** timestamp information to them? Or you want to format audio samples that have existing timestamp information?\r\n\r\nAlso agree that the first option you've proposed is best! I don't think we can make a very general recommendation here as to the format your data should be in, since this is quite a niche application of fine-tuning and one that is conditional on the form of your original data. But the design you've proposed sounds sensible for your use case!\r\n\r\n> Would the prompt ids automatically be masked in the loss calculation in the current Whisper implementation?\r\n\r\nNo they wouldn't - we would need to update this. I know that @peregilk and co from NbAiLab have done something similar in Flax: https://github.com/NbAiLab/nb-whisper/blob/352bf2d0efb073405c90fb4ef048a5d52b6128b6/run_nb_flax_speech_recognition_seq2seq_streaming_dev.py#L579-L582\r\n\r\nWe would need to do the same for the PyTorch code. Would also be interested in hearing from @peregilk how you constructed your prompts + text pairs! Are we on the right lines by pairing consecutive samples of our dataset together?", "Sure @sanchit-gandhi. 
Our dataset consists of multiple different sources, subtitles, parliament transcripts, audio books and created datasets. In some cases we do have the text directly preceding the current sample. In our dataset, we simply add this as a separate \"pretext\"-column. In a lot of scenarios this information is not available, and here we simply leave that field empty.\r\n\r\nWe have not added timestamps to the pretext (yet). I see your point (with reference to the article) @Lauler regarding not predicting the end timestamp. We have not tried that, but one of our datasets are cut on pauses, and here we have a lot of incomplete sentences (ie not ending with punctation and not starting with capital letter). This seems to be well handled by the model.\r\n\r\nWe ended up with a dataset-format with multiple columns for each audio-clips. One sample could for instance have both text, timestamp_text, pretext, english_translation, nynorsk_transcription etc. For other samples, very few of these are filled out. This means that we for one audio-clip can generate 1-5 training samples. We have modified the data loader to be able to handle this so that we can generate the actual prompt on the fly. I can share this with you @Lauler if you are interested.\r\n\r\n@AvivSham Personally I found the masking a bit tricky. This snippet helped me a lot in understanding what was going on. Maybe you can reuse it: https://github.com/NbAiLab/nb-whisper/blob/352bf2d0efb073405c90fb4ef048a5d52b6128b6/run_nb_flax_speech_recognition_seq2seq_streaming_dev.py#L1692-L1697\r\n \r\n\r\n", "> @samuelazran Thank you for your comment. I'm looking for an official guide here since it is a bit tricky to integrate the implementation with HF. @sanchit-gandhi @Lauler Maybe you can help us with it? how should we approach this? Do we manually need to mask the prompt_ids since we do not want to include those when calculating the loss? Is it dealt with internally (by looking inside the code I did not find such masking)? What is the right approach here?\r\n\r\nHi @AvivSham , were you able to make some progress on training with prompts? if you and others are interested, let's combine forces and work on it until we find someone from Huggingface who can help. \r\n\r\nDoes anyone know who is a relevant person from HF to give us ideas or directions? maybe @patrickvonplaten?\r\n\r\n", "Hi @samuelazran, there is no significant progress from my end. Maybe someone from HF may help.", "Thanks for the comprehensive summary @peregilk! Cool to see that you're still super up for implementing this @samuelazran. I personally won't have time to generalise the fine-tuning script to use the prompted tokens in the training objective, but I'm more than happy to answer any questions / queries if you'd like to have a go yourself. \r\n\r\nIMO the most challenging bit of this integration will be constructing the (prompt, text) pairs -> I can't see a way of making this generalise across all datasets? Given a data sample `(text_i, audio_i)`, how can we know the corresponding prompt for the target text? Most ASR datasets are constructed with independent `(text, audio)` samples, so it's not trivial to find the text prompt for each sample, if it even exists.\r\n\r\nIf you'd like to pursue this, I'd recommend starting with the LibriSpeech dataset (for which I left some details here: https://github.com/huggingface/transformers/issues/24272#issuecomment-1633834410)", "Hi,\r\n\r\nI found this thread really interesting. 
Last month I suggested what it could be a simple starting point to prepare the dataset with prompts in the Huggingface Forum. See https://discuss.huggingface.co/t/finetuning-whisper-with-prompts/43053/3?u=andercorral\r\n \r\n@sanchit-gandhi I think this could make your solution consistent with the API. I've also added prompts with some probability as stated in the original paper:\r\n\r\n```python\r\ndef prepare_dataset(batch):\r\n # load and resample audio data from 48 to 16kHz\r\n audio = batch[\"audio\"]\r\n\r\n # compute log-Mel input features from input audio array \r\n batch[\"input_features\"] = feature_extractor(audio[\"array\"], sampling_rate=audio[\"sampling_rate\"]).input_features[0]\r\n\r\n # encode prompts and target text to prompt ids - we assume that the dataset has a column `\"prompt\"` that contains the prompt for each example\r\n if random.uniform(0,1) > 0.5:\r\n token_ids = tokenizer(batch[\"sentence\"], batch[\"prompt\"]).input_ids\r\n else:\r\n token_ids = tokenizer(batch[\"sentence\"]).input_ids\r\n\r\n batch[\"labels\"] = token_ids\r\n return batch\r\n```\r\n\r\n", "@sanchit-gandhi\r\n\r\n> Thanks for the comprehensive summary @peregilk! Cool to see that you're still super up for implementing this @samuelazran. I personally won't have time to generalise the fine-tuning script to use the prompted tokens in the training objective, but I'm more than happy to answer any questions / queries if you'd like to have a go yourself.\r\n> \r\n> IMO the most challenging bit of this integration will be constructing the (prompt, text) pairs -> I can't see a way of making this generalise across all datasets? Given a data sample `(text_i, audio_i)`, how can we know the corresponding prompt for the target text? Most ASR datasets are constructed with independent `(text, audio)` samples, so it's not trivial to find the text prompt for each sample, if it even exists.\r\n> \r\n> If you'd like to pursue this, I'd recommend starting with the LibriSpeech dataset (for which I left some details here: [#24272 (comment)](https://github.com/huggingface/transformers/issues/24272#issuecomment-1633834410))\r\n\r\nThank you for your reply. However, I don't think that providing the dataset in a certain format is the biggest challenge. I'd be glad to be able to provide the data in any way that works. The most important thing is to be able to have a batch with multiple items some contains prompts and some do not, the question is how to handle it during training.", "Hey @samuelazran - for creating batches with prompted and non-prompted data, you can do the random switching in the `prepare_dataset` function as shown very nicely by @anderleich", "> \r\n\r\n\r\n\r\n> Hey @samuelazran - for creating batches with prompted and non-prompted data, you can do the random switching in the `prepare_dataset` function as shown very nicely by @anderleich\r\n\r\nBut during training, how do you ensure calculating loss only on the tokens that come after the prompt? \r\nWe need to make sure the model will not generate the prompt part and start generating from the transcript labels or at least ignore the loss over the prompt tokens as in the original paper:\r\n\r\n_\"We only mask out the training loss over the previous context text, and train the model to predict all other tokens.\"_\r\nhttps://arxiv.org/pdf/2212.04356.pdf\r\n\r\nThis is the main challenge. I'm looking for insights about implementing it with Huggingface Transformers and especially using Transformers Trainer. 
", "Hey @samuelazran - the masking part it pretty easy. Here, we just set the labels to `-100` (a very large negative number) for the prompt, so that they ignored from the loss. Here's how you would do this in numpy: https://github.com/NbAiLab/nb-whisper/blob/352bf2d0efb073405c90fb4ef048a5d52b6128b6/run_nb_flax_speech_recognition_seq2seq_streaming_dev.py#L579-L582\r\n\r\nYou could port this to torch and it would be quite straightforward this way! I still maintain that getting the dataset in the correct format is the toughest part to generalise. ", "I have a follow-up question related to finetuning whisper in general. Whisper consumes lots of GPU memory (>30GB for `medium` sized model). What is the current way to use DDP or FSDP with Whisper? ([found this related issue](https://github.com/huggingface/transformers/issues/23651)) When we tried these strategies (on `4 v100 GPUs 16GB` each) we witnessed that most of the memory is stored on the first GPU instead of equally balanced between all four.\r\nWe aim to train the model using multiple V100 16GB cards rather than large memory GPUs, which was possible if the memory was equally spread across multiple cards.\r\nThis is an extremely annoying problem due to the shortage of large memory GPUs.\r\nCan you please help?\r\n@connor-henderson @sanchit-gandhi ", "`torch.distributed.launch` should still work (although will be deprecated soon in place of [`torchrun`](https://pytorch.org/docs/stable/elastic/run.html)): https://github.com/huggingface/transformers/tree/main/examples/pytorch/speech-recognition#multi-gpu-whisper-training" ]
1,686
1,701
null
NONE
null
### Feature request Training code implementation for fine-tuning Whisper using prompts. Hi all, I’m trying to fine-tune Whisper by resuming its pre-training task and adding initial prompts as part of the model’s forward pass. I saw this [amazing tutorial](https://huggingface.co/blog/fine-tune-whisper); however, it does not contain a section about using prompts as part of the fine-tuning dataset. ### Motivation We observe that Whisper does not behave as expected when transcribing with prompts: sometimes the output is blank text, and on other occasions the output contains repetitions. We want to address these behaviors by fine-tuning Whisper with prompts. ### Your contribution Open for ideas.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24272/reactions", "total_count": 4, "+1": 4, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24272/timeline
reopened
null
null
https://api.github.com/repos/huggingface/transformers/issues/24271
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24271/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24271/comments
https://api.github.com/repos/huggingface/transformers/issues/24271/events
https://github.com/huggingface/transformers/pull/24271
1,756,494,532
PR_kwDOCUB6oc5S-Q8R
24,271
Fix URL in comment for contrastive loss function
{ "login": "taepd", "id": 49802647, "node_id": "MDQ6VXNlcjQ5ODAyNjQ3", "avatar_url": "https://avatars.githubusercontent.com/u/49802647?v=4", "gravatar_id": "", "url": "https://api.github.com/users/taepd", "html_url": "https://github.com/taepd", "followers_url": "https://api.github.com/users/taepd/followers", "following_url": "https://api.github.com/users/taepd/following{/other_user}", "gists_url": "https://api.github.com/users/taepd/gists{/gist_id}", "starred_url": "https://api.github.com/users/taepd/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/taepd/subscriptions", "organizations_url": "https://api.github.com/users/taepd/orgs", "repos_url": "https://api.github.com/users/taepd/repos", "events_url": "https://api.github.com/users/taepd/events{/privacy}", "received_events_url": "https://api.github.com/users/taepd/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,686
1,686
1,686
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> In the comment for contrastive loss in `src/transformers/models/clip/modeling_clip.py`, the source URL was not working correctly, so I fixed it to the correct address. ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24271/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24271/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24271", "html_url": "https://github.com/huggingface/transformers/pull/24271", "diff_url": "https://github.com/huggingface/transformers/pull/24271.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24271.patch", "merged_at": 1686737312000 }
https://api.github.com/repos/huggingface/transformers/issues/24270
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24270/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24270/comments
https://api.github.com/repos/huggingface/transformers/issues/24270/events
https://github.com/huggingface/transformers/pull/24270
1,756,405,239
PR_kwDOCUB6oc5S99dF
24,270
`Pix2StructImageProcessor` requires `torch>=1.11.0`
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,686
1,686
1,686
COLLABORATOR
null
# What does this PR do? So let's be nice to past CI ❤️ ! The `antialias` argument of `interpolate` is only supported in `torch>=1.11.0`: ``` torch.nn.functional.interpolate(..., antialias=True) ```
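A minimal sketch of the kind of version gate this PR describes. The `resize_bilinear` helper and its arguments are hypothetical and are not the actual `Pix2StructImageProcessor` code; only the `antialias`/torch-version constraint comes from the PR.

```python
# Hypothetical sketch: only pass `antialias` to torch.nn.functional.interpolate
# when the installed torch supports it (the keyword exists from torch 1.11.0).
import torch
from packaging import version


def resize_bilinear(pixel_values: torch.Tensor, size: tuple) -> torch.Tensor:
    kwargs = {}
    if version.parse(torch.__version__.split("+")[0]) >= version.parse("1.11.0"):
        kwargs["antialias"] = True  # older torch raises TypeError on this kwarg
    return torch.nn.functional.interpolate(
        pixel_values, size=size, mode="bilinear", align_corners=False, **kwargs
    )
```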
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24270/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24270/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24270", "html_url": "https://github.com/huggingface/transformers/pull/24270", "diff_url": "https://github.com/huggingface/transformers/pull/24270.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24270.patch", "merged_at": 1686755141000 }
https://api.github.com/repos/huggingface/transformers/issues/24269
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24269/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24269/comments
https://api.github.com/repos/huggingface/transformers/issues/24269/events
https://github.com/huggingface/transformers/issues/24269
1,756,390,295
I_kwDOCUB6oc5osGOX
24,269
Flax LMHeadModel for common models like Bert and Albert
{ "login": "gianlucadetommaso", "id": 32386694, "node_id": "MDQ6VXNlcjMyMzg2Njk0", "avatar_url": "https://avatars.githubusercontent.com/u/32386694?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gianlucadetommaso", "html_url": "https://github.com/gianlucadetommaso", "followers_url": "https://api.github.com/users/gianlucadetommaso/followers", "following_url": "https://api.github.com/users/gianlucadetommaso/following{/other_user}", "gists_url": "https://api.github.com/users/gianlucadetommaso/gists{/gist_id}", "starred_url": "https://api.github.com/users/gianlucadetommaso/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gianlucadetommaso/subscriptions", "organizations_url": "https://api.github.com/users/gianlucadetommaso/orgs", "repos_url": "https://api.github.com/users/gianlucadetommaso/repos", "events_url": "https://api.github.com/users/gianlucadetommaso/events{/privacy}", "received_events_url": "https://api.github.com/users/gianlucadetommaso/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Alright, I found out that LMHeadModel is an old naming choice that was kept not to break compatibility. In Flax, I should be able to do what I want with `FlaxBertForCausalLM`." ]
1,686
1,686
1,686
NONE
null
### Feature request A LM Head Model for common models such as Bert and Albert is available in both PyTorch and TensorFlow, but it appears to be missing in Flax. ### Motivation We are developing a library in JAX and Flax for uncertainty quantification, and we rely on Hugging Face transformers written in Flax. ### Your contribution I would love to contribute. However, unfortunately I might not have much time to submit a PR myself.
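As the resolution in the comments notes, the LM-head functionality already exists in Flax under the `...ForCausalLM` naming. A small usage sketch (the checkpoint and prompt are arbitrary examples, not from this issue):

```python
# Sketch: FlaxBertForCausalLM plays the role of the PyTorch/TF "LMHeadModel"
# classes for BERT in Flax.
from transformers import AutoTokenizer, FlaxBertForCausalLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = FlaxBertForCausalLM.from_pretrained("bert-base-uncased")

inputs = tokenizer("Paris is the capital of", return_tensors="np")
logits = model(**inputs).logits  # shape: (batch, seq_len, vocab_size)
```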
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24269/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24269/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24268
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24268/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24268/comments
https://api.github.com/repos/huggingface/transformers/issues/24268/events
https://github.com/huggingface/transformers/issues/24268
1,756,327,355
I_kwDOCUB6oc5or227
24,268
pytest jax,jaxlib,flax versions incompatibility
{ "login": "darxradi3nt", "id": 136311479, "node_id": "U_kgDOCB_ytw", "avatar_url": "https://avatars.githubusercontent.com/u/136311479?v=4", "gravatar_id": "", "url": "https://api.github.com/users/darxradi3nt", "html_url": "https://github.com/darxradi3nt", "followers_url": "https://api.github.com/users/darxradi3nt/followers", "following_url": "https://api.github.com/users/darxradi3nt/following{/other_user}", "gists_url": "https://api.github.com/users/darxradi3nt/gists{/gist_id}", "starred_url": "https://api.github.com/users/darxradi3nt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/darxradi3nt/subscriptions", "organizations_url": "https://api.github.com/users/darxradi3nt/orgs", "repos_url": "https://api.github.com/users/darxradi3nt/repos", "events_url": "https://api.github.com/users/darxradi3nt/events{/privacy}", "received_events_url": "https://api.github.com/users/darxradi3nt/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Could you try:\r\n```\r\npip install -e \".[flax]\"\r\n```\r\nTo get the correct version of all three packages?", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "More detailed instructions for installing Flax & Transformers can be found here: https://github.com/huggingface/transformers/tree/main/examples/research_projects/jax-projects#how-to-install-relevant-libraries\r\n\r\nLeaving this one closed for now. Feel free to open if the above guide doesn't solve your issue and I can take another look" ]
1,686
1,690
1,690
NONE
null
### System Info When running pytest (```pytest --collect-only -q``` is enough), it fails due to ``` AttributeError: module 'jax.tree_util' has no attribute 'register_pytree_with_keys_class' ``` From setup.py deps: ``` "jax>=0.2.8,!=0.3.2,<=0.3.6", "jaxlib>=0.1.65,<=0.3.6",** ``` ## Fix: pip install jax==0.3.25 jaxlib==0.3.25 flax==0.6.2 ### Who can help? @sanchit-gandhi ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction pytest --collect-only ./tests/models/<some_model> ### Expected behavior "tests collected" without any error
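If it helps to confirm which versions actually ended up installed, a trivial check (nothing transformers-specific, just the packages' own version attributes) can be run before collecting tests:

```python
# Print the installed jax / jaxlib / flax versions before running pytest,
# to verify they match a known-compatible set (e.g. the pinned versions above).
import jax
import jaxlib
import flax

print("jax:", jax.__version__)
print("jaxlib:", jaxlib.__version__)
print("flax:", flax.__version__)
```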
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24268/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24268/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24267
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24267/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24267/comments
https://api.github.com/repos/huggingface/transformers/issues/24267/events
https://github.com/huggingface/transformers/pull/24267
1,756,303,363
PR_kwDOCUB6oc5S9nd6
24,267
Skip some `TQAPipelineTests` tests in past CI
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,686
1,686
1,686
COLLABORATOR
null
# What does this PR do? A continuation of #24251. `TapasModel` is used in pipeline tests (`TQAPipelineTests`) and we need torch >= 1.12 there too. (didn't check all failures in one go before opening PRs 😅 )
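For illustration only, here is one generic way to express such a version gate with plain pytest. The transformers test suite uses its own skip helpers, so this is an assumption-laden sketch rather than the actual change in the PR:

```python
# Hypothetical sketch: skip TAPAS-based pipeline tests when torch is too old.
import pytest
import torch
from packaging import version

requires_torch_1_12 = pytest.mark.skipif(
    version.parse(torch.__version__.split("+")[0]) < version.parse("1.12"),
    reason="TapasModel requires torch>=1.12",
)


@requires_torch_1_12
def test_table_question_answering_small_model():
    ...  # test body elided; only the gating mechanism is illustrated
```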
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24267/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24267/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24267", "html_url": "https://github.com/huggingface/transformers/pull/24267", "diff_url": "https://github.com/huggingface/transformers/pull/24267.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24267.patch", "merged_at": 1686745524000 }
https://api.github.com/repos/huggingface/transformers/issues/24266
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24266/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24266/comments
https://api.github.com/repos/huggingface/transformers/issues/24266/events
https://github.com/huggingface/transformers/pull/24266
1,756,150,328
PR_kwDOCUB6oc5S9GaS
24,266
Fix bug in slow tokenizer conversion, make it a lot faster
{ "login": "stephantul", "id": 8882233, "node_id": "MDQ6VXNlcjg4ODIyMzM=", "avatar_url": "https://avatars.githubusercontent.com/u/8882233?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stephantul", "html_url": "https://github.com/stephantul", "followers_url": "https://api.github.com/users/stephantul/followers", "following_url": "https://api.github.com/users/stephantul/following{/other_user}", "gists_url": "https://api.github.com/users/stephantul/gists{/gist_id}", "starred_url": "https://api.github.com/users/stephantul/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stephantul/subscriptions", "organizations_url": "https://api.github.com/users/stephantul/orgs", "repos_url": "https://api.github.com/users/stephantul/repos", "events_url": "https://api.github.com/users/stephantul/events{/privacy}", "received_events_url": "https://api.github.com/users/stephantul/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Speed info: the new implementation takes 70 ms, the old one took 123 seconds for the `openlm-research/open_llama_7b` tokenizer mentioned in the issue.", "_The documentation is not available anymore as the PR was closed or merged._", "> Really nice fix and improvement - thanks for working on this ❤️\r\n> \r\n> Logic all looks good to me. There's a test that's failing, but it's decorated with `@is_flaky` so shouldn't be preventing CI being green here. @ydshieh any insights into what might be happening?\r\n\r\n@amyeroberts \r\n\r\n`is_flacky()` won't keep the test green 100%. It just runs the test a few times (default `5`) 😿 . The failing is still expected but less frequently.", "Sorry for the weird error. I forgot to re-run tests after the second commit" ]
1,686
1,686
1,686
CONTRIBUTOR
null
# What does this PR do? The slow tokenizer conversion currently has a bug where merges with a score of 0 do not get used due to an erroneous check. The check simply tested truthiness, but was actually looking for a `None`. During fixing, I noticed that the code was also slow, so I made it a lot faster. Fixes #24233 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @ArthurZucker
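The bug class described above (a merge score of 0 being dropped by a truthiness check) can be illustrated with a tiny standalone example. This is not the conversion code itself, just the pattern:

```python
# Illustrative only: a merge score of 0.0 is a valid value but is falsy,
# so `if not score:` wrongly treats it like a missing merge.
merges = {("a", "b"): 0.0, ("b", "c"): 1.5}


def get_score_buggy(pair):
    score = merges.get(pair)
    if not score:        # drops legitimate 0.0 scores as well as missing pairs
        return None
    return score


def get_score_fixed(pair):
    score = merges.get(pair)
    if score is None:    # only rejects pairs that are genuinely absent
        return None
    return score


print(get_score_buggy(("a", "b")))  # None -> merge silently discarded
print(get_score_fixed(("a", "b")))  # 0.0  -> merge kept
```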
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24266/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24266/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24266", "html_url": "https://github.com/huggingface/transformers/pull/24266", "diff_url": "https://github.com/huggingface/transformers/pull/24266.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24266.patch", "merged_at": 1686818517000 }
https://api.github.com/repos/huggingface/transformers/issues/24265
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24265/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24265/comments
https://api.github.com/repos/huggingface/transformers/issues/24265/events
https://github.com/huggingface/transformers/issues/24265
1,756,026,869
I_kwDOCUB6oc5oqtf1
24,265
`BloomForSequenceClassification` output is sensitive to `padding_side` and `max_length`
{ "login": "linhdvu14", "id": 13968867, "node_id": "MDQ6VXNlcjEzOTY4ODY3", "avatar_url": "https://avatars.githubusercontent.com/u/13968867?v=4", "gravatar_id": "", "url": "https://api.github.com/users/linhdvu14", "html_url": "https://github.com/linhdvu14", "followers_url": "https://api.github.com/users/linhdvu14/followers", "following_url": "https://api.github.com/users/linhdvu14/following{/other_user}", "gists_url": "https://api.github.com/users/linhdvu14/gists{/gist_id}", "starred_url": "https://api.github.com/users/linhdvu14/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/linhdvu14/subscriptions", "organizations_url": "https://api.github.com/users/linhdvu14/orgs", "repos_url": "https://api.github.com/users/linhdvu14/repos", "events_url": "https://api.github.com/users/linhdvu14/events{/privacy}", "received_events_url": "https://api.github.com/users/linhdvu14/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "(bump)", "Hey! Thanks for opening this issue! \r\nSeems to rather be related to [this](https://github.com/huggingface/transformers/blob/v4.30.1/src/transformers/models/bloom/modeling_bloom.py#L1072) line, where we define the sequence length tensor. \r\nMost of our models that compute partial pooled logits use this. Can you try something like \r\n```python \r\n if input_ids is not None:\r\n sequence_lengths = (torch.eq(input_ids, self.config.pad_token_id).long().argmax(-1) - 1).to(logits.device)\r\n```\r\nI'll open a PR to fix it! ", "Thanks @ArthurZucker, the fix works great.\r\n\r\nSeems the PR misses a few models: biogpt, bloom, falcon, mpt.", "There was a follow up PR: #25085, might have forgotten other models! " ]
1,686
1,690
1,690
NONE
null
### System Info - `transformers` version: 4.30.0.dev0 - Platform: Linux-5.15.0-18-shopee-generic-x86_64-with-glibc2.31 - Python version: 3.10.8 - Huggingface_hub version: 0.14.1 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 2.0.1+cu118 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no ### Who can help? text models: @ArthurZucker and @younesbelkada ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction I found that `BloomForSequenceClassification` (possibly also other causal models) produces non-deterministic outputs based on `max_length` when tokenizer `padding_side = "left"`. It might be caused by this line: https://github.com/huggingface/transformers/blob/v4.30.1/src/transformers/models/bloom/modeling_bloom.py#L1080 which seems to assume right padding. If this diagnostic is correct, imho it's quite unintuitive and error-prone, as: 1) bloom's default `padding_side` is `left`, and 2) many tutorials (e.g. [peft P-tuning for sequence classification](https://huggingface.co/docs/peft/main/en/task_guides/ptuning-seq-classification#preprocess-dataset)) recommend setting `padding_side = "left"` for causal models. Could you provide some guidance? What's the correct way to use causal models for sequence classification? Sample to reproduce: ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification, set_seed set_seed(123) text = "Paris, France's capital, is a major European city and a global center for art, fashion, gastronomy and culture." def f(text, tokenizer, model): emb = tokenizer(text, return_tensors='pt') logits = model(**emb).logits.detach().numpy() print(f'no padding: {logits=}') for max_length in [50, 100, 200]: emb = tokenizer(text, padding='max_length', max_length=max_length, return_tensors='pt') logits = model(**emb).logits.detach().numpy() print(f'pad to {max_length=}: {logits=}') # non-deterministic def clm_left(): pretrain = 'bigscience/bloomz-560m' tokenizer = AutoTokenizer.from_pretrained(pretrain) model = AutoModelForSequenceClassification.from_pretrained(pretrain) f(text, tokenizer, model) # >>> no padding: logits=array([[15.1557665, 31.423962 ]], dtype=float32) # >>> pad to max_length=50: logits=array([[ 8.255632, 23.838833]], dtype=float32) # >>> pad to max_length=100: logits=array([[ 1.263773, 12.405185]], dtype=float32) # >>> pad to max_length=200: logits=array([[0.79204845, 8.847221 ]], dtype=float32) # ok def clm_right(): pretrain = 'bigscience/bloomz-560m' tokenizer = AutoTokenizer.from_pretrained(pretrain) tokenizer.padding_side = 'right' model = AutoModelForSequenceClassification.from_pretrained(pretrain) f(text, tokenizer, model) # >>> no padding: logits=array([[15.1557665, 31.423962 ]], dtype=float32) # >>> pad to max_length=50: logits=array([[15.1557665, 31.423962 ]], dtype=float32) # >>> pad to max_length=100: logits=array([[15.155769, 31.42395 ]], dtype=float32) # >>> pad to max_length=200: logits=array([[15.155751, 31.423967]], dtype=float32) if __name__ == '__main__': clm_left() ``` ### Expected behavior Model should produce the same outputs regardless of padding length
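A padding-side-agnostic way to pick the pooled position is to derive it from the attention mask instead of counting non-pad tokens. The helper below is a hypothetical sketch, not the fix that was merged:

```python
# Hypothetical sketch: find the index of the last real (non-pad) token per row
# from the attention mask, which works for both left and right padding.
import torch


def pool_last_token(logits: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    # logits: (batch, seq_len, num_labels); attention_mask: (batch, seq_len)
    # with 1 for real tokens and 0 for padding. cumsum reaches its maximum at
    # the last real token; argmax returns the first index of that maximum,
    # i.e. exactly that position, regardless of padding side.
    last_positions = attention_mask.cumsum(dim=-1).argmax(dim=-1)      # (batch,)
    batch_idx = torch.arange(logits.shape[0], device=logits.device)
    return logits[batch_idx, last_positions]                           # (batch, num_labels)
```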
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24265/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24265/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24264
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24264/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24264/comments
https://api.github.com/repos/huggingface/transformers/issues/24264/events
https://github.com/huggingface/transformers/issues/24264
1,755,881,117
I_kwDOCUB6oc5oqJ6d
24,264
MeZo Forward Pass Implementation
{ "login": "thistleknot", "id": 5154106, "node_id": "MDQ6VXNlcjUxNTQxMDY=", "avatar_url": "https://avatars.githubusercontent.com/u/5154106?v=4", "gravatar_id": "", "url": "https://api.github.com/users/thistleknot", "html_url": "https://github.com/thistleknot", "followers_url": "https://api.github.com/users/thistleknot/followers", "following_url": "https://api.github.com/users/thistleknot/following{/other_user}", "gists_url": "https://api.github.com/users/thistleknot/gists{/gist_id}", "starred_url": "https://api.github.com/users/thistleknot/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thistleknot/subscriptions", "organizations_url": "https://api.github.com/users/thistleknot/orgs", "repos_url": "https://api.github.com/users/thistleknot/repos", "events_url": "https://api.github.com/users/thistleknot/events{/privacy}", "received_events_url": "https://api.github.com/users/thistleknot/received_events", "type": "User", "site_admin": false }
[ { "id": 2648621985, "node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1", "url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request", "name": "Feature request", "color": "FBCA04", "default": false, "description": "Request for a new feature" } ]
open
false
null
[]
[ "cc @sgugger and @pacman100 who know more about `Trainer` and integrations ", "Should this be integrated with PEFT instead? https://github.com/huggingface/peft", "Anw the motivation is not faster training; in fact it ought to be slower as far as I understand. Rather, it is lower memory requirement.", "You are correct. I misread/transcribed that. I read x12 memory saved as\r\nx12 more context available which leads to faster inference.\r\n\r\nOn Sun, Jun 18, 2023 at 4:44 AM jon-chuang ***@***.***> wrote:\r\n\r\n> Anw the motivation is not faster training; in fact it ought to be slower\r\n> as far as I understand. Rather, it is lower memory requirement.\r\n>\r\n> —\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/huggingface/transformers/issues/24264#issuecomment-1596114214>,\r\n> or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/ABHKKOSM7S3TL3DWEYH67UTXL3S3JANCNFSM6AAAAAAZFUOVYQ>\r\n> .\r\n> You are receiving this because you authored the thread.Message ID:\r\n> ***@***.***>\r\n>\r\n", "Created an issue in peft. Wasn't aware hf managed both." ]
1,686
1,687
null
NONE
null
### Feature request https://github.com/princeton-nlp/MeZO/blob/main/large_models/trainer.py ### Motivation Faster training ### Your contribution Just a user atm.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24264/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24264/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/24263
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24263/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24263/comments
https://api.github.com/repos/huggingface/transformers/issues/24263/events
https://github.com/huggingface/transformers/issues/24263
1,755,878,127
I_kwDOCUB6oc5oqJLv
24,263
It is time to update the transformers dependencies in the README.
{ "login": "luoling1993", "id": 16378228, "node_id": "MDQ6VXNlcjE2Mzc4MjI4", "avatar_url": "https://avatars.githubusercontent.com/u/16378228?v=4", "gravatar_id": "", "url": "https://api.github.com/users/luoling1993", "html_url": "https://github.com/luoling1993", "followers_url": "https://api.github.com/users/luoling1993/followers", "following_url": "https://api.github.com/users/luoling1993/following{/other_user}", "gists_url": "https://api.github.com/users/luoling1993/gists{/gist_id}", "starred_url": "https://api.github.com/users/luoling1993/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/luoling1993/subscriptions", "organizations_url": "https://api.github.com/users/luoling1993/orgs", "repos_url": "https://api.github.com/users/luoling1993/repos", "events_url": "https://api.github.com/users/luoling1993/events{/privacy}", "received_events_url": "https://api.github.com/users/luoling1993/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@luoling1993 Indeed! Would you like to open a PR to update this, so that you get the contribution? ", "take", "Hi @amyeroberts ,\r\nAs I can see this has not been fixed yet, I would love to raise a pull request to resolve this. May I know the latest versions it has been tested on? My apologies as I am new to this.", "@sqali - great! Happy to hear you want to make this contribution. \r\n\r\nThe supported versions can be found in [setup.py](https://github.com/huggingface/transformers/blob/main/setup.py).\r\n", "Hi @amyeroberts,\r\n\r\nI have made the following changes as per the setup.py file. However, I couldn't find PyTorch specifically mentioned, so I used Torch instead. Please review and kindly correct me if there are any mistakes. I have provided two formats below, and I would greatly appreciate your confirmation before I proceed with raising the pull request.\r\n\r\n1.) \"This repository has been tested with Python 3.7.0+, Flax >= 0.4.1 & <= 0.6.9, Torch >= 1.9 & != 1.12.0, and TensorFlow >= 2.4 & < 2.13.\"\r\n\r\n2.) This repository is tested on the following versions:\r\n\r\n- Python: 3.7.0+\r\n- Flax: >= 0.4.1 & <= 0.6.9\r\n- Torch: >= 1.9 & != 1.12.0\r\n- TensorFlow: >= 2.4 & < 2.13\r\n\r\nI kindly request your guidance and feedback regarding these changes. If you have any further suggestions or modifications, please let me know.\r\n\r\nThank you for your assistance.", "@sqali Thanks for pulling this info. For the update, it's best to follow the current pattern in the docs: \r\n\r\n`This repository is tested on Python 3.7+, Flax 0.4.1+, PyTorch 1.9+ and TensorFlow 2.4+.`\r\n\r\nLet's open a PR, and we can discuss any additional changes before merging there. ", "Hi @amyeroberts ,\r\nThanks for the confirmation. Was a little skepitcal about the format. I will raise the PR now. \r\nThanks for the assistance.", "Hi @amyeroberts ,\r\n\r\nI have raised the pull request and it has been approved by Sylvain Gugger. It has passed all the required checks and is ready to be merged. I would like to thank you for giving me the opportunity to raise the PR and make this contribution.\r\n\r\nThanks", "Thanks for fixing and congrats on getting your first contribution merged in! " ]
1,686
1,687
1,687
NONE
null
In the README docs, it says `This repository is tested on Python 3.6+, Flax 0.3.2+, PyTorch 1.3.1+ and TensorFlow 2.3+.`
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24263/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24263/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24262
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24262/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24262/comments
https://api.github.com/repos/huggingface/transformers/issues/24262/events
https://github.com/huggingface/transformers/pull/24262
1,755,854,252
PR_kwDOCUB6oc5S8Ggm
24,262
Fixing RotaryEmbedding.forward to return float16 values in float16 precision mode.
{ "login": "kikutakou", "id": 3138146, "node_id": "MDQ6VXNlcjMxMzgxNDY=", "avatar_url": "https://avatars.githubusercontent.com/u/3138146?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kikutakou", "html_url": "https://github.com/kikutakou", "followers_url": "https://api.github.com/users/kikutakou/followers", "following_url": "https://api.github.com/users/kikutakou/following{/other_user}", "gists_url": "https://api.github.com/users/kikutakou/gists{/gist_id}", "starred_url": "https://api.github.com/users/kikutakou/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kikutakou/subscriptions", "organizations_url": "https://api.github.com/users/kikutakou/orgs", "repos_url": "https://api.github.com/users/kikutakou/repos", "events_url": "https://api.github.com/users/kikutakou/events{/privacy}", "received_events_url": "https://api.github.com/users/kikutakou/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I will investigate whether or not this is the source of instabilities in Llama2! If so, will adresse it", "No time to deep dive into this at the moment! If someone wants to check this feel free to do so! 😉 ", "@ArthurZucker\r\n\r\nThanks for the comment.\r\n\r\n> The initialisation with torch.float16 as an argument of from_pretrained is not really doing it's job.\r\n\r\nI've investigated and changed the patch to fix this issue.\r\nCould you have a look at this patch?\r\n\r\n`from_pretrained` changes torch default_dtype to the specified dtype, then initialize all weights.\r\n[`GPTNeoXRotaryEmbedding.__init__()` calls `float()`](https://github.com/huggingface/transformers/blob/57943630e24651e6d954b912e7fcdb2b4c719cc4/src/transformers/models/gpt_neox/modeling_gpt_neox.py#L300C1-L301C1) which always returns float32 even when default dtype is float16.\r\nThis was the reason.\r\n", "This was actually fixed by #25830 ! ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,686
1,700
1,700
NONE
null
# What does this PR do? RotaryEmbedding.forward() returns values with float32 precision even in float16 precision mode. This affects the subsequent calculation and increases GPU memory usage. This PR fixes that problem. Fixes #24261 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests?
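A minimal sketch of the idea behind the patch (cast the cached cos/sin values to the dtype of the tensor they are applied to). This is illustrative only and simplifies the real `GPTNeoXRotaryEmbedding`; it is not the merged code:

```python
# Illustrative sketch: return rotary cos/sin caches in the caller's dtype so a
# float16 model does not silently promote the downstream math to float32.
import torch


class RotaryEmbeddingSketch(torch.nn.Module):
    def __init__(self, dim: int, max_positions: int = 2048, base: int = 10000):
        super().__init__()
        inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2).float() / dim))
        t = torch.arange(max_positions).float()
        freqs = torch.einsum("i,j->ij", t, inv_freq)
        emb = torch.cat((freqs, freqs), dim=-1)
        self.register_buffer("cos_cached", emb.cos(), persistent=False)
        self.register_buffer("sin_cached", emb.sin(), persistent=False)

    def forward(self, x: torch.Tensor, seq_len: int):
        # The cast to x.dtype is the point of the sketch: without it the
        # returned tensors stay float32 even when x is float16.
        cos = self.cos_cached[:seq_len].to(dtype=x.dtype, device=x.device)
        sin = self.sin_cached[:seq_len].to(dtype=x.dtype, device=x.device)
        return cos, sin
```

Note that, as discussed in the comments, attention scores are often kept in full precision for numerical stability, so the dtype of the rotary caches and the dtype used for the softmax are separate concerns.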
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24262/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24262/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24262", "html_url": "https://github.com/huggingface/transformers/pull/24262", "diff_url": "https://github.com/huggingface/transformers/pull/24262.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24262.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/24261
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24261/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24261/comments
https://api.github.com/repos/huggingface/transformers/issues/24261/events
https://github.com/huggingface/transformers/issues/24261
1,755,842,599
I_kwDOCUB6oc5oqAgn
24,261
GPTNeoXAttention takes extra GPU memory footprint in torch.float16 precision mode.
{ "login": "kikutakou", "id": 3138146, "node_id": "MDQ6VXNlcjMxMzgxNDY=", "avatar_url": "https://avatars.githubusercontent.com/u/3138146?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kikutakou", "html_url": "https://github.com/kikutakou", "followers_url": "https://api.github.com/users/kikutakou/followers", "following_url": "https://api.github.com/users/kikutakou/following{/other_user}", "gists_url": "https://api.github.com/users/kikutakou/gists{/gist_id}", "starred_url": "https://api.github.com/users/kikutakou/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kikutakou/subscriptions", "organizations_url": "https://api.github.com/users/kikutakou/orgs", "repos_url": "https://api.github.com/users/kikutakou/repos", "events_url": "https://api.github.com/users/kikutakou/events{/privacy}", "received_events_url": "https://api.github.com/users/kikutakou/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "PR is prepared as #24262", "cc @younesbelkada @ArthurZucker ", "@younesbelkada @ArthurZucker\r\nHi. This is just a friendly reminder. ", "Hi @kikutakou \r\nFor fp16 models it is important to calculate the attention scores in full precision, mainly for numerical stability reasons. Check out for instance: https://github.com/huggingface/transformers/issues/17433 or the thread in (that includes authors from OPT models) https://github.com/huggingface/transformers/pull/17437 to start with. So the computation inside attention module to calculate `attn_weights` should always stay in full precision.\r\nRegarding the positional embeddings, looking at the official implementation, it seems that indeed the positional embeddings are returned in half-precision: https://github.com/EleutherAI/gpt-neox/blob/main/megatron/model/positional_embeddings.py#L48 . Maybe @StellaAthena can help us confirm if the rotary embeddings should return fp16 values in half-precision modes", "For rope, there was an attempt to fix this here: #23837, as it seems that in the original code they are re-computed each forward, with the correct dtype. It's very detailed! ", "> Hi @kikutakou\r\n> For fp16 models it is important to calculate the attention scores in full precision, mainly for numerical stability reasons. Check out for instance: #17433 or the thread in (that includes authors from OPT models) #17437 to start with. So the computation inside attention module to calculate `attn_weights` should always stay in full precision.\r\n> Regarding the positional embeddings, looking at the official implementation, it seems that indeed the positional embeddings are returned in half-precision: https://github.com/EleutherAI/gpt-neox/blob/main/megatron/model/positional_embeddings.py#L48 . Maybe @StellaAthena can help us confirm if the rotary embeddings should return fp16 values in half-precision modes\r\n\r\nI have no reason to think that you can’t compute rotary embed signs in half-precision.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,686
1,692
1,692
NONE
null
### System Info - `transformers` version: 4.30.1 - Platform: Linux-5.15.0-1034-gcp-x86_64-with-glibc2.2.5 - Python version: 3.8.10 - Huggingface_hub version: 0.14.1 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 2.0.1+cu117 (True) - Tensorflow version (GPU?): 2.11.1 (True) - Flax version (CPU?/GPU?/TPU?): 0.5.3 (cpu) - Jax version: 0.3.6 - JaxLib version: 0.3.5 - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No @ArthurZucker and @younesbelkada Hi. I'm using a model, `GPTNeoXForCausalLM` (defined in `src/transformers/models/gpt_neox/modeling_gpt_neox.py`) with torch.float16 precision by calling `.from_pretrained(torch_dtype=torch.float16)`. In this mode, the model is expected to calculate in float16 precision to save GPU memory usage. However, [some of variables](https://github.com/huggingface/transformers/blob/b89fcccd44508110fd11579a554c1876bc10c0ad/src/transformers/models/gpt_neox/modeling_gpt_neox.py#L265) in this model remain float32 and don't turn to float16, and they affects the subsequent calculation. Eventually, the weight attention, which can be dominant memory consumer, is calculated in float32. GPU memory won't be saved as we expected. The following is the problem detail: 1. setup model [`GPTNeoXForCausalLM`](https://github.com/huggingface/transformers/blob/b89fcccd44508110fd11579a554c1876bc10c0ad/src/transformers/models/gpt_neox/modeling_gpt_neox.py#L598) with `torch_dtype=torch.float16` 2. [`self.cos_cached`](https://github.com/huggingface/transformers/blob/b89fcccd44508110fd11579a554c1876bc10c0ad/src/transformers/models/gpt_neox/modeling_gpt_neox.py#L264) and [`self.sin_cached`](https://github.com/huggingface/transformers/blob/b89fcccd44508110fd11579a554c1876bc10c0ad/src/transformers/models/gpt_neox/modeling_gpt_neox.py#LL265C14-L265C14) in `RotaryEmbedding` class held by `GPTNeoXAttention` are calcurated as float32 in __init__(). 3. `GPTNeoXAttention.forward()` calls `RotaryEmbedding.forward()`. 4. `RotaryEmbedding.forward()` prepare the return values in float32. 5. `GPTNeoXAttention.forward()` receives the return values in float32. 6. Hereafter, all variables including `attn_weights` are calculated in float32. 7. `attn_weights = attn_weights.to(value.dtype)` is called and `attn_weights` is returned to float16. Because of step 7, the model forward() returns the float16, but it consumes float32 GPU footprint internally. ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction 1. Checkout to [ko_gptneox_fp16_debug branch in https://github.com/kikutakou/transformers](https://github.com/kikutakou/transformers/tree/ko_gptneox_fp16_debug) (this branch only has additional debug print code on origin/main) 2. setup model by GPTNeoXForCausalLM.from_pretrained with torch_dtype=torch.float16 3. model.forward() Here is a sample code. 
``` import torch from transformers import GPTNeoXForCausalLM, GPTNeoXTokenizerFast torch.manual_seed(0) MODEL_NAME = 'cyberagent/gpt-neox-1b-japanese' # load text input_text = 'this is test' # tokenize text tokenizer = GPTNeoXTokenizerFast.from_pretrained(MODEL_NAME, use_auth_token=True) t = tokenizer(input_text, return_tensors='pt', truncation=True, padding='longest', add_special_tokens=False) input_ids = t['input_ids'].cuda() attention_mask = t['attention_mask'].cuda() input_len = len(input_ids[0]) model = GPTNeoXForCausalLM.from_pretrained(MODEL_NAME, low_cpu_mem_usage=True, use_auth_token=True, torch_dtype=torch.float16) model.eval() model.cuda() # generate generation_len = (input_len + 50) batch_params = dict(input_ids=input_ids, attention_mask=attention_mask, repetition_penalty=None, num_return_sequences=3, num_beams=1, do_sample=True, temperature=None, top_p=0.95, pad_token_id=1, max_length=generation_len) output_ids = model.generate(**batch_params).cpu()[0] # decode output_ids = output_ids[input_len:] decoded = tokenizer.decode(output_ids, skip_special_tokens=False) print(decoded) ``` ### Expected behavior It prints all dtypes if you execute on ko_gptneox_fp16_debug branch. All dtypes are expected to be float16, but actually float 32.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24261/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24261/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24260
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24260/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24260/comments
https://api.github.com/repos/huggingface/transformers/issues/24260/events
https://github.com/huggingface/transformers/issues/24260
1,755,469,370
I_kwDOCUB6oc5oolY6
24,260
Configuration
{ "login": "ErlindaEsco", "id": 87161796, "node_id": "MDQ6VXNlcjg3MTYxNzk2", "avatar_url": "https://avatars.githubusercontent.com/u/87161796?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ErlindaEsco", "html_url": "https://github.com/ErlindaEsco", "followers_url": "https://api.github.com/users/ErlindaEsco/followers", "following_url": "https://api.github.com/users/ErlindaEsco/following{/other_user}", "gists_url": "https://api.github.com/users/ErlindaEsco/gists{/gist_id}", "starred_url": "https://api.github.com/users/ErlindaEsco/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ErlindaEsco/subscriptions", "organizations_url": "https://api.github.com/users/ErlindaEsco/orgs", "repos_url": "https://api.github.com/users/ErlindaEsco/repos", "events_url": "https://api.github.com/users/ErlindaEsco/events{/privacy}", "received_events_url": "https://api.github.com/users/ErlindaEsco/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Duplicate of #" ]
1,686
1,686
1,686
NONE
null
[8329381051](http://@googlemaps.com) > #@https://github.com/huggingface/transformers/blob/fe861e578f50dc9c06de33cd361d2f625017e624/src/transformers/utils/hub.py#L734 / ```python` #23655 [WlP] add transfer script
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24260/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24260/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24259
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24259/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24259/comments
https://api.github.com/repos/huggingface/transformers/issues/24259/events
https://github.com/huggingface/transformers/pull/24259
1,755,452,527
PR_kwDOCUB6oc5S6ugq
24,259
[Mask2Former] Remove SwinConfig
{ "login": "NielsRogge", "id": 48327001, "node_id": "MDQ6VXNlcjQ4MzI3MDAx", "avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NielsRogge", "html_url": "https://github.com/NielsRogge", "followers_url": "https://api.github.com/users/NielsRogge/followers", "following_url": "https://api.github.com/users/NielsRogge/following{/other_user}", "gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}", "starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions", "organizations_url": "https://api.github.com/users/NielsRogge/orgs", "repos_url": "https://api.github.com/users/NielsRogge/repos", "events_url": "https://api.github.com/users/NielsRogge/events{/privacy}", "received_events_url": "https://api.github.com/users/NielsRogge/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "gently ping @amyeroberts to see if we could merge 🤗 " ]
1,686
1,687
1,687
CONTRIBUTOR
null
# What does this PR do? This PR removes what was probably a leftover from the Mask2Former PR; the model works without requiring those lines of code. Fixes #24244
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24259/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24259/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24259", "html_url": "https://github.com/huggingface/transformers/pull/24259", "diff_url": "https://github.com/huggingface/transformers/pull/24259.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24259.patch", "merged_at": 1687887236000 }
https://api.github.com/repos/huggingface/transformers/issues/24258
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24258/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24258/comments
https://api.github.com/repos/huggingface/transformers/issues/24258/events
https://github.com/huggingface/transformers/issues/24258
1,755,366,794
I_kwDOCUB6oc5ooMWK
24,258
Does fine-tuning an already fine-tuned model forget the previous features like weights and biases?
{ "login": "akesh1235", "id": 125154243, "node_id": "U_kgDOB3Wzww", "avatar_url": "https://avatars.githubusercontent.com/u/125154243?v=4", "gravatar_id": "", "url": "https://api.github.com/users/akesh1235", "html_url": "https://github.com/akesh1235", "followers_url": "https://api.github.com/users/akesh1235/followers", "following_url": "https://api.github.com/users/akesh1235/following{/other_user}", "gists_url": "https://api.github.com/users/akesh1235/gists{/gist_id}", "starred_url": "https://api.github.com/users/akesh1235/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/akesh1235/subscriptions", "organizations_url": "https://api.github.com/users/akesh1235/orgs", "repos_url": "https://api.github.com/users/akesh1235/repos", "events_url": "https://api.github.com/users/akesh1235/events{/privacy}", "received_events_url": "https://api.github.com/users/akesh1235/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @akesh1235, thanks for raising an issue! \r\n\r\nThis is a question best placed in our [forums](https://discuss.huggingface.co/). We try to reserve the github issues for feature requests and bug reports.", "Thankyou @amyeroberts mam , \r\nI have posted on forum [Link for My topic on forum ](https://discuss.huggingface.co/t/fine-tuning-the-existing-fine-tuned-model/43113?u=akesh1235) \r\n\r\nPlease acknowledge me over this\r\n@vanpelt\r\n@pvl\r\n@arfon", "@akesh1235 Great, thank you. \r\n\r\nIn future, please only @ relevant people in issues (transformers topics and the HF maintainers to ask are listed in our issues template). The people you tagged are incredibly busy people and if everyone did this it would be impossible for anyone to meaningfully address issues, PRs and notifications on github. ", "Okay mam I apologize,\r\nI'll take care of this next time\r\n\r\n", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,686
1,690
1,690
NONE
null
Is it possible to fine-tune an already fine-tuned model without losing previous features? Suppose I fine-tune a model on the "squad" dataset and then want to incrementally fine-tune the same model on some other dataset with the same/different data format and hyperparameters. Does this mean the model is now fine-tuned on 2 datasets, or does it forget the "squad" dataset when I fine-tune on the second dataset? Please acknowledge me over this. @vanpelt @pvl @arfon
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24258/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24258/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24257
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24257/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24257/comments
https://api.github.com/repos/huggingface/transformers/issues/24257/events
https://github.com/huggingface/transformers/issues/24257
1,755,358,220
I_kwDOCUB6oc5ooKQM
24,257
Add padding changes the output of BertModel
{ "login": "AaronNing", "id": 32625090, "node_id": "MDQ6VXNlcjMyNjI1MDkw", "avatar_url": "https://avatars.githubusercontent.com/u/32625090?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AaronNing", "html_url": "https://github.com/AaronNing", "followers_url": "https://api.github.com/users/AaronNing/followers", "following_url": "https://api.github.com/users/AaronNing/following{/other_user}", "gists_url": "https://api.github.com/users/AaronNing/gists{/gist_id}", "starred_url": "https://api.github.com/users/AaronNing/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AaronNing/subscriptions", "organizations_url": "https://api.github.com/users/AaronNing/orgs", "repos_url": "https://api.github.com/users/AaronNing/repos", "events_url": "https://api.github.com/users/AaronNing/events{/privacy}", "received_events_url": "https://api.github.com/users/AaronNing/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @AaronNing, thanks for opening this issue! \r\n\r\nThe (very small!) differences are arising because of the forced typing with `torch.set_default_dtype(torch.float32)`. Floating point arithmetic is inherently imprecise. By forcing tensors to be `float32`, where previously they weren't, you're introducing imprecision into architecture and ultimately the outputs. As you've observed, the difference is v. small if this is changed to `float64`. This is because the model can use more memory to more accurately represent numbers and perform calculations. \r\n\r\nBelow are the results if this type casting doesn't occur. As you can see, there is now no difference when introducing padding:\r\n![floating_point_arithmetic](https://github.com/huggingface/transformers/assets/22614925/e639bec0-2870-4008-bf35-c1a31584fc25)\r\n\r\nFor reference, when we're porting models into our library, we consider absolute differences on the order of ~1e-6 to be acceptable. ", "@amyeroberts Thanks for your reply! \r\nHowever, I removed the `torch.set_default_dtype(torch.float32)` line (with others unchanged) from the code above but still got the same figure. Did you modify other parts of the code? Thanks.\r\n\r\nAlso, according to [PyTorch's official document](https://pytorch.org/docs/stable/generated/torch.set_default_dtype.html), \r\n> When PyTorch is initialized its default floating point dtype is torch.float32\r\n\r\nSo I suspect that this operation is not what caused the inaccuracy?", "@AaronNing - you're right, my bad, I misinterpreted what `torch.set_default_dtype` was doing. Removing that was the only change I made to the code, however, like you, adding it back didn't affect behaviour and I observed the same results (no change with padding). \r\n\r\nIn this case, I suspect the differences might be due to hardware, which can affect float computations, I'm running on CPU with Mac M1. Other than that, I don't have a good guess. ", "> @AaronNing - you're right, my bad, I misinterpreted what `torch.set_default_dtype` was doing. Removing that was the only change I made to the code, however, like you, adding it back didn't affect behaviour and I observed the same results (no change with padding).\r\n> \r\n> In this case, I suspect the differences might be due to hardware, which can affect float computations, I'm running on CPU with Mac M1. Other than that, I don't have a good guess.\r\n\r\nI see, thanks. BTW do you have any idea why (in your plot) the output is different when input length = 1?", "> BTW do you have any idea why (in your plot) the output is different when input length = 1?\r\n\r\nI think this is just because the diff is calculated as: \r\n\r\n```python\r\nouts = outs - outs[0]\r\n```\r\n\r\nAnd the first element with have ~0 difference with itself. ", "@amyeroberts @AaronNing \r\n\r\nHi, I also met the same issue. I find that it may due to layernorm operation, since different sequence length will lead to different norm results.\r\n\r\nTo verify this, you can print the results [here](https://github.com/huggingface/transformers/blob/main/src/transformers/models/bert/modeling_bert.py#L239).\r\n\r\nAnd I tested it is hardware independent and not related to python accuracy error. I believe it could be a common issue for all transformer-based models.", "@StevenTang1998 Could you share which different hardware you ran this on? Were the differences you saw from running the same script that @AaronNing provided? If not, could you share yours? 
", "Yes, I have the similar results using the @AaronNing 's code.\r\n\r\n![1](https://github.com/huggingface/transformers/assets/37647985/b190d53d-8bf0-4750-bd83-06d5e72fe7cd)\r\n\r\nI find that it may due to layernorm operation, since different sequence length will lead to different norm results.\r\nTo verify this, you can print the results [here](https://github.com/huggingface/transformers/blob/main/src/transformers/models/bert/modeling_bert.py#L239).\r\n\r\nThis the system info of mine:\r\n- `transformers` version: 4.30.2\r\n- Platform: Linux-5.15.0-76-generic-x86_64-with-glibc2.31\r\n- Python version: 3.10.10\r\n- Huggingface_hub version: 0.15.1\r\n- Safetensors version: 0.3.1\r\n- PyTorch version (GPU?): 1.13.1 (True)", "Hi @StevenTang1998, with respect to the hardware, what I meant was which hardware have you run this on to confirm that it's invariant? \r\n\r\nIf I run on my CPU I get the same as before - ~1e-7 difference across all sequence lengths\r\n\r\nIf I run on GPU, I see mostly ~ 1e-6 difference\r\n\r\n![image](https://github.com/huggingface/transformers/assets/22614925/f005bf40-a018-47f9-8948-82978bb1f788)\r\n\r\nFor my own experiments, the difference in observations seems to be arising from the hardware. ", "I conducted the experiment on RTX 3090 GPUs.\r\n", "@StevenTang1998 In order to confirm it's invariant one must run on at least two different pieces of hardware - ideally CPU and GPU - and obtain the same results.", "@amyeroberts \r\n\r\nI reran the experiments and want to reclaim that I don't believe it is an coincidence since @AaronNing and I both got the variant results. \r\n**I have printed the results before and after the layernorm. The results before the layernorm are exactly the same regardless the pad length, while the results after the layernorm get variant.**\r\nSince the layernorm will normalize all the word embeddings in one sequence, I think add pad tokens will affect the normalization results.\r\n\r\n- CPU\r\n![cpu](https://github.com/huggingface/transformers/assets/37647985/b35304d5-25c7-4e41-9884-58cb72ba40ef)\r\n\r\n- GPU (3090 and A100 have the same results)\r\n![a100](https://github.com/huggingface/transformers/assets/37647985/52b9055c-9e74-4822-b2ce-763f2117b926)\r\n\r\n", "@StevenTang1998 Thanks. It sounds reasonable that LN caused the difference. \r\n> I have printed the results before and after the layernorm. The results before the layernorm are exactly the same regardless the pad length, while the results after the layernorm get variant.\r\n\r\nIt would be very nice if you provide one or two figures to demonstrate that, though I'm already taking it. \r\n> I believe it could be a common issue for all transformer-based models.\r\n\r\nI agree. Maybe when the model is well-trained, this difference will be reasonably small (as shown in our experiments), so this is not regarded as a big issue." ]
1,686
1,690
1,690
NONE
null
### System Info - `transformers` version: 4.30.1 - Platform: Linux-4.4.0-131-generic-x86_64-with-glibc2.10 - Python version: 3.8.16 - Huggingface_hub version: 0.15.1 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 2.0.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: False - Using distributed or parallel set-up in script?: False ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ```python import matplotlib.pyplot as plt import numpy as np import torch from transformers import BertModel, __version__ print(f"torch version: {torch.__version__}") print(f"transformers version: {__version__}") np.random.seed(42) torch.manual_seed(42) torch.set_default_dtype(torch.float32) # Load model model = BertModel.from_pretrained('bert-base-uncased') model = model.eval() # pad and mask x = [10] m = [1] outs = [] for pads in range(511): xt = torch.LongTensor([x]) mt = torch.FloatTensor([m]) out = model(xt, attention_mask=mt).last_hidden_state[0, 0, 0].item() outs.append(out) x = x + [0] m = m + [0] # plot outs outs = np.array(outs) outs = outs - outs[0] plt.figure(figsize=(4, 4), dpi=80) plt.plot(outs) plt.show() ``` ### Expected behavior ``` torch version: 2.0.1+cu117 transformers version: 4.30.1 Some weights of the model checkpoint at bert-base-uncased were not used when initializing BertModel: ['cls.predictions.transform.LayerNorm.weight', 'cls.predictions.bias', 'cls.predictions.transform.LayerNorm.bias', 'cls.predictions.transform.dense.bias', 'cls.predictions.transform.dense.weight', 'cls.seq_relationship.bias', 'cls.seq_relationship.weight'] - This IS expected if you are initializing BertModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model). - This IS NOT expected if you are initializing BertModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). ``` ![test](https://github.com/huggingface/transformers/assets/32625090/cc05b73c-8ed6-42d5-ac58-7a8c51e171ad) (x: pad length; y: error) And `torch.set_default_dtype(torch.float64)` reduces this error to ~1e-15.
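The comments above suggest printing activations before and after the embedding LayerNorm to locate where the padding-dependent drift first appears. The snippet below is a minimal sketch of that experiment, not part of the original report; the token id 10 and the pad length of 8 simply mirror the reproduction script above.

```python
# Minimal sketch (not from the original thread): hook the embedding LayerNorm of
# bert-base-uncased and compare its input/output at the first (real) token
# position for an unpadded vs. a padded sequence.
import torch
from transformers import BertModel

model = BertModel.from_pretrained("bert-base-uncased").eval()
captured = {}

def hook(module, inputs, output):
    captured["pre"] = inputs[0].detach().clone()
    captured["post"] = output.detach().clone()

handle = model.embeddings.LayerNorm.register_forward_hook(hook)

def first_token_states(pad_len):
    input_ids = torch.LongTensor([[10] + [0] * pad_len])
    attention_mask = torch.FloatTensor([[1] + [0] * pad_len])
    with torch.no_grad():
        model(input_ids, attention_mask=attention_mask)
    return captured["pre"][0, 0], captured["post"][0, 0]

pre_short, post_short = first_token_states(pad_len=0)
pre_long, post_long = first_token_states(pad_len=8)
print("max |diff| before LayerNorm:", (pre_short - pre_long).abs().max().item())
print("max |diff| after LayerNorm: ", (post_short - post_long).abs().max().item())

handle.remove()
```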
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24257/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/huggingface/transformers/issues/24257/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24256
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24256/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24256/comments
https://api.github.com/repos/huggingface/transformers/issues/24256/events
https://github.com/huggingface/transformers/pull/24256
1,755,355,793
PR_kwDOCUB6oc5S6Zcx
24,256
Skip `GPT-J` fx tests for torch < 1.12
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,686
1,686
1,686
COLLABORATOR
null
# What does this PR do? After #22069, the fx tests for gpt-j with torch < 1.12 give an error ```bash (line 941) AssertionError: Couldn't trace module: 'len' is not supported in symbolic tracing by default. If you want this call to be recorded, please call torch.fx.wrap('len') at module scope ``` It seems best to skip these tests in this case, as they pass with recent torch versions.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24256/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24256/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24256", "html_url": "https://github.com/huggingface/transformers/pull/24256", "diff_url": "https://github.com/huggingface/transformers/pull/24256.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24256.patch", "merged_at": 1686681207000 }
https://api.github.com/repos/huggingface/transformers/issues/24255
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24255/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24255/comments
https://api.github.com/repos/huggingface/transformers/issues/24255/events
https://github.com/huggingface/transformers/pull/24255
1,755,340,645
PR_kwDOCUB6oc5S6WKW
24,255
Fix how we detect the TF package
{ "login": "Rocketknight1", "id": 12866554, "node_id": "MDQ6VXNlcjEyODY2NTU0", "avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Rocketknight1", "html_url": "https://github.com/Rocketknight1", "followers_url": "https://api.github.com/users/Rocketknight1/followers", "following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}", "gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}", "starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions", "organizations_url": "https://api.github.com/users/Rocketknight1/orgs", "repos_url": "https://api.github.com/users/Rocketknight1/repos", "events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}", "received_events_url": "https://api.github.com/users/Rocketknight1/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @apbard as well since I think the issue was introduced in #23163, but thankfully it's a relatively easy fix!", "_The documentation is not available anymore as the PR was closed or merged._" ]
1,686
1,686
1,686
MEMBER
null
Our framework detection code calls `_is_package_available()` for TensorFlow, but this code fails when only the `tensorflow-cpu` package is present. The failure occurs because `importlib_metadata.version("tensorflow")` throws an error in the version detection branch of `_is_package_available` unless the core `tensorflow` package is installed. I solved this by just calling `importlib.util.find_spec("tensorflow")` instead of `_is_package_available()`. However, we could also resolve this issue by rewriting `_is_package_available()` so that it only takes the version check branch when `return_version` is `True`. The `importlib_metadata.version()` call is only used to get the package version, but it causes the entire `_is_package_available()` call to fail if it can't find a version, even if the `importlib.util.find_spec()` call was a success. ccing @sgugger because there's a `TODO` above that function [requesting his attention](https://github.com/huggingface/transformers/blob/main/src/transformers/utils/import_utils.py#L40), so I'd like his input on the right approach here! Fixes #24253
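For illustration only, here is a rough sketch of detection logic along the lines described above. This is not the actual `import_utils` code, and the candidate distribution names are assumptions; the point is that the import spec check and the version lookup are decoupled, so a missing `tensorflow` distribution no longer masks an installed `tensorflow-cpu`.

```python
# Rough sketch, not library code: confirm the module can be imported, then try
# several possible distribution names to recover a version string.
import importlib.util
import importlib.metadata

def is_tf_available():
    if importlib.util.find_spec("tensorflow") is None:
        return False, "N/A"
    candidates = ("tensorflow", "tensorflow-cpu", "tensorflow-gpu",
                  "tf-nightly", "intel-tensorflow", "tensorflow-macos")
    for name in candidates:
        try:
            return True, importlib.metadata.version(name)
        except importlib.metadata.PackageNotFoundError:
            continue
    # Importable but no known distribution name: treat as available anyway.
    return True, "unknown"

print(is_tf_available())
```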
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24255/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24255/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24255", "html_url": "https://github.com/huggingface/transformers/pull/24255", "diff_url": "https://github.com/huggingface/transformers/pull/24255.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24255.patch", "merged_at": 1686679071000 }
https://api.github.com/repos/huggingface/transformers/issues/24254
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24254/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24254/comments
https://api.github.com/repos/huggingface/transformers/issues/24254/events
https://github.com/huggingface/transformers/issues/24254
1,755,269,426
I_kwDOCUB6oc5on0ky
24,254
ValueError: You have to specify either decoder_input_ids or decoder_inputs_embeds Transformers Translation Tutorial Repro
{ "login": "SoyGema", "id": 24204714, "node_id": "MDQ6VXNlcjI0MjA0NzE0", "avatar_url": "https://avatars.githubusercontent.com/u/24204714?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SoyGema", "html_url": "https://github.com/SoyGema", "followers_url": "https://api.github.com/users/SoyGema/followers", "following_url": "https://api.github.com/users/SoyGema/following{/other_user}", "gists_url": "https://api.github.com/users/SoyGema/gists{/gist_id}", "starred_url": "https://api.github.com/users/SoyGema/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SoyGema/subscriptions", "organizations_url": "https://api.github.com/users/SoyGema/orgs", "repos_url": "https://api.github.com/users/SoyGema/repos", "events_url": "https://api.github.com/users/SoyGema/events{/privacy}", "received_events_url": "https://api.github.com/users/SoyGema/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey @SoyGema 👋 \r\n\r\nFrom your exception, I believe the issue is at the data preparation stage -- it is pretty much complaining that your dataset has no labels. Have you followed the data preprocessing steps described [here](https://huggingface.co/docs/transformers/tasks/translation#preprocess)?", "Hello there @gante ! Thanks for your quick response and help ! \r\nI really appreciate it . 🥇 \r\nI´ve uploaded the notebook [here](https://github.com/SoyGema/The-Lord-of-The-Words-The-two-frameworks/blob/main/notebooks/TransformerTranslationPOCV2.ipynb) . As far as I can understand (let me know if Im missing something here ), Im using the preprocessing function. \r\n\r\nIn fact, the _tokenized_books (cell 16) returns something in the form of\r\n\r\n```\r\nDatasetDict({\r\n train: Dataset({\r\n features: ['id', 'translation', 'input_ids', 'attention_mask', 'labels'],\r\n num_rows: 1123\r\n })\r\n test: Dataset({\r\n features: ['id', 'translation', 'input_ids', 'attention_mask', 'labels'],\r\n num_rows: 281\r\n })\r\n})\r\n```\r\n\r\nAnd _data_collator_ (cell 19) returns something like\r\n\r\n```\r\nDataCollatorForSeq2Seq(tokenizer=T5Tokenizer(name_or_path='t5-small', vocab_size=32100, model_max_length=512, is_fast=False, padding_side='right', truncation_side='right', special_tokens={'eos_token': '</s>', 'unk_token': '<unk>', 'pad_token': '<pad>', 'additional_special_tokens': ['<extra_id_0>', .....\r\n```\r\n\r\nAm I missing something from the video that should be in code ?\r\nfor quick testing purposes, Im with **pt_to_en** dataset, that seems to have same characteristics. I've checked that tokenized_books function returns the same data structure type in **pt_to_en** that in **fr_to_en** dataset\r\n\r\nMy apologies in advance for the extremely notebook verbose code regarding GPU low level operation use. I am trying to optimize for that therefore all trace. \r\n\r\nThanks so so much for your time on this\r\nHappy if you can point me on the right direction! 👍 \r\n\r\n", "Hey @SoyGema 👋 \r\n\r\nYour `KerasMetricCallback` was missing `predict_with_generate=True` -- metrics that rely on text generation must pass this flag, as generating text is different from a model forward pass. It should become `metric_callback = KerasMetricCallback(metric_fn=compute_metrics, eval_dataset=tf_test_set, predict_with_generate=True)`\r\n\r\nFor future reference in case you encounter further bugs, have a look at our complete translation example: https://github.com/huggingface/transformers/blob/main/examples/tensorflow/translation/run_translation.py", "Hello there @gante 👋\r\n\r\nThanks for the reference. I'm definetly having this as a north script and also using it !\r\nBeen thinking about how to structure this exploration and also _indexing the roadblocks/bugs/solutions so other users can benefit from it_ . \r\n\r\nI'm closing this issue (as it is solved but other arised )and probably open another ones in [my own repo](https://github.com/SoyGema/The-Lord-of-The-Words-The-two-frameworks) as it goes so issues are unitary-structured . Hope this makes sense. Hope I can take it from there and not disturb you!\r\n\r\nThanks again!", "Just for Reproducibility. If someone wants to go through the script example. Documentation about flag configuration and more can be found [here](https://huggingface.co/docs/transformers/run_scripts)" ]
1,686
1,688
1,687
CONTRIBUTOR
null
### System Info ### Context Hello There! First and foremost, congrats for Transformers Translation[ tutorial](https://huggingface.co/docs/transformers/tasks/translation). 👍 It serves as a Spark for building english-to-many translation languages models! I´m following it along with TF mostly reproducing it in a jupyter Notebook with TF for mac with GPU enabled Using the following dependency versions. ``` tensorflow-macos==2.9.0 tensorflow-metal==0.5.0 transformers ==4.29.2 ``` _* NOTE : tensorflow-macos dependencies are [fixed ](https://developer.apple.com/forums/thread/721619) for ensuring GPU training_ ### Who can help? @ArthurZucker @younesbelkada @gante maybe? ### Information - [x] The official example scripts - [x] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [x] My own task or dataset (give details below) ### Reproduction ### Issue Description Im finding the following error when **fitting the model** for finetunning a model coming from [TFAutoModelForSeq2SeqLM](https://huggingface.co/docs/transformers/v4.30.0/en/model_doc/auto#transformers.TFAutoModelForSeq2SeqLM) autoclass ``` with tf.device('/device:GPU:0'): model.fit(x=tf_train_set, validation_data=tf_test_set, epochs=1, callbacks= callbacks ) ``` It is returning ``` ValueError: You have to specify either decoder_input_ids or decoder_inputs_embeds Call arguments received by layer "decoder" (type TFT5MainLayer): • self=None • input_ids=None • attention_mask=None • encoder_hidden_states=tf.Tensor(shape=(32, 96, 512), dtype=float32) • encoder_attention_mask=tf.Tensor(shape=(32, 96), dtype=int32) • inputs_embeds=None • head_mask=None • encoder_head_mask=None • past_key_values=None • use_cache=True • output_attentions=False • output_hidden_states=False • return_dict=True • training=False Call arguments received by layer "tft5_for_conditional_generation" (type TFT5ForConditionalGeneration): • self={'input_ids': 'tf.Tensor(shape=(32, 96), dtype=int64)', 'attention_mask': 'tf.Tensor(shape=(32, 96), dtype=int64)'} • input_ids=None • attention_mask=None • decoder_input_ids=None • decoder_attention_mask=None • head_mask=None • decoder_head_mask=None • encoder_outputs=None • past_key_values=None • inputs_embeds=None • decoder_inputs_embeds=None • labels=None • use_cache=None • output_attentions=None • output_hidden_states=None • return_dict=None • training=False ``` ### Backtrace Tried: * Remove callbacks : The model is trained, but of course not loaded into the Hub, nor the metrics computed * Followed #16234 , this[ comment ](https://github.com/huggingface/transformers/issues/16234#issuecomment-1071114294) and **ensured that Im using AutoTokenizer.** This glimpsed that this could be related to TFAutoModelForSeq2SeqLM . ``` model = TFAutoModelForSeq2SeqLM.from_pretrained(checkpoint) ``` Seems to be working correctly. Therefore I assume that the **pre-trained model is loaded** * Also followed #21116 and added `save_strategy=no` argument in [PushToCallBack ](https://github.com/huggingface/transformers/issues/21116#issuecomment-1382869967) , but the error persisted ### Expected behavior Model trained should be uploaded to the Hub. The folder appears empty , there is an error ### Hypothesis At this point, what Im guessing is that once I load the model I shall redefine the verbose error trace? Any help please of how to do this ? :) or how can I fix it ? Do I have to define a specific Trainer ? Any idea of where I can find this in docs?
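For reference, the resolution given in the comments above was to pass `predict_with_generate=True` to `KerasMetricCallback`. Below is a hedged usage sketch of that fix; `compute_metrics`, `tf_train_set`, `tf_test_set`, and `model` are assumed to be defined as in the translation tutorial and are not created here.

```python
# Sketch of the fix from the comments: metrics that rely on generated text need
# predict_with_generate=True, otherwise the callback runs a plain forward pass.
from transformers.keras_callbacks import KerasMetricCallback

metric_callback = KerasMetricCallback(
    metric_fn=compute_metrics,       # assumed: defined as in the tutorial
    eval_dataset=tf_test_set,        # assumed: tokenized tf.data.Dataset
    predict_with_generate=True,      # required for seq2seq metrics such as SacreBLEU
)

model.fit(
    x=tf_train_set,
    validation_data=tf_test_set,
    epochs=1,
    callbacks=[metric_callback],
)
```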
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24254/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24254/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24253
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24253/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24253/comments
https://api.github.com/repos/huggingface/transformers/issues/24253/events
https://github.com/huggingface/transformers/issues/24253
1,755,252,803
I_kwDOCUB6oc5onwhD
24,253
transformers does not detect Tensorflow when installing tensorflow-cpu package
{ "login": "faph", "id": 8397805, "node_id": "MDQ6VXNlcjgzOTc4MDU=", "avatar_url": "https://avatars.githubusercontent.com/u/8397805?v=4", "gravatar_id": "", "url": "https://api.github.com/users/faph", "html_url": "https://github.com/faph", "followers_url": "https://api.github.com/users/faph/followers", "following_url": "https://api.github.com/users/faph/following{/other_user}", "gists_url": "https://api.github.com/users/faph/gists{/gist_id}", "starred_url": "https://api.github.com/users/faph/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/faph/subscriptions", "organizations_url": "https://api.github.com/users/faph/orgs", "repos_url": "https://api.github.com/users/faph/repos", "events_url": "https://api.github.com/users/faph/events{/privacy}", "received_events_url": "https://api.github.com/users/faph/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@apbard I see you touched the Tensorflow detection logic here: https://github.com/huggingface/transformers/commit/83eda6435e7c842e55b42a529e9bf367bf2a126b. ", "cc @ydshieh @Rocketknight1 ", "Just to confirm, if I set env var `FORCE_TF_AVAILABLE` all works as expected. (I'd like to avoid that of course)", "Remark: On our CircleCI, the `tensorflow` is installed (forced by `tensorflow_text`), so there is `tensorflow` and `tensorflow_cpu` in the CI environment.", "Confirmed this issue on my end. The problem is the name discrepancy between the packages and our `_is_package_available` code. Will open a PR to fix it!", "PR is open at #24255 ", "@faph PR is merged. Since this will probably affect quite a few people, I'll leave a note here to other users who find this issue:\r\n\r\nYou should be able to resolve this by installing from `main` with `pip install --upgrade git+https://github.com/huggingface/transformers.git`. After our next version release, you can go back to just `pip install --upgrade transformers`. As this is a relatively serious issue we may do a hotfix release, but this is still under discussion.", "Appreciate the efforts @Rocketknight1 !", "Thanks for the quick bug report too @faph - our test suite wasn't checking installations with only `tensorflow-cpu`, so we never would have picked up this issue in time if you hadn't reported it!" ]
1,686
1,686
1,686
NONE
null
### System Info This issue was introduced in transformers 4.30.x. Output of pip list (partial): ``` tensorflow-cpu 2.12.0 tensorflow-estimator 2.12.0 tensorflow-intel 2.12.0 tensorflow-io-gcs-filesystem 0.31.0 transformers 4.30.1 ``` ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Manifests itself like this: ``` None of PyTorch, TensorFlow >= 2.0, or Flax have been found. Models won't be available and only tokenizers, configuration and file/data utilities can be used. ImportError while loading <REDACTED> from transformers import file_utils, modeling_tf_outputs, modeling_tf_utils .venv\lib\site-packages\transformers\modeling_tf_utils.py:42: in <module> from .generation import GenerationConfig, TFGenerationMixin E ImportError: cannot import name 'TFGenerationMixin' from 'transformers.generation' (<REDACTED>) ``` ### Expected behavior No errors, Tensorflow detected.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24253/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24253/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24252
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24252/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24252/comments
https://api.github.com/repos/huggingface/transformers/issues/24252/events
https://github.com/huggingface/transformers/issues/24252
1,755,191,070
I_kwDOCUB6oc5onhce
24,252
Peft Model not resuming from Checkpoint
{ "login": "llohann-speranca", "id": 105556006, "node_id": "U_kgDOBkqoJg", "avatar_url": "https://avatars.githubusercontent.com/u/105556006?v=4", "gravatar_id": "", "url": "https://api.github.com/users/llohann-speranca", "html_url": "https://github.com/llohann-speranca", "followers_url": "https://api.github.com/users/llohann-speranca/followers", "following_url": "https://api.github.com/users/llohann-speranca/following{/other_user}", "gists_url": "https://api.github.com/users/llohann-speranca/gists{/gist_id}", "starred_url": "https://api.github.com/users/llohann-speranca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/llohann-speranca/subscriptions", "organizations_url": "https://api.github.com/users/llohann-speranca/orgs", "repos_url": "https://api.github.com/users/llohann-speranca/repos", "events_url": "https://api.github.com/users/llohann-speranca/events{/privacy}", "received_events_url": "https://api.github.com/users/llohann-speranca/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @llohann-speranca \r\nThanks for digging, flagging the issue and proposing a fix! \r\nIndeed didn't properly tried it with `resume_from_checkpoint`. Yes please could you open a PR and tag me there? \r\nThanks and looking forward to the PR!", "Hi Younes. Thank you for the reply. I will do it later today, after working\r\nhours.\r\n\r\nOn Tue, Jun 13, 2023 at 6:05 PM Younes Belkada ***@***.***>\r\nwrote:\r\n\r\n> Hi @llohann-speranca <https://github.com/llohann-speranca>\r\n> Thanks for digging, flagging the issue and proposing a fix!\r\n> Indeed didn't properly tried it with resume_from_checkpoint. Yes please\r\n> could you open a PR and tag me there?\r\n> Thanks and looking forward to the PR!\r\n>\r\n> —\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/huggingface/transformers/issues/24252#issuecomment-1589610039>,\r\n> or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/AZFKQJWCDRDMFWVI6OYNZ23XLCFVLANCNFSM6AAAAAAZFCV3FM>\r\n> .\r\n> You are receiving this because you were mentioned.Message ID:\r\n> ***@***.***>\r\n>\r\n" ]
1,686
1,687
1,687
CONTRIBUTOR
null
### System Info Running from huggingface/transformers-pytorch-gpu docker image. - `transformers` version: 4.31.0.dev0 - Platform: Linux-5.4.0-139-generic-x86_64-with-glibc2.29 - Python version: 3.8.10 - Huggingface_hub version: 0.15.1 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 2.0.1+cu118 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help? @younesbelkada Since PR #24073 the Trainer does not resume from checkpoint. The Issue happens since `PeftModel.save_pretrained` saves only adapter's files, but the `model._load_from_checkpoint` method expects a full pytorch checkpoint. I worked around that by subclassing the Trainer class. I am willing to submit a PR merging the `_load_from_peft_checkpoint` with the Hugging face Trainer. ```python class PeftTrainer(Trainer): def _load_from_peft_checkpoint(self, resume_from_checkpoint, model): adapter_weights_file = os.path.join(resume_from_checkpoint, ADAPTER_WEIGHTS_NAME) adapter_safe_weights_file = os.path.join(resume_from_checkpoint, ADAPTER_SAFE_WEIGHTS_NAME) if not any( os.path.isfile(f) for f in [adapter_weights_file, adapter_safe_weights_file] ): raise ValueError(f"Can't find a valid checkpoint at {resume_from_checkpoint}") logger.info(f"Loading model from {resume_from_checkpoint}.") # Load adapters following PR # 24096 if is_peft_available() and isinstance(model, PeftModel): # If train a model using PEFT & LoRA, assume that adapter have been saved properly. if hasattr(model, "active_adapter") and hasattr(model, "load_adapter"): if os.path.exists(resume_from_checkpoint) or os.path.exists(resume_from_checkpoint): model.load_adapter(resume_from_checkpoint, model.active_adapter) # Load_adapter has no return value present, modify it when appropriate. from torch.nn.modules.module import _IncompatibleKeys load_result = _IncompatibleKeys([], []) else: logger.warning( "The intermediate checkpoints of PEFT may not be saved correctly, " f"using `TrainerCallback` to save {ADAPTER_WEIGHTS_NAME} in corresponding folders, " "here are some examples https://github.com/huggingface/peft/issues/96" ) else: logger.warning("Could not load adapter model, make sure to have `peft>=0.3.0` installed") def _load_from_checkpoint(self, resume_from_checkpoint, model=None): if model is None: model = self.model_wrapped if is_sagemaker_mp_enabled() else self.model if is_peft_available() and isinstance(model, PeftModel): # Try to load adapters before trying to load a torch model try: return self._load_from_peft_checkpoint(resume_from_checkpoint, model=model) except: return super()._load_from_checkpoint(resume_from_checkpoint, model=model) # If it is not a PeftModel, use the original _load_from_checkpoint else: return super()._load_from_checkpoint(resume_from_checkpoint, model=model) ``` ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) 
- [X] My own task or dataset (give details below) ### Reproduction ```python from transformers import ( AutoModelForSequenceClassification, AutoTokenizer, TrainingArguments, Trainer, DataCollatorWithPadding, ) from peft import get_peft_config, get_peft_model, PromptTuningInit, PromptTuningConfig, TaskType, PeftType, PromptEncoderConfig import torch import os import evaluate from datasets import Dataset # P-tuning hyper-parameters model_id = "microsoft/deberta-v3-base" model = AutoModelForSequenceClassification.from_pretrained(model_id) tokenizer = AutoTokenizer.from_pretrained(model_id) # Load tokenized datesets train_ds = test_ds = Dataset.from_dict({'input_ids':[[1, 2430, 429, 92207, 303, 331, 1789, 3495, 2344, 1300, 355, 268, 1131, 270, 310, 354, 3732, 388, 2],[1, 1865, 843, 20060, 265, 483, 2196, 281, 411, 2784, 2]], 'labels':[0,0]}) PEFT_CONFIG ={"peft_type":"P_TUNING", "num_virtual_tokens": 30, "encoder_reparameterization_type": "MLP", "encoder_hidden_size": 128, "num_attention_heads": 17, } peft_config = PromptEncoderConfig( task_type="SEQ_CLS", **PEFT_CONFIG ) model = get_peft_model(model, peft_config) data_collator = DataCollatorWithPadding(tokenizer=tokenizer, padding=True, max_length=482,) training_args = TrainingArguments( output_dir='p', per_device_train_batch_size=1, per_device_eval_batch_size=1, num_train_epochs=1, load_best_model_at_end=False, save_strategy='epoch' ) trainer = Trainer( model=model, args=training_args, train_dataset=train_ds, eval_dataset=test_ds, tokenizer=tokenizer, data_collator=data_collator, compute_metrics=evaluate.load('accuracy') ) trainer.train() training_args = TrainingArguments( output_dir='p', per_device_train_batch_size=1, per_device_eval_batch_size=1, num_train_epochs=2, load_best_model_at_end=False, save_strategy='epoch' ) trainer = Trainer( model=model, args=training_args, train_dataset=train_ds, eval_dataset=test_ds, tokenizer=tokenizer, data_collator=data_collator, compute_metrics=evaluate.load('accuracy') ) trainer.train(resume_from_checkpoint=True) ``` Raises ``` ValueError Traceback (most recent call last) Cell In[26], line 92 70 training_args = TrainingArguments( 71 output_dir='p', 72 per_device_train_batch_size=1, (...) 77 78 ) 82 trainer = Trainer( 83 model=model, 84 args=training_args, (...) 
89 compute_metrics=evaluate.load('accuracy') 90 ) ---> 92 trainer.train(resume_from_checkpoint=True) File /transformers/src/transformers/trainer.py:1634, in Trainer.train(self, resume_from_checkpoint, trial, ignore_keys_for_eval, **kwargs) 1631 raise ValueError(f"No valid checkpoint found in output directory ({args.output_dir})") 1633 if resume_from_checkpoint is not None and not is_sagemaker_mp_enabled() and not self.is_deepspeed_enabled: -> 1634 self._load_from_checkpoint(resume_from_checkpoint) 1636 # If model was re-initialized, put it on the right device and update self.model_wrapped 1637 if model_reloaded: File /transformers/src/transformers/trainer.py:2119, in Trainer._load_from_checkpoint(self, resume_from_checkpoint, model) 2114 safe_weights_index_file = os.path.join(resume_from_checkpoint, SAFE_WEIGHTS_INDEX_NAME) 2116 if not any( 2117 os.path.isfile(f) for f in [weights_file, safe_weights_file, weights_index_file, safe_weights_index_file] 2118 ): -> 2119 raise ValueError(f"Can't find a valid checkpoint at {resume_from_checkpoint}") 2121 logger.info(f"Loading model from {resume_from_checkpoint}.") 2123 if os.path.isfile(config_file): ValueError: Can't find a valid checkpoint at p/checkpoint-2 ``` ### Expected behavior The train should be resumed from Epoch 1 and proceeded up to Epoch 2.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24252/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24252/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24251
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24251/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24251/comments
https://api.github.com/repos/huggingface/transformers/issues/24251/events
https://github.com/huggingface/transformers/pull/24251
1,755,158,632
PR_kwDOCUB6oc5S5t17
24,251
Add `torch >=1.12` requirement for `Tapas`
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,686
1,686
1,686
COLLABORATOR
null
# What does this PR do? Tapas files were changed in #20149 to use torch's `scatter`. The torch tensor method `scatter_reduce` accepts the argument `src` only for torch >= 1.12. This PR adds some warnings/requirements to the Tapas modeling/test files to avoid test failures in past CI with torch <= 1.11. (A previous similar PR is #19851.)
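To make the version gate concrete, here is a small hedged example (not code from the PR itself): `Tensor.scatter_reduce` with a `src` argument only exists from torch 1.12 onwards, so the call below fails on older versions.

```python
# Runs on torch >= 1.12; on torch <= 1.11 scatter_reduce with src is
# unavailable, which is why the Tapas tests are now version-gated.
import torch

print(torch.__version__)
out = torch.zeros(3)
index = torch.tensor([0, 1, 1, 2])
src = torch.tensor([1.0, 2.0, 3.0, 4.0])
print(out.scatter_reduce(0, index, src, reduce="sum"))  # tensor([1., 5., 4.])
```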
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24251/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24251/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24251", "html_url": "https://github.com/huggingface/transformers/pull/24251", "diff_url": "https://github.com/huggingface/transformers/pull/24251.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24251.patch", "merged_at": 1686676781000 }
https://api.github.com/repos/huggingface/transformers/issues/24250
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24250/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24250/comments
https://api.github.com/repos/huggingface/transformers/issues/24250/events
https://github.com/huggingface/transformers/pull/24250
1,755,136,597
PR_kwDOCUB6oc5S5pCo
24,250
docs wrt using accelerate launcher with trainer
{ "login": "pacman100", "id": 13534540, "node_id": "MDQ6VXNlcjEzNTM0NTQw", "avatar_url": "https://avatars.githubusercontent.com/u/13534540?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pacman100", "html_url": "https://github.com/pacman100", "followers_url": "https://api.github.com/users/pacman100/followers", "following_url": "https://api.github.com/users/pacman100/following{/other_user}", "gists_url": "https://api.github.com/users/pacman100/gists{/gist_id}", "starred_url": "https://api.github.com/users/pacman100/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pacman100/subscriptions", "organizations_url": "https://api.github.com/users/pacman100/orgs", "repos_url": "https://api.github.com/users/pacman100/repos", "events_url": "https://api.github.com/users/pacman100/events{/privacy}", "received_events_url": "https://api.github.com/users/pacman100/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,686
1,686
1,686
CONTRIBUTOR
null
# What does this PR do? 1. A major source of confusion at the moment is how to use the accelerate launcher with the Trainer. This PR addresses it.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24250/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24250/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24250", "html_url": "https://github.com/huggingface/transformers/pull/24250", "diff_url": "https://github.com/huggingface/transformers/pull/24250.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24250.patch", "merged_at": 1686682866000 }
https://api.github.com/repos/huggingface/transformers/issues/24249
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24249/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24249/comments
https://api.github.com/repos/huggingface/transformers/issues/24249/events
https://github.com/huggingface/transformers/pull/24249
1,755,053,325
PR_kwDOCUB6oc5S5Wmz
24,249
update FSDP save and load logic
{ "login": "pacman100", "id": 13534540, "node_id": "MDQ6VXNlcjEzNTM0NTQw", "avatar_url": "https://avatars.githubusercontent.com/u/13534540?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pacman100", "html_url": "https://github.com/pacman100", "followers_url": "https://api.github.com/users/pacman100/followers", "following_url": "https://api.github.com/users/pacman100/following{/other_user}", "gists_url": "https://api.github.com/users/pacman100/gists{/gist_id}", "starred_url": "https://api.github.com/users/pacman100/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pacman100/subscriptions", "organizations_url": "https://api.github.com/users/pacman100/orgs", "repos_url": "https://api.github.com/users/pacman100/repos", "events_url": "https://api.github.com/users/pacman100/events{/privacy}", "received_events_url": "https://api.github.com/users/pacman100/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,686
1,686
1,686
CONTRIBUTOR
null
# What does this PR do? 1. This should be merged after PR https://github.com/huggingface/accelerate/pull/1576. 2. It updates the saving and loading utils for FSDP to be in sync with the latest PyTorch release.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24249/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24249/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24249", "html_url": "https://github.com/huggingface/transformers/pull/24249", "diff_url": "https://github.com/huggingface/transformers/pull/24249.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24249.patch", "merged_at": 1686683956000 }
https://api.github.com/repos/huggingface/transformers/issues/24248
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24248/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24248/comments
https://api.github.com/repos/huggingface/transformers/issues/24248/events
https://github.com/huggingface/transformers/issues/24248
1,754,980,707
I_kwDOCUB6oc5omuFj
24,248
auto_find_batch_size=True and eval_steps=ratio unexpected behavior
{ "login": "edmcman", "id": 1017189, "node_id": "MDQ6VXNlcjEwMTcxODk=", "avatar_url": "https://avatars.githubusercontent.com/u/1017189?v=4", "gravatar_id": "", "url": "https://api.github.com/users/edmcman", "html_url": "https://github.com/edmcman", "followers_url": "https://api.github.com/users/edmcman/followers", "following_url": "https://api.github.com/users/edmcman/following{/other_user}", "gists_url": "https://api.github.com/users/edmcman/gists{/gist_id}", "starred_url": "https://api.github.com/users/edmcman/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/edmcman/subscriptions", "organizations_url": "https://api.github.com/users/edmcman/orgs", "repos_url": "https://api.github.com/users/edmcman/repos", "events_url": "https://api.github.com/users/edmcman/events{/privacy}", "received_events_url": "https://api.github.com/users/edmcman/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @muellerzr ", "Any chance you could provide a minimal reproducer I can test with?\r\n\r\nOtherwise please try installing via `pip install git+https://github.com/huggingface/transformers@muellerzr-ratio` to see if that fixes it? 🙏 ", "Let me try your patch first.", "With the patch, still evaling every 66 steps. Let me try to make a reproducer. It probably won't be minimal though...", "[notebook.zip](https://github.com/huggingface/transformers/files/11736222/notebook.zip)\r\n", "Looks like `max_steps` is not being updated", "Very strange. Here is some debug output:\r\n\r\n```\r\nCurrently training with a batch size of: 8\r\nThe following columns in the training set don't have a corresponding argument in `RobertaForSequenceClassification.forward` and have been ignored: Addr, Binary, Name, text. If Addr, Binary, Name, text are not expected by `RobertaForSequenceClassification.forward`, you can safely ignore this message.\r\n***** Running training *****\r\n Num examples = 223,431\r\n Num Epochs = 3\r\n Instantaneous batch size per device = 8\r\n Total train batch size (w. parallel, distributed & accumulation) = 8\r\n Gradient Accumulation steps = 1\r\n Total optimization steps = 83,787\r\n Number of trainable parameters = 83,452,418\r\n```\r\n\r\nTotal optimization steps is printing `max_steps`... :confused: ", "I see the problem I think:\r\n\r\n``` python\r\n if args.eval_steps and args.eval_steps < 1:\r\n args.eval_steps = math.ceil(max_steps * args.eval_steps)\r\n```\r\n\r\nSince this actually modifies `args.eval_steps`, the ratio will be lost the first time we run this code. E.g., this will set `args.eval_steps` to 66 and lose 0.1.", "Okay, I think it should be fixed now. Can you try again via the same branch?", "Still eval'ing at 66 :-(", "I did upload the notebook as a .zip above, but I'm trying to put it on colab to make things easier.", "I can't run it on colab because I'm out of free GPU usage, but I did upload it, and I think it should work if you have GPU access there:\r\n\r\nhttps://colab.research.google.com/drive/1A-MzFHIbWtrtO4tjf2GROAdfAueEHidw?usp=sharing", "re; Total optimization steps is printing max_steps... 😕, yes we don't perform gradient accumulation with this, so if you happen to get small enough that max steps < steps w/ reduction multiplier, that does make sense. \r\n\r\nLooking into this still. Thanks for the reproducer", "Thanks again, I'll need to run this in the AM to verify but I believe I've fixed this now by storing the steps away in a data struct before we loop over again: https://github.com/huggingface/transformers/compare/muellerzr-ratio?expand=1\r\n\r\nOnce verified I'll place a PR in", "I'm sorry to report that I still think it is broken!", "Might not be a simple solution then! 😉 I'll be off on holiday rest of this week, and I'll look at this again come next Tuesday. ", "Enjoy your holiday. If I have some spare time I'll see if I can figure out what is going wrong yet...", "Ping to keep fresh\r\n\r\nOn Thu, Jul 13, 2023, 10:02 AM github-actions[bot] -\r\n***@***.*** <github.edmcman.99c9f1b9d0.notifications#\r\n***@***.***> wrote:\r\n\r\n> This issue has been automatically marked as stale because it has not had\r\n> recent activity. 
If you think this still needs to be addressed please\r\n> comment on this thread.\r\n>\r\n> Please note that issues that do not follow the contributing guidelines\r\n> <https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md>\r\n> are likely to be ignored.\r\n>\r\n> —\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/huggingface/transformers/issues/24248#issuecomment-1634405576>,\r\n> or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/AAHYKZPYRUBJ3KSDAYDN3BDXQAEYXANCNFSM6AAAAAAZE5ZW3U>\r\n> .\r\n> You are receiving this because you authored the thread.Message ID:\r\n> ***@***.***>\r\n>\r\n", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Ping", "@edmcman try again, I was able to get it to evaluate at step 830 when it was reduced to 8292 total steps on my machine. ", "My script:\r\n```python\r\nimport datasets\r\nimport evaluate\r\nimport transformers\r\nfrom transformers import AutoTokenizer, AutoModelForSequenceClassification, TrainingArguments, Trainer, DataCollatorWithPadding\r\ntransformers.logging.set_verbosity_debug()\r\n\r\nmodel_name = \"huggingface/CodeBERTa-small-v1\"\r\nexp_name = \"oo-method-test-model-10percent\"\r\nsize = \"[:10%]\"\r\npush = False\r\n\r\nid2label = {0: \"func\", 1: \"method\"}\r\nlabel2id = {\"func\": 0, \"method\": 1}\r\n\r\nmodel = AutoModelForSequenceClassification.from_pretrained(model_name,\r\n id2label=id2label,\r\n label2id=label2id,\r\n num_labels=2)\r\n\r\nsmall_ds_train = datasets.load_dataset(\"ejschwartz/oo-method-test\", split=\"combined[:5%]\")\r\nsmall_ds_dev = datasets.load_dataset(\"ejschwartz/oo-method-test\", split=\"combined[:5%]\")\r\ntokenizer = AutoTokenizer.from_pretrained(model_name)\r\n\r\ndef tokenize_function(examples):\r\n return tokenizer(examples[\"Disassembly\"], padding=\"max_length\", truncation=True)\r\n\r\nsmall_ds_train = small_ds_train.map(tokenize_function, batched=True, num_proc=2).rename_column(\"Disassembly\", \"text\").rename_column(\"Type\", \"label\")\r\nsmall_ds_dev = small_ds_dev.map(tokenize_function, batched=True, num_proc=2).rename_column(\"Disassembly\", \"text\").rename_column(\"Type\", \"label\")\r\n\r\n\r\ndata_collator = DataCollatorWithPadding(tokenizer=tokenizer)\r\n\r\ntraining_args = TrainingArguments(output_dir=exp_name,\r\n auto_find_batch_size=True,\r\n per_device_train_batch_size=1024,\r\n per_device_eval_batch_size=1024,\r\n logging_first_step=False,\r\n evaluation_strategy=\"steps\",\r\n eval_steps=1 / 10.0\r\n )\r\n\r\nmetric = evaluate.load(\"accuracy\")\r\n\r\ndef compute_metrics(eval_pred):\r\n raise Exception(\"compute_metrics\")\r\n logits, labels = eval_pred\r\n predictions = np.argmax(logits, axis=-1)\r\n return metric.compute(predictions=predictions, references=labels)\r\n\r\ntrainer = Trainer(\r\n model=model,\r\n args=training_args,\r\n tokenizer=tokenizer,\r\n train_dataset=small_ds_train,\r\n eval_dataset=small_ds_dev,\r\n compute_metrics=compute_metrics,\r\n data_collator=data_collator\r\n)\r\n\r\ntrainer.train()\r\n```", "Thanks, I will try this again. 
It's possible I goofed and didn't reload the new code or something when I thought I did.", "Yes, it is working for me too now!\r\n\r\n(Edit: I forgot *I* added the exception for debugging :rofl:) ", "Great! I'll open a PR, thank you so much for your patience and clear bug report @edmcman ", "Finally fixed on main 😄 " ]
1,686
1,691
1,691
NONE
null
### System Info - `transformers` version: 4.30.1 - Platform: Linux-5.7.19-050719-generic-x86_64-with-glibc2.29 - Python version: 3.8.10 - Huggingface_hub version: 0.15.1 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 2.0.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no ### Who can help? @sgugger ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction I don't have a full example that I can share, but I think this is a simple enough problem that one may not be needed. I am using `TrainingArguments(auto_find_batch_size=True, eval_steps=0.1, per_device_train_batch_size=1024)`. With a batch size of 1024, I have 657 steps. The eval ratio appears to be resolved against this, with evaluation happening every 66 steps. However, the automatic batch size finder adjusts the batch size to 16, giving a corresponding 83787 steps. But the evaluation is still performed every 66 steps. ### Expected behavior I expected the eval steps to be recomputed when the batch size was updated. In the example above, I expected evaluation to occur every ~8000 steps.
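To make the arithmetic concrete, here is an illustrative sketch (not library code) using the 223,431 training examples and 3 epochs from the debug output quoted in the comments above. It shows why a fractional `eval_steps` has to be re-resolved after the automatic batch-size fallback rather than converted to an absolute step count once.

```python
# Illustrative only: a fractional eval_steps resolved once against the initial
# batch size goes stale after auto_find_batch_size shrinks the batch.
import math

def resolve_eval_steps(eval_steps_ratio, num_examples, batch_size, epochs):
    steps_per_epoch = math.ceil(num_examples / batch_size)
    max_steps = steps_per_epoch * epochs
    return max_steps, math.ceil(max_steps * eval_steps_ratio)

# Requested batch size of 1024 -> 657 total steps, eval every 66 steps.
print(resolve_eval_steps(0.1, num_examples=223_431, batch_size=1024, epochs=3))
# After the automatic fallback to a batch size of 8 -> 83,787 total steps,
# so evaluation should happen roughly every 8,379 steps, not every 66.
print(resolve_eval_steps(0.1, num_examples=223_431, batch_size=8, epochs=3))
```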
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24248/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/huggingface/transformers/issues/24248/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24247
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24247/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24247/comments
https://api.github.com/repos/huggingface/transformers/issues/24247/events
https://github.com/huggingface/transformers/pull/24247
1,754,940,517
PR_kwDOCUB6oc5S49wy
24,247
Fix gradient checkpointing + fp16 autocast for most models
{ "login": "younesbelkada", "id": 49240599, "node_id": "MDQ6VXNlcjQ5MjQwNTk5", "avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4", "gravatar_id": "", "url": "https://api.github.com/users/younesbelkada", "html_url": "https://github.com/younesbelkada", "followers_url": "https://api.github.com/users/younesbelkada/followers", "following_url": "https://api.github.com/users/younesbelkada/following{/other_user}", "gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}", "starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions", "organizations_url": "https://api.github.com/users/younesbelkada/orgs", "repos_url": "https://api.github.com/users/younesbelkada/repos", "events_url": "https://api.github.com/users/younesbelkada/events{/privacy}", "received_events_url": "https://api.github.com/users/younesbelkada/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,686
1,687
1,687
CONTRIBUTOR
null
# What does this PR do? This PR fixes a bug users can encounter when using gradient checkpointing under the fp16 autocast context manager. Currently, if a user trains a model using autocast and gradient checkpointing, the last layer's weights never get updated. <details><summary>Handy reproducible snippet</summary> ```python import torch from transformers import AutoModelForCausalLM model_id = "facebook/opt-350m" model = AutoModelForCausalLM.from_pretrained(model_id).to(0) model.gradient_checkpointing_enable() model.train() assert model.training and model.is_gradient_checkpointing optimizer = torch.optim.Adam(model.parameters(), lr=1e-3) with torch.cuda.amp.autocast(True, dtype=torch.float16): dummy_input = torch.LongTensor([[0, 1, 0, 1]]).to(0) model.train() logits = model(dummy_input).logits loss = logits.sum() loss.backward() optimizer.step() for n, param in model.named_parameters(): if param.grad is None: print(n) ``` </details> As discussed internally, the fix seems to be to force-set `use_reentrant=False` when calling gradient checkpointing. Setting that boolean to False lifts the requirement that the input tensors have `requires_grad=True`, which applies when `use_reentrant=True`. According to the PyTorch team, `use_reentrant=True` led to some silent bugs, and they plan to remove that boolean in upcoming releases and use `False` by default. This might be problematic for users that train adapters (using PEFT, for example), where they will see some training performance downside. I propose a PoC to fix this for the most common architectures until PyTorch removes that argument in future releases. For more context, users that train models using PEFT end up using autocast inside the Trainer as they use 4bit / 8bit base models. Related: #23990
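A minimal sketch of the behaviour change on a toy module (this is not the actual transformers diff): with `use_reentrant=False`, gradient checkpointing no longer needs the checkpointed inputs to require grad, so parameter gradients still flow under autocast.

```python
# Toy illustration, not the transformers implementation: non-reentrant
# checkpointing keeps parameter gradients even when the checkpointed input
# does not require grad (the situation hit under autocast + gradient checkpointing).
import torch
from torch.utils.checkpoint import checkpoint

class CheckpointedBlock(torch.nn.Module):
    def __init__(self, hidden=16):
        super().__init__()
        self.layer = torch.nn.Linear(hidden, hidden)

    def forward(self, x):
        # use_reentrant=False is the change described in this PR
        return checkpoint(self.layer, x, use_reentrant=False)

block = CheckpointedBlock()
with torch.autocast("cpu", dtype=torch.bfloat16):
    out = block(torch.randn(2, 16))
out.float().sum().backward()
print(block.layer.weight.grad is not None)  # True with use_reentrant=False
```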
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24247/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24247/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24247", "html_url": "https://github.com/huggingface/transformers/pull/24247", "diff_url": "https://github.com/huggingface/transformers/pull/24247.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24247.patch", "merged_at": 1687359900000 }
https://api.github.com/repos/huggingface/transformers/issues/24246
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24246/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24246/comments
https://api.github.com/repos/huggingface/transformers/issues/24246/events
https://github.com/huggingface/transformers/issues/24246
1,754,867,246
I_kwDOCUB6oc5omSYu
24,246
Training not converging with `transformers==4.26.1`
{ "login": "bhavitvyamalik", "id": 19718818, "node_id": "MDQ6VXNlcjE5NzE4ODE4", "avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bhavitvyamalik", "html_url": "https://github.com/bhavitvyamalik", "followers_url": "https://api.github.com/users/bhavitvyamalik/followers", "following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}", "gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}", "starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions", "organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs", "repos_url": "https://api.github.com/users/bhavitvyamalik/repos", "events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}", "received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi, @bhavitvyamalik thanks for raising an issue! \r\n\r\nQuestions about customising scripts for your own requirements e.g. optimal warmup steps, are best placed in our [forums](https://discuss.huggingface.co/). We try to reserve the github issues for feature requests and bug reports.\r\n\r\nIf the training works for the most recent version of transformers, then there's really nothing for us to do. ", "I understand and appreciate your response. I apologize for any confusion caused by raising the issue here. Since I am utilizing `adapter-transformers` for my project, which is built upon `transformers v4.26.1`, I am primarily relying on the functionality of `transformers` itself, specifically for fine-tuning Hubert. My intention was to inquire whether there might be a bug within the Trainer that could potentially explain why my training is not converging as expected or the changes I made to the code (described above) as it was giving problems with v4.26.1, hence why I brought up the issue in this context. Thank you!\r\n", "@bhavitvyamalik It's possible there was a bug. If there was, it's now been resolved and so it's not something we would spend time digging into. If you wanted to dig into this yourself, you could always use `git bisect` to find which commit introduced a change of behaviour.\r\n\r\nWe're not responsible for maintenance of third-party libraries built upon this one. I would suggest opening an issue in the `adapter-transformers` library, possibly asking for the pinned transformers version to be increased. ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,686
1,689
1,689
CONTRIBUTOR
null
### System Info - `transformers` version: 4.26.1 - Platform: Linux-3.10.0-1160.71.1.el7.x86_64-x86_64-with-glibc2.17 - Python version: 3.8.16 - Huggingface_hub version: 0.15.1 - PyTorch version (GPU?): 1.9.0+cu102 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help? @sanchit-gandhi ### Information - [X] The official example scripts (hardly 3-4 line of code is changed) - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [] My own task or dataset (give details below) ### Reproduction ``` CUDA_VISIBLE_DEVICES=1 python run_speech_recognition_ctc.py \ --train_dataset_name="./data_sub/rs_en/train" \ --model_name_or_path="facebook/hubert-base-ls960" \ --train_dataset_name ="./data_sub/rs_en/test" \ --output_dir="./ft-full-run-test-hubert" \ --overwrite_output_dir \ --num_train_epochs="5" \ --per_device_train_batch_size="4" \ --gradient_accumulation_steps="2" \ --learning_rate="3e-4" \ --warmup_steps="300" \ --evaluation_strategy="steps" \ --text_column_name="transcription" \ --length_column_name="input_length" \ --save_steps="400" \ --eval_steps="25" \ --layerdrop="0.0" \ --save_total_limit="3" \ --freeze_feature_encoder \ --chars_to_ignore , ? . ! \ --group_by_length \ --do_train --do_eval ``` In `run_speech_recognition_ctc.py` I made these minor changes in `DataCollatorCTCWithPadding` as it won't run otherwise. With `transformers==4.30.1` it runs perfectly but I currently have 4.26.1 and the eval loss is not going down even after 3 epochs. My dataset has just 1500 samples of librispeech. ``` def __call__(self, features: List[Dict[str, Union[List[int], torch.Tensor]]]) -> Dict[str, torch.Tensor]: # split inputs and labels since they have to be of different lenghts and need # different padding methods input_features = [{"input_ids": np.array(feature["input_values"])} for feature in features] label_features = [{"input_ids": feature["labels"]} for feature in features] output = {} batch = self.processor.pad( input_features, padding=self.padding, pad_to_multiple_of=self.pad_to_multiple_of, return_tensors="pt", ) labels_batch = self.processor.pad( label_features, padding=self.padding, pad_to_multiple_of=self.pad_to_multiple_of_labels, return_tensors="pt", ) # replace padding with -100 to ignore loss correctly labels = labels_batch["input_ids"].masked_fill(labels_batch.attention_mask.ne(1), -100) output["labels"] = labels output["input_values"] = batch["input_ids"] # output["attention_mask"] = batch["attention_mask"] if "attention_mask" in batch: output["attention_mask"] = batch["attention_mask"].to(torch.long) return output ``` ### Expected behavior Eval loss should not plateau at 2.91. It doesn't reduce any further. Also, what's ideal number of `warmup_steps` that you'd recommend?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24246/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24246/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24245
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24245/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24245/comments
https://api.github.com/repos/huggingface/transformers/issues/24245/events
https://github.com/huggingface/transformers/issues/24245
1,754,783,296
I_kwDOCUB6oc5ol95A
24,245
Qlora on open llama 13b fails
{ "login": "nivibilla", "id": 26687662, "node_id": "MDQ6VXNlcjI2Njg3NjYy", "avatar_url": "https://avatars.githubusercontent.com/u/26687662?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nivibilla", "html_url": "https://github.com/nivibilla", "followers_url": "https://api.github.com/users/nivibilla/followers", "following_url": "https://api.github.com/users/nivibilla/following{/other_user}", "gists_url": "https://api.github.com/users/nivibilla/gists{/gist_id}", "starred_url": "https://api.github.com/users/nivibilla/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nivibilla/subscriptions", "organizations_url": "https://api.github.com/users/nivibilla/orgs", "repos_url": "https://api.github.com/users/nivibilla/repos", "events_url": "https://api.github.com/users/nivibilla/events{/privacy}", "received_events_url": "https://api.github.com/users/nivibilla/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @nivibilla, \r\n\r\nPlease make sure to search the issues first, as it's possible they have previously been reported and resolved e.g.: \r\n#24050 \r\n#23935 \r\n\r\nCould you try installing accelerate, peft and transformers from source, and rerunning your script\r\n```\r\npip install git+https://github.com/huggingface/peft.git git+https://github.com/huggingface/transformers.git git+https://github.com/huggingface/accelerate.git\r\n```", "sorry mb, I am already installing from source so Im not sure what went wrong. In any case, will test again and let you know", "I did as you asked @amyeroberts , installed from source. But I still get the same error. ", "```\r\n!pip install -q torch==2.0.1 torchvision torchaudio\r\n!pip install -q -U bitsandbytes\r\n!pip install -q -U git+https://github.com/huggingface/transformers.git\r\n!pip install -q -U git+https://github.com/huggingface/peft.git\r\n!pip install -q -U git+https://github.com/huggingface/accelerate.git\r\n!pip install -q -U git+https://github.com/huggingface/datasets.git\r\n!pip install -q -U einops\r\n!pip install -q -U sentencepiece\r\n```", "Was fixed when I used this particular branch\r\n\r\n```\r\n!pip install git+https://github.com/huggingface/transformers@de9255de27abfcae4a1f816b904915f0b1e23cd9\r\n```\r\n\r\nWill this branch be merged?", "Note I am using 4bit quantisation in training, which may be the cause of the issue as mentioned in #23935", "Another issue I have encountered with the branch I tested is that it doesn't save a adapter_config.json for the checkpoints.", "Update:\r\n\r\nfixed the adapter_config saving issue by\r\n\r\n```\r\nfrom transformers import TrainerCallback\r\nclass PeftSavingCallback(TrainerCallback):\r\n def on_save(self, args, state, control, **kwargs):\r\n checkpoint_path = os.path.join(args.output_dir, f\"checkpoint-{state.global_step}\")\r\n kwargs[\"model\"].save_pretrained(checkpoint_path)\r\n\r\n if \"pytorch_model.bin\" in os.listdir(checkpoint_path):\r\n os.remove(os.path.join(checkpoint_path, \"pytorch_model.bin\"))\r\n```\r\n\r\nHowever the issue still remains when using the normal installation instead of the particular commit mentioned", "> Was fixed when I used this particular branch\r\n\r\nThat's great to hear! Peculiar that it didn't work from source though 🤔 \r\n\r\n> Will this branch be merged?\r\n\r\n[This commit has already been merged](https://github.com/huggingface/transformers/commit/de9255de27abfcae4a1f816b904915f0b1e23cd9), I believe, and is part of the latest release. Could you confirm the version of transformers that was installed when the problem was happening initially? \r\n\r\n> Another issue I have encountered with the branch I tested is that it doesn't save a adapter_config.json for the checkpoints.\r\n\r\nHmmmm.... I have no idea about this cc @pacman100 who knows a lot more about Peft and Trainer :) ", "> Could you confirm the version?\r\n\r\nI did transformers.__version__ and got ```4.31.0.dev0```", "I had this same issue today, always stopped around 1 epoch with the same error. I was trying to fine-tune llama-13b as well, on my own dataset, which I know is correctly formatted.", "Using git source pip install too. Trying `!pip install git+https://github.com/huggingface/transformers@de9255de27abfcae4a1f816b904915f0b1e23cd9` is currently working on the second epoch. Thank you @nivibilla!", "cc @younesbelkada As you've been working on the related issue ", "@richardr1126 are your checkpoints saving properly? 
I had to write a custom call back as the adapter_config wasn't being written ", "> @richardr1126 are your checkpoints saving properly? I had to write a custom call back as the adapter_config wasn't being written\r\n\r\nYeah, I used your PeftSavingCallback below and added it to the callbacks param in the Trainer. It created the adapter_config and adapter_model and saved them into the `checkpoint-XXX` folder after every save step, which I set to 100. I am using Colab so I downloaded the adapter_model and config to my local computer, then uploaded it to Hugging Face as a LoRA adapter using the Upload files button on the model repo.\r\n\r\n```\r\nfrom trl import SFTTrainer\r\nfrom transformers import TrainerCallback\r\nimport os\r\n\r\nclass PeftSavingCallback(TrainerCallback):\r\n def on_save(self, args, state, control, **kwargs):\r\n checkpoint_path = os.path.join(args.output_dir, f\"checkpoint-{state.global_step}\")\r\n kwargs[\"model\"].save_pretrained(checkpoint_path)\r\n\r\n if \"pytorch_model.bin\" in os.listdir(checkpoint_path):\r\n os.remove(os.path.join(checkpoint_path, \"pytorch_model.bin\"))\r\n\r\ntrainer = SFTTrainer(\r\n model=model,\r\n train_dataset=sql,\r\n peft_config=peft_config,\r\n dataset_text_field=\"text\",\r\n max_seq_length=176,\r\n tokenizer=tokenizer,\r\n args=training_arguments,\r\n callbacks=[PeftSavingCallback]\r\n)\r\n```", "> Interestingly failed at exactly 1 Epoch\r\n\r\nHello @nivibilla, PR #24415 should fix this. Can you confirm the same?", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "I think this works. Haven't tested though. Will close for now " ]
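A minimal sketch of loading one of those saved adapter checkpoints back for inference. The base model name and checkpoint path below are assumptions; substitute whatever you actually fine-tuned and wherever `PeftSavingCallback` wrote the adapter:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model_name = "openlm-research/open_llama_13b"   # assumed base model
adapter_path = "./checkpoints/checkpoint-100"        # hypothetical folder written by PeftSavingCallback

tokenizer = AutoTokenizer.from_pretrained(base_model_name)
base_model = AutoModelForCausalLM.from_pretrained(
    base_model_name, torch_dtype=torch.float16, device_map="auto"
)
# Attach the LoRA adapter weights saved during training.
model = PeftModel.from_pretrained(base_model, adapter_path)
model.eval()

inputs = tokenizer("def fibonacci(n):", return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```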
1,686
1,689
1,689
NONE
null
### System Info Installed by ```!pip install -q -U git+https://github.com/huggingface/transformers.git``` On databricks ### Who can help? _No response_ ### Information - [ ] The official example scripts - [x] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [x] My own task or dataset (give details below) ### Reproduction ``` import transformers trainer = transformers.Trainer( model=peft_model, train_dataset=data["train"], args=transformers.TrainingArguments( save_steps=250, per_device_train_batch_size=2, gradient_accumulation_steps=8, num_train_epochs=5, # max_steps=5, learning_rate=2e-4, fp16=True, logging_steps=1, output_dir=models[model_name]['folder_name'], optim="paged_adamw_8bit" ), data_collator=transformers.DataCollatorForLanguageModeling(tokenizer, mlm=False), ) model.config.use_cache = False # silence the warnings. Please re-enable for inference! trainer.train() ``` ``` --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) File <command-412498178049036>:21 3 trainer = transformers.Trainer( 4 model=peft_model, 5 train_dataset=data["train"], (...) 18 data_collator=transformers.DataCollatorForLanguageModeling(tokenizer, mlm=False), 19 ) 20 model.config.use_cache = False # silence the warnings. Please re-enable for inference! ---> 21 trainer.train() File /local_disk0/.ephemeral_nfs/envs/pythonEnv-35a3008b-a999-41db-a8be-1e0597d78a6b/lib/python3.10/site-packages/transformers/trainer.py:1537, in Trainer.train(self, resume_from_checkpoint, trial, ignore_keys_for_eval, **kwargs) 1532 self.model_wrapped = self.model 1534 inner_training_loop = find_executable_batch_size( 1535 self._inner_training_loop, self._train_batch_size, args.auto_find_batch_size 1536 ) -> 1537 return inner_training_loop( 1538 args=args, 1539 resume_from_checkpoint=resume_from_checkpoint, 1540 trial=trial, 1541 ignore_keys_for_eval=ignore_keys_for_eval, 1542 ) File /local_disk0/.ephemeral_nfs/envs/pythonEnv-35a3008b-a999-41db-a8be-1e0597d78a6b/lib/python3.10/site-packages/transformers/trainer.py:1860, in Trainer._inner_training_loop(self, batch_size, args, resume_from_checkpoint, trial, ignore_keys_for_eval) 1855 nn.utils.clip_grad_norm_( 1856 amp.master_params(self.optimizer), 1857 args.max_grad_norm, 1858 ) 1859 else: -> 1860 self.accelerator.clip_grad_norm_( 1861 model.parameters(), 1862 args.max_grad_norm, 1863 ) 1865 # Optimizer step 1866 optimizer_was_run = True File /local_disk0/.ephemeral_nfs/envs/pythonEnv-35a3008b-a999-41db-a8be-1e0597d78a6b/lib/python3.10/site-packages/accelerate/accelerator.py:1908, in Accelerator.clip_grad_norm_(self, parameters, max_norm, norm_type) 1904 elif self.distributed_type == DistributedType.DEEPSPEED: 1905 # `accelerator.backward(loss)` is doing that automatically. Therefore, its implementation is not needed 1906 # We cannot return the gradient norm because DeepSpeed does it. 
1907 return None -> 1908 self.unscale_gradients() 1909 return torch.nn.utils.clip_grad_norm_(parameters, max_norm, norm_type=norm_type) File /local_disk0/.ephemeral_nfs/envs/pythonEnv-35a3008b-a999-41db-a8be-1e0597d78a6b/lib/python3.10/site-packages/accelerate/accelerator.py:1871, in Accelerator.unscale_gradients(self, optimizer) 1869 while isinstance(opt, AcceleratedOptimizer): 1870 opt = opt.optimizer -> 1871 self.scaler.unscale_(opt) File /local_disk0/.ephemeral_nfs/envs/pythonEnv-35a3008b-a999-41db-a8be-1e0597d78a6b/lib/python3.10/site-packages/torch/cuda/amp/grad_scaler.py:275, in GradScaler.unscale_(self, optimizer) 272 optimizer_state = self._per_optimizer_states[id(optimizer)] 274 if optimizer_state["stage"] is OptState.UNSCALED: --> 275 raise RuntimeError("unscale_() has already been called on this optimizer since the last update().") 276 elif optimizer_state["stage"] is OptState.STEPPED: 277 raise RuntimeError("unscale_() is being called after step().") RuntimeError: unscale_() has already been called on this optimizer since the last update(). ``` Interestingly failed at exactly 1 Epoch ### Expected behavior Run normally?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24245/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24245/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24244
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24244/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24244/comments
https://api.github.com/repos/huggingface/transformers/issues/24244/events
https://github.com/huggingface/transformers/issues/24244
1,754,745,518
I_kwDOCUB6oc5ol0qu
24,244
Make classifier backbone dynamic maskformer, mask2former
{ "login": "tanzzilaalam", "id": 69534473, "node_id": "MDQ6VXNlcjY5NTM0NDcz", "avatar_url": "https://avatars.githubusercontent.com/u/69534473?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tanzzilaalam", "html_url": "https://github.com/tanzzilaalam", "followers_url": "https://api.github.com/users/tanzzilaalam/followers", "following_url": "https://api.github.com/users/tanzzilaalam/following{/other_user}", "gists_url": "https://api.github.com/users/tanzzilaalam/gists{/gist_id}", "starred_url": "https://api.github.com/users/tanzzilaalam/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tanzzilaalam/subscriptions", "organizations_url": "https://api.github.com/users/tanzzilaalam/orgs", "repos_url": "https://api.github.com/users/tanzzilaalam/repos", "events_url": "https://api.github.com/users/tanzzilaalam/events{/privacy}", "received_events_url": "https://api.github.com/users/tanzzilaalam/received_events", "type": "User", "site_admin": false }
[ { "id": 2648621985, "node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1", "url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request", "name": "Feature request", "color": "FBCA04", "default": false, "description": "Request for a new feature" } ]
closed
false
null
[]
[ "Thank you for solving this issue. But there is still problem.\r\nThe supported backbones variable hinders to allow any backbones other than\r\nswin.\r\nPlease check this line. I had to write child class and had to modify this\r\nline to solve this problem.\r\n\r\nhttps://github.com/huggingface/transformers/blob/main/src/transformers/models/mask2former/configuration_mask2former.py#L124\r\n\r\n\r\nOn Tue, 27 Jun 2023, 11:34 pm Sylvain Gugger, ***@***.***>\r\nwrote:\r\n\r\n> Closed #24244 <https://github.com/huggingface/transformers/issues/24244>\r\n> as completed via #24259\r\n> <https://github.com/huggingface/transformers/pull/24259>.\r\n>\r\n> —\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/huggingface/transformers/issues/24244#event-9655252543>,\r\n> or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/AQSQGCPK433TDSJZKGBL7VDXNMKRDANCNFSM6AAAAAAZEYRWOA>\r\n> .\r\n> You are receiving this because you authored the thread.Message ID:\r\n> ***@***.***>\r\n>\r\n", "cc @amyeroberts ", "Hi @tanzzilaalam, you're completely right. I've opened a PR - #24532 - which should resolve this and allow you to pass in any `backbone_config`. " ]
1,686
1,687
1,687
NONE
null
### Feature request Make the classifier backbone dynamic for MaskFormer and Mask2Former. Currently only the Swin Transformer is supported. link example: https://github.com/huggingface/transformers/blob/main/src/transformers/models/mask2former/modeling_mask2former.py#L1393 ### Motivation If this feature request is successful, it will make it easier to benchmark new classifier backbones.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24244/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24244/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24243
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24243/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24243/comments
https://api.github.com/repos/huggingface/transformers/issues/24243/events
https://github.com/huggingface/transformers/issues/24243
1,754,733,529
I_kwDOCUB6oc5olxvZ
24,243
facebook\opt layer norm
{ "login": "CompressTeam", "id": 50616467, "node_id": "MDQ6VXNlcjUwNjE2NDY3", "avatar_url": "https://avatars.githubusercontent.com/u/50616467?v=4", "gravatar_id": "", "url": "https://api.github.com/users/CompressTeam", "html_url": "https://github.com/CompressTeam", "followers_url": "https://api.github.com/users/CompressTeam/followers", "following_url": "https://api.github.com/users/CompressTeam/following{/other_user}", "gists_url": "https://api.github.com/users/CompressTeam/gists{/gist_id}", "starred_url": "https://api.github.com/users/CompressTeam/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/CompressTeam/subscriptions", "organizations_url": "https://api.github.com/users/CompressTeam/orgs", "repos_url": "https://api.github.com/users/CompressTeam/repos", "events_url": "https://api.github.com/users/CompressTeam/events{/privacy}", "received_events_url": "https://api.github.com/users/CompressTeam/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @CompressTeam, thanks for raising this issue!\r\n\r\nI believe this behaviour is likely coming from the fact that the layer norm layers are instantiated with `elementwise_affine=True` e.g. [here](https://github.com/huggingface/transformers/blob/fdd78d91532dffc4b2493d3b9bd9e19aaf78fe6b/src/transformers/models/opt/modeling_opt.py#L292) (as default [config value is `True`](https://github.com/huggingface/transformers/blob/fdd78d91532dffc4b2493d3b9bd9e19aaf78fe6b/src/transformers/models/opt/configuration_opt.py#L120)). This instantiates the layer with [all weight values as 1, and biases as 0](https://pytorch.org/docs/stable/generated/torch.nn.LayerNorm.html). \r\n\r\nPlaying quickly with the snippet provided, I can see that the biases are all different values, so it would seem that either only the biases were updated when training the model, there's been a error in weight conversion or an issue with weight saving. \r\n\r\nI'll hand over to @younesbelkada who added the model as is most familiar with layer norm related logic like `config._remove_final_layer_norm`\r\n\r\n\r\n", "Hi @CompressTeam \r\n\r\nI think that this is expected, see this interesting thread from the authors: https://github.com/huggingface/transformers/issues/17653 and in particular these 2 messages: https://github.com/huggingface/transformers/issues/17653#issuecomment-1163065167 / https://github.com/huggingface/transformers/issues/17653#issuecomment-1163293340 from what I have understood the models somehow learned to get a layer norm of 1 ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,686
1,689
1,689
NONE
null
### System Info transformers version 4.28.1. I notice that in the facebook\optX models the LayerNorm weight is equal to 1 in all layers, means no parameter changed. I checked the sizes 125m, 1.3b, 2.7b, 6.7b, 13b ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction from transformers import OPTModel import torch model = OPTModel.from_pretrained("facebook/opt-13b") for m in model.modules(): if isinstance(m,torch.nn.LayerNorm): (m.weight == 1).all() ### Expected behavior I get: (Expected to get different values) tensor(True) tensor(True) tensor(True) tensor(True) tensor(True) tensor(True) tensor(True) tensor(True) tensor(True) tensor(True) tensor(True) tensor(True) tensor(True) tensor(True) tensor(True) tensor(True) tensor(True) tensor(True) tensor(True) tensor(True) tensor(True) tensor(True) tensor(True) tensor(True) tensor(True) tensor(True) tensor(True)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24243/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24243/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24242
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24242/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24242/comments
https://api.github.com/repos/huggingface/transformers/issues/24242/events
https://github.com/huggingface/transformers/issues/24242
1,754,723,475
I_kwDOCUB6oc5olvST
24,242
Error in finetuning starcoder with 8 GPU 24GB Memory
{ "login": "22Mukesh22", "id": 68140619, "node_id": "MDQ6VXNlcjY4MTQwNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/68140619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/22Mukesh22", "html_url": "https://github.com/22Mukesh22", "followers_url": "https://api.github.com/users/22Mukesh22/followers", "following_url": "https://api.github.com/users/22Mukesh22/following{/other_user}", "gists_url": "https://api.github.com/users/22Mukesh22/gists{/gist_id}", "starred_url": "https://api.github.com/users/22Mukesh22/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/22Mukesh22/subscriptions", "organizations_url": "https://api.github.com/users/22Mukesh22/orgs", "repos_url": "https://api.github.com/users/22Mukesh22/repos", "events_url": "https://api.github.com/users/22Mukesh22/events{/privacy}", "received_events_url": "https://api.github.com/users/22Mukesh22/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @22Mukesh22 \r\nThanks for the issue, Per my understanding you want to use NPP(Naive Pipeline Parallelism) \r\nFor reference check: https://github.com/huggingface/accelerate/issues/1515#issuecomment-1584515731 and the entire thread\r\nIn this case you should run your script in a non-distributed mode. Please run your script with just `python finetune.py xxxx` and let us know how it goes", "But , with single GPU , it will take lot and lot of time to finish the training , I want to make sure it uses all my GPU , but its not happening .\r\nI will update the same to you , for single GPU ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,686
1,689
1,689
NONE
null
### System Info I am trying finetuning starcoder , with 8 GPU P40 (each 24GB Memory) , I am using "https://github.com/Xirider/finetune-gpt2xl " and also referring the "https://github.com/bigcode-project/starcoder" Facing error with both. The error says : raise ValueError( ValueError: You can't train a model that has been loaded in 8-bit precision on multiple devices in any distributed mode. In order to use 8-bit models that have been loaded across multiple GPUs the solution is to use Naive Pipeline Parallelism. Therefore you should not specify that you are under any distributed regime in your accelerate config. @younesbelkada , please help on the same Thanks!!! ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction /group-volume/orc_srib/mukesh.sm/STARCODER/finetune-gpt2xl/deep/lib/python3.8/site-packages/accelerate/accelerator.py" python -m torch.distributed.launch \ --nproc_per_node 8 finetune.py \ --model_path="bigcode/starcoder"\ --dataset_name="HuggingFaceH4/CodeAlpaca_20K"\ --split="train"\ --size_valid_set 10000\ --streaming \ --seq_length 2048\ --max_steps 1000\ --batch_size 4\ --input_column_name="prompt"\ --output_column_name="completion"\ --gradient_accumulation_steps 16\ --learning_rate 1e-4\ --lr_scheduler_type="cosine"\ --num_warmup_steps 100\ --weight_decay 0.05\ --output_dir="./checkpoints" \ ### Expected behavior I am not able to understand why its not getting finetuned either 8 bit or 4 bit . Because full precision if i try gives out of memory error . Please help with the right steps to finetune the starcoder .
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24242/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24242/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24241
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24241/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24241/comments
https://api.github.com/repos/huggingface/transformers/issues/24241/events
https://github.com/huggingface/transformers/pull/24241
1,754,721,329
PR_kwDOCUB6oc5S4NCD
24,241
Safely import pytest in testing_utils.py
{ "login": "amyeroberts", "id": 22614925, "node_id": "MDQ6VXNlcjIyNjE0OTI1", "avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4", "gravatar_id": "", "url": "https://api.github.com/users/amyeroberts", "html_url": "https://github.com/amyeroberts", "followers_url": "https://api.github.com/users/amyeroberts/followers", "following_url": "https://api.github.com/users/amyeroberts/following{/other_user}", "gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}", "starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions", "organizations_url": "https://api.github.com/users/amyeroberts/orgs", "repos_url": "https://api.github.com/users/amyeroberts/repos", "events_url": "https://api.github.com/users/amyeroberts/events{/privacy}", "received_events_url": "https://api.github.com/users/amyeroberts/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,686
1,686
1,686
COLLABORATOR
null
# What does this PR do? After merging in #23271, hitting `TAB` to autocomplete `from transformers.` in an ipython session results in a runtime error. It appears that this is because `_pytest` and `pytest` are imported in `testing_utils.py`. Although there are no direct imports of `transformers.testing_utils`, it seems this module is read when using autocomplete. Fixes #24227 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests?
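A minimal sketch of the guarded-import pattern such a fix typically relies on. The names below are illustrative and not the exact code in `testing_utils.py`:

```python
# Test-only dependencies should not break plain library imports
# (e.g. IPython reading the module during tab-completion).
try:
    import pytest  # noqa: F401
    _pytest_available = True
except ImportError:
    pytest = None
    _pytest_available = False


def require_pytest(fn):
    """Fail with a clear message at call time instead of at import time."""
    def wrapper(*args, **kwargs):
        if not _pytest_available:
            raise RuntimeError("This helper needs `pytest`; install it with `pip install pytest`.")
        return fn(*args, **kwargs)
    return wrapper
```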
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24241/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24241/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24241", "html_url": "https://github.com/huggingface/transformers/pull/24241", "diff_url": "https://github.com/huggingface/transformers/pull/24241.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24241.patch", "merged_at": 1686662888000 }
https://api.github.com/repos/huggingface/transformers/issues/24240
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24240/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24240/comments
https://api.github.com/repos/huggingface/transformers/issues/24240/events
https://github.com/huggingface/transformers/issues/24240
1,754,687,694
I_kwDOCUB6oc5olmjO
24,240
Tensorflow and Torch yield significantly different results for same model
{ "login": "DavidHuebner", "id": 14200897, "node_id": "MDQ6VXNlcjE0MjAwODk3", "avatar_url": "https://avatars.githubusercontent.com/u/14200897?v=4", "gravatar_id": "", "url": "https://api.github.com/users/DavidHuebner", "html_url": "https://github.com/DavidHuebner", "followers_url": "https://api.github.com/users/DavidHuebner/followers", "following_url": "https://api.github.com/users/DavidHuebner/following{/other_user}", "gists_url": "https://api.github.com/users/DavidHuebner/gists{/gist_id}", "starred_url": "https://api.github.com/users/DavidHuebner/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/DavidHuebner/subscriptions", "organizations_url": "https://api.github.com/users/DavidHuebner/orgs", "repos_url": "https://api.github.com/users/DavidHuebner/repos", "events_url": "https://api.github.com/users/DavidHuebner/events{/privacy}", "received_events_url": "https://api.github.com/users/DavidHuebner/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @DavidHuebner, this is a known issue, and it's unfortunately unavoidable! Floating point calculations are inherently imprecise, which means that changing the specific kernels you use and the order of operations will slightly alter the result. Because TF uses different kernels to Torch, and because TF compiles models by default (which means that some operations may be fused or reordered), there will always be a numerical difference in their outputs. In general, we find that the final model outputs (e.g. token logits for `distilbert`) remain relatively similar, even though hidden states can vary by ~1e-3 or even ~1e-2 in some cases.\r\n\r\nIf you want to reduce the error, one useful tip is that TensorFlow enables TensorFloat-32 computation by default (which increases speed but reduces precision on newer GPUs), but Torch does not. Adding the line `tf.config.experimental.enable_tensor_float_32_execution(False)` to your TF code may make the results more similar to the Torch outputs, but even with that I suspect the error will be in the range 1e-4 to 1e-5, and reducing the error to ~1e-7 for any real model is probably impossible!", "Alright. Thanks for the response and explanation." ]
1,686
1,686
1,686
NONE
null
### System Info - `transformers` version: 4.20.1 - Platform: Linux-4.18.0-425.3.1.el8.x86_64-x86_64-with-glibc2.10 - Python version: 3.8.5 - Huggingface_hub version: 0.11.1 - PyTorch version (GPU?): 1.13.0 (True) - Tensorflow version (GPU?): 2.7.4 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: True - Using distributed or parallel set-up in script?: False ### Who can help? I am trying to convert a PyTorch transformer model to TensorFlow. When comparing the model outputs, I observe significant differences in all output values. To reproduce, I compared the model output for the same model (e.g. `distilbert-base-uncased`) once loaded with Tensorflow (2.7.4) and once with torch (1.13.0), see script below. Results are very different. - Is this an expected outcome? - Are there any strategies to mitigate these differences? @sgugger @Rocketknight1 ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Execute the following Python code. ```py import numpy as np import tensorflow as tf from transformers import TFAutoModel, AutoTokenizer, AutoModel import torch model_path = "distilbert-base-uncased" tf_model = TFAutoModel.from_pretrained(model_path) # The same problems occur with from_pt=True pt_model = AutoModel.from_pretrained(model_path) tokenizer = AutoTokenizer.from_pretrained(model_path) payload = ["This is a great sentence embedding"] encoded_input_tf = tokenizer(payload, return_tensors='tf') encoded_input_pt= tokenizer(payload, return_tensors='pt') tf_output = tf_model(**encoded_input_tf) with torch.no_grad(): pt_output = pt_model(**encoded_input_pt) np.testing.assert_allclose(pt_output.last_hidden_state, tf_output.last_hidden_state) ``` yields ``` AssertionError: Not equal to tolerance rtol=1e-07, atol=0 Mismatched elements: 7680 / 7680 (100%) Max absolute difference: 0.00231361 Max relative difference: 4.5563045 x: array([[[-0.259016, -0.081947, -0.052371, ..., -0.021481, 0.184779, 0.368155], [-0.412638, -0.111859, -0.17412 , ..., -0.204094, 0.356639,... y: array([[[-0.25897 , -0.081805, -0.052341, ..., -0.021413, 0.185112, 0.367904], [-0.412646, -0.111567, -0.174027, ..., -0.204349, 0.356727,... ``` ### Expected behavior Both models outputs should be similar. The `np.testing.assert_allclose()` should not raise an `AssertionError`.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24240/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24240/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24239
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24239/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24239/comments
https://api.github.com/repos/huggingface/transformers/issues/24239/events
https://github.com/huggingface/transformers/pull/24239
1,754,682,769
PR_kwDOCUB6oc5S4Ee2
24,239
deprecate `use_mps_device`
{ "login": "pacman100", "id": 13534540, "node_id": "MDQ6VXNlcjEzNTM0NTQw", "avatar_url": "https://avatars.githubusercontent.com/u/13534540?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pacman100", "html_url": "https://github.com/pacman100", "followers_url": "https://api.github.com/users/pacman100/followers", "following_url": "https://api.github.com/users/pacman100/following{/other_user}", "gists_url": "https://api.github.com/users/pacman100/gists{/gist_id}", "starred_url": "https://api.github.com/users/pacman100/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pacman100/subscriptions", "organizations_url": "https://api.github.com/users/pacman100/orgs", "repos_url": "https://api.github.com/users/pacman100/repos", "events_url": "https://api.github.com/users/pacman100/events{/privacy}", "received_events_url": "https://api.github.com/users/pacman100/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,686
1,686
1,686
CONTRIBUTOR
null
# What does this PR do? 1. Deprecate `use_mps_device`. The `mps` device will be used by default if available, similar to the way the `cuda` device is used. Therefore, no action is required from the user.
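For reference, a small sketch of the device selection this implies, written out explicitly (the Trainer handles the equivalent internally):

```python
import torch

# Default device pick: cuda first, then mps, then cpu.
if torch.cuda.is_available():
    device = torch.device("cuda")
elif torch.backends.mps.is_available():
    device = torch.device("mps")
else:
    device = torch.device("cpu")
print(device)
```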
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24239/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24239/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24239", "html_url": "https://github.com/huggingface/transformers/pull/24239", "diff_url": "https://github.com/huggingface/transformers/pull/24239.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24239.patch", "merged_at": 1686665916000 }
https://api.github.com/repos/huggingface/transformers/issues/24238
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24238/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24238/comments
https://api.github.com/repos/huggingface/transformers/issues/24238/events
https://github.com/huggingface/transformers/pull/24238
1,754,644,156
PR_kwDOCUB6oc5S375L
24,238
Generate: GenerationConfig can overwrite attributes at from_pretrained time
{ "login": "gante", "id": 12240844, "node_id": "MDQ6VXNlcjEyMjQwODQ0", "avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gante", "html_url": "https://github.com/gante", "followers_url": "https://api.github.com/users/gante/followers", "following_url": "https://api.github.com/users/gante/following{/other_user}", "gists_url": "https://api.github.com/users/gante/gists{/gist_id}", "starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gante/subscriptions", "organizations_url": "https://api.github.com/users/gante/orgs", "repos_url": "https://api.github.com/users/gante/repos", "events_url": "https://api.github.com/users/gante/events{/privacy}", "received_events_url": "https://api.github.com/users/gante/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,686
1,686
1,686
MEMBER
null
# What does this PR do? Fixes #24104 As with other configuration files, we should allow overwriting attributes at `from_pretrained` time. The latest change to the `GenerationConfig` loading code disallowed it -- this PR fixes it and adds a test.
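A minimal, self-contained sketch of the behaviour this restores, using a locally saved config rather than a Hub checkpoint:

```python
from transformers import GenerationConfig

# Save a generation config, then reload it while overriding attributes,
# which is what from_pretrained is expected to allow again.
GenerationConfig(max_new_tokens=20).save_pretrained("./gen_cfg")
overridden = GenerationConfig.from_pretrained("./gen_cfg", max_new_tokens=50, do_sample=True)
print(overridden.max_new_tokens, overridden.do_sample)  # 50 True
```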
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24238/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24238/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24238", "html_url": "https://github.com/huggingface/transformers/pull/24238", "diff_url": "https://github.com/huggingface/transformers/pull/24238.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24238.patch", "merged_at": 1686675561000 }
https://api.github.com/repos/huggingface/transformers/issues/24237
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24237/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24237/comments
https://api.github.com/repos/huggingface/transformers/issues/24237/events
https://github.com/huggingface/transformers/pull/24237
1,754,643,700
PR_kwDOCUB6oc5S37yx
24,237
[Time Series] use mean scaler when scaling is a boolean True
{ "login": "kashif", "id": 8100, "node_id": "MDQ6VXNlcjgxMDA=", "avatar_url": "https://avatars.githubusercontent.com/u/8100?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kashif", "html_url": "https://github.com/kashif", "followers_url": "https://api.github.com/users/kashif/followers", "following_url": "https://api.github.com/users/kashif/following{/other_user}", "gists_url": "https://api.github.com/users/kashif/gists{/gist_id}", "starred_url": "https://api.github.com/users/kashif/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kashif/subscriptions", "organizations_url": "https://api.github.com/users/kashif/orgs", "repos_url": "https://api.github.com/users/kashif/repos", "events_url": "https://api.github.com/users/kashif/events{/privacy}", "received_events_url": "https://api.github.com/users/kashif/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "thank you!" ]
1,686
1,686
1,686
CONTRIBUTOR
null
# What does this PR do? Use the mean scaler when `scaling` is `True` or `"mean"`
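A small illustration of the two equivalent settings, assuming the standard time series config API:

```python
from transformers import TimeSeriesTransformerConfig

# Both of these should now select the mean scaler.
config_bool = TimeSeriesTransformerConfig(prediction_length=24, scaling=True)
config_str = TimeSeriesTransformerConfig(prediction_length=24, scaling="mean")
print(config_bool.scaling, config_str.scaling)
```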
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24237/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24237/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24237", "html_url": "https://github.com/huggingface/transformers/pull/24237", "diff_url": "https://github.com/huggingface/transformers/pull/24237.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24237.patch", "merged_at": 1686674765000 }
https://api.github.com/repos/huggingface/transformers/issues/24236
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24236/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24236/comments
https://api.github.com/repos/huggingface/transformers/issues/24236/events
https://github.com/huggingface/transformers/issues/24236
1,754,585,175
I_kwDOCUB6oc5olNhX
24,236
Use `accelerate` with transformers==4.26.1
{ "login": "bhavitvyamalik", "id": 19718818, "node_id": "MDQ6VXNlcjE5NzE4ODE4", "avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bhavitvyamalik", "html_url": "https://github.com/bhavitvyamalik", "followers_url": "https://api.github.com/users/bhavitvyamalik/followers", "following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}", "gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}", "starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions", "organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs", "repos_url": "https://api.github.com/users/bhavitvyamalik/repos", "events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}", "received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "You'd need to recreate everything that has been performed throughout this integration, which is quite a lot. Distributed training still works in the Trainer natively, we just replaced its guts with Accelerate. The only real main difference is the DataLoader setup is a bit different in how it handles the batches. However in terms of usage nothing truly has changed. So they should update their version to v4.30.1 if you'd like this.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,686
1,689
1,689
CONTRIBUTOR
null
### Feature request How can we use `accelerate` features with an older version of `transformers`? I'm asking this because I have to use `adapter-transformers`, which is based on `transformers v4.26.1`. ### Motivation I used the latest `transformers` version and found it really helpful for parallelising the training. ### Your contribution It's more of an optimization for an older version, so I'm not sure how helpful it will be for other people or whether any PR will be required. Tagging @muellerzr here as he has worked on this previously. Thank you!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24236/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24236/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24235
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24235/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24235/comments
https://api.github.com/repos/huggingface/transformers/issues/24235/events
https://github.com/huggingface/transformers/issues/24235
1,754,473,563
I_kwDOCUB6oc5okyRb
24,235
Add SPTSv2
{ "login": "NielsRogge", "id": 48327001, "node_id": "MDQ6VXNlcjQ4MzI3MDAx", "avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NielsRogge", "html_url": "https://github.com/NielsRogge", "followers_url": "https://api.github.com/users/NielsRogge/followers", "following_url": "https://api.github.com/users/NielsRogge/following{/other_user}", "gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}", "starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions", "organizations_url": "https://api.github.com/users/NielsRogge/orgs", "repos_url": "https://api.github.com/users/NielsRogge/repos", "events_url": "https://api.github.com/users/NielsRogge/events{/privacy}", "received_events_url": "https://api.github.com/users/NielsRogge/received_events", "type": "User", "site_admin": false }
[ { "id": 1843244711, "node_id": "MDU6TGFiZWwxODQzMjQ0NzEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model", "name": "New model", "color": "fbca04", "default": false, "description": "" } ]
open
false
null
[]
[ "Hi @NielsRogge can I please contribute this model? " ]
1,686
1,686
null
CONTRIBUTOR
null
### Model description SPTSv2 is the latest SOTA text spotting model from Bytedance. Given that we already support DETR, it should be a breeze to support this model as well. SPTSv2 is an improvement over the first version: https://github.com/shannanyinxiang/SPTS. ### Open source status - [X] The model implementation is available - [X] The model weights are available ### Provide useful links for the implementation https://github.com/bytedance/SPTSv2
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24235/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24235/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/24234
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24234/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24234/comments
https://api.github.com/repos/huggingface/transformers/issues/24234/events
https://github.com/huggingface/transformers/pull/24234
1,754,442,624
PR_kwDOCUB6oc5S3PTs
24,234
Adapt Wav2Vec2 conversion for MMS lang identification
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24234). All of your documentation changes will be reflected on that endpoint." ]
1,686
1,686
1,686
MEMBER
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Adds conversion code for MMS - Language Identification models: https://huggingface.co/models?other=mms&sort=downloads&search=lid Source: https://github.com/facebookresearch/fairseq/tree/main/examples/mms#tts ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
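A minimal usage sketch for one of the converted language-identification checkpoints linked above. The checkpoint name and the silent dummy audio are assumptions for illustration only:

```python
import numpy as np
import torch
from transformers import AutoFeatureExtractor, Wav2Vec2ForSequenceClassification

model_id = "facebook/mms-lid-126"  # assumed MMS LID checkpoint name
feature_extractor = AutoFeatureExtractor.from_pretrained(model_id)
model = Wav2Vec2ForSequenceClassification.from_pretrained(model_id)

# One second of silence at 16 kHz as a stand-in for real speech.
waveform = np.zeros(16000, dtype=np.float32)
inputs = feature_extractor(waveform, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_id = int(logits.argmax(-1))
print(model.config.id2label[predicted_id])
```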
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24234/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24234/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24234", "html_url": "https://github.com/huggingface/transformers/pull/24234", "diff_url": "https://github.com/huggingface/transformers/pull/24234.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24234.patch", "merged_at": 1686751356000 }
https://api.github.com/repos/huggingface/transformers/issues/24233
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24233/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24233/comments
https://api.github.com/repos/huggingface/transformers/issues/24233/events
https://github.com/huggingface/transformers/issues/24233
1,754,427,609
I_kwDOCUB6oc5oknDZ
24,233
Auto-Converted Fast Tokenizer Producing Incorrect Results
{ "login": "young-geng", "id": 5175395, "node_id": "MDQ6VXNlcjUxNzUzOTU=", "avatar_url": "https://avatars.githubusercontent.com/u/5175395?v=4", "gravatar_id": "", "url": "https://api.github.com/users/young-geng", "html_url": "https://github.com/young-geng", "followers_url": "https://api.github.com/users/young-geng/followers", "following_url": "https://api.github.com/users/young-geng/following{/other_user}", "gists_url": "https://api.github.com/users/young-geng/gists{/gist_id}", "starred_url": "https://api.github.com/users/young-geng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/young-geng/subscriptions", "organizations_url": "https://api.github.com/users/young-geng/orgs", "repos_url": "https://api.github.com/users/young-geng/repos", "events_url": "https://api.github.com/users/young-geng/events{/privacy}", "received_events_url": "https://api.github.com/users/young-geng/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey! Thanks for reporting. I am investigating this !", "Hi, I have a fix. It also makes the conversion process a lot faster (it is super slow on my machine right now for some reason). Is it ok if I make a PR? \r\n\r\n@young-geng do you have other examples of words that go wrong? I think I've fixed it, but more evidence would also be nice 😸 ", "@stephantul I can dig into it more to find some more examples. Could you tell me why this happens?", "I'm still a bit confused as to the exact cause of the issue. I think it has to do with the way the merges are ordered. I'm now running the slow conversion process, which takes a long time, but the new fast conversion process at least fixes the \"thermal\" example you had above. \r\n\r\nAfter that, I can compare and give you a proper analysis, should be done later today.", "The issue was that your tokenizer has a merge which has a score of 0, which is `_t`. This merge wasn't properly recorded, since the conversion code checked for Falsiness of the merge score, and not whether it existed. \r\n\r\ni.e., it checked `if vocab_score:`, but it should have been checking `if vocab_score is None:`. Because of this, it removed the `_t` as a possible merge, which afflicts `_thermal` and other words starting with lowercase letter `t`.\r\n\r\n", "Great work @stephantul ! Will review your PR to merge it asap! ", "[ArthurZucker](https://github.com/ArthurZucker)\r\n\r\nI have encountered the same inconsistency. Due to various reasons, it is always difficult to use the latest version. Could you please let me know from which version of transformers this issue was updated?", "Hey! This was available in the following releases: [v4.35.2](https://github.com/huggingface/transformers/releases/tag/v4.35.2) [v4.35.1](https://github.com/huggingface/transformers/releases/tag/v4.35.1) [v4.35.0](https://github.com/huggingface/transformers/releases/tag/v4.35.0) [v4.34.1](https://github.com/huggingface/transformers/releases/tag/v4.34.1) [v4.34.0](https://github.com/huggingface/transformers/releases/tag/v4.34.0) [v4.33.3](https://github.com/huggingface/transformers/releases/tag/v4.33.3) [v4.33.2](https://github.com/huggingface/transformers/releases/tag/v4.33.2) [v4.33.1](https://github.com/huggingface/transformers/releases/tag/v4.33.1) [v4.33.0](https://github.com/huggingface/transformers/releases/tag/v4.33.0) [v4.32.1](https://github.com/huggingface/transformers/releases/tag/v4.32.1) [v4.32.0](https://github.com/huggingface/transformers/releases/tag/v4.32.0) [v4.31.0](https://github.com/huggingface/transformers/releases/tag/v4.31.0) ", "[ArthurZucker](https://github.com/ArthurZucker)\r\n\r\nThank you for your response. \r\n\r\nIn the case of llama2 tokenizer, I have confirmed that all 8.56 billion tokens in datasets of famous LLMs are tokenized identically in both the fast tokenizer and slow tokenizer even with transformers version `4.31.0`.\r\n\r\n<img width=\"506\" alt=\"image\" src=\"https://github.com/huggingface/transformers/assets/81407603/36651927-9cd1-486a-b86e-60afe4ed8c89\">\r\n", "Awesome 🚀 " ]
1,686
1,701
1,686
NONE
null
### System Info - `transformers` version: 4.30.1 - Platform: Linux-5.15.107+-x86_64-with-glibc2.31 - Python version: 3.10.12 - Huggingface_hub version: 0.15.1 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 2.0.1+cu118 (False) - Tensorflow version (GPU?): 2.12.0 (False) - Flax version (CPU?/GPU?/TPU?): 0.6.9 (cpu) - Jax version: 0.4.10 - JaxLib version: 0.4.10 - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help? @ArthurZucker ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction The auto-converted fast tokenizer for the LLaMA model sometimes does not produce the same tokenization results as the original sentence piece tokenizer. This is affecting the OpenLLaMA models. Here's the code to reproduce it: ```python from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained('openlm-research/open_llama_7b', use_fast=False) fast_tokenizer = AutoTokenizer.from_pretrained('openlm-research/open_llama_7b') text = 'thermal' print(tokenizer.encode(text)) print(fast_tokenizer.encode(text)) ``` The code produces the following output: ``` [1, 14412] [1, 31822, 496, 12719] ``` ### Expected behavior The auto-converted fast tokenizer should produce the exact same tokens as the original sentencepiece tokenizer.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24233/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24233/timeline
completed
null
null
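As an aside on the fix described in the comments for issue 24233 above: the key distinction is between a truthiness check and an explicit `None` check on a merge score of 0. A minimal, self-contained sketch of that distinction (hypothetical merges and function names, not the actual conversion code in `transformers`):

```python
# A merge whose score is 0.0 is falsy, so `if score:` silently drops it,
# while `if score is not None:` keeps every merge that actually exists.
merges = {("▁", "t"): 0.0, ("th", "er"): 1.5}  # made-up scores for illustration

def filter_merges_buggy(scores):
    # Drops ("▁", "t") because its score of 0.0 is falsy.
    return {pair: s for pair, s in scores.items() if s}

def filter_merges_fixed(scores):
    # Keeps any merge that exists, even with a score of 0.
    return {pair: s for pair, s in scores.items() if s is not None}

print(filter_merges_buggy(merges))  # {('th', 'er'): 1.5}
print(filter_merges_fixed(merges))  # {('▁', 't'): 0.0, ('th', 'er'): 1.5}
```

Dropping the zero-score merge is exactly what made `_t` (and hence words like "thermal") tokenize differently in the converted fast tokenizer.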
https://api.github.com/repos/huggingface/transformers/issues/24232
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24232/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24232/comments
https://api.github.com/repos/huggingface/transformers/issues/24232/events
https://github.com/huggingface/transformers/pull/24232
1,754,408,166
PR_kwDOCUB6oc5S3HtI
24,232
Improving error message when using `use_safetensors=True`.
{ "login": "Narsil", "id": 204321, "node_id": "MDQ6VXNlcjIwNDMyMQ==", "avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Narsil", "html_url": "https://github.com/Narsil", "followers_url": "https://api.github.com/users/Narsil/followers", "following_url": "https://api.github.com/users/Narsil/following{/other_user}", "gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}", "starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Narsil/subscriptions", "organizations_url": "https://api.github.com/users/Narsil/orgs", "repos_url": "https://api.github.com/users/Narsil/repos", "events_url": "https://api.github.com/users/Narsil/events{/privacy}", "received_events_url": "https://api.github.com/users/Narsil/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,686
1,686
1,686
CONTRIBUTOR
null
# What does this PR do? Fixes[ #273](https://github.com/huggingface/safetensors/issues/273) <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24232/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24232/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24232", "html_url": "https://github.com/huggingface/transformers/pull/24232", "diff_url": "https://github.com/huggingface/transformers/pull/24232.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24232.patch", "merged_at": 1686661620000 }
https://api.github.com/repos/huggingface/transformers/issues/24231
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24231/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24231/comments
https://api.github.com/repos/huggingface/transformers/issues/24231/events
https://github.com/huggingface/transformers/pull/24231
1,754,395,218
PR_kwDOCUB6oc5S3E8F
24,231
Fix `check_config_attributes`: check all configuration classes
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,686
1,686
1,686
COLLABORATOR
null
# What does this PR do? As @NielsRogge pointed out to me, the `check_config_attributes` check doesn't cover all configuration classes (to make sure all `__init__` arguments are really used). This is because `CONFIG_MAPPING` doesn't contain all configuration classes, in particular the vision/text config classes for models like Blip2 or CLIP. This PR fixes this issue and removes some unused arguments/attributes in some configuration classes.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24231/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24231/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24231", "html_url": "https://github.com/huggingface/transformers/pull/24231", "diff_url": "https://github.com/huggingface/transformers/pull/24231.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24231.patch", "merged_at": 1686821961000 }
https://api.github.com/repos/huggingface/transformers/issues/24230
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24230/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24230/comments
https://api.github.com/repos/huggingface/transformers/issues/24230/events
https://github.com/huggingface/transformers/pull/24230
1,754,385,216
PR_kwDOCUB6oc5S3Cvl
24,230
Fix doc deployment
{ "login": "CarlotaCiruelos", "id": 78039998, "node_id": "MDQ6VXNlcjc4MDM5OTk4", "avatar_url": "https://avatars.githubusercontent.com/u/78039998?v=4", "gravatar_id": "", "url": "https://api.github.com/users/CarlotaCiruelos", "html_url": "https://github.com/CarlotaCiruelos", "followers_url": "https://api.github.com/users/CarlotaCiruelos/followers", "following_url": "https://api.github.com/users/CarlotaCiruelos/following{/other_user}", "gists_url": "https://api.github.com/users/CarlotaCiruelos/gists{/gist_id}", "starred_url": "https://api.github.com/users/CarlotaCiruelos/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/CarlotaCiruelos/subscriptions", "organizations_url": "https://api.github.com/users/CarlotaCiruelos/orgs", "repos_url": "https://api.github.com/users/CarlotaCiruelos/repos", "events_url": "https://api.github.com/users/CarlotaCiruelos/events{/privacy}", "received_events_url": "https://api.github.com/users/CarlotaCiruelos/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @CarlotaCiruelos, thanks for opening a PR!\r\n\r\nCould you add some additional information in the PR description describing what issue this is resolving? \r\n\r\nAll the CircleCI tests need to be passing in order for any PR to be ready to merge. For more information on how to make a PR ready, please read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests).", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,686
1,689
1,689
NONE
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24230/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24230/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24230", "html_url": "https://github.com/huggingface/transformers/pull/24230", "diff_url": "https://github.com/huggingface/transformers/pull/24230.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24230.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/24228
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24228/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24228/comments
https://api.github.com/repos/huggingface/transformers/issues/24228/events
https://github.com/huggingface/transformers/pull/24228
1,754,173,322
PR_kwDOCUB6oc5S2U3h
24,228
QA doc: import torch before it is used
{ "login": "ByronHsu", "id": 24364830, "node_id": "MDQ6VXNlcjI0MzY0ODMw", "avatar_url": "https://avatars.githubusercontent.com/u/24364830?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ByronHsu", "html_url": "https://github.com/ByronHsu", "followers_url": "https://api.github.com/users/ByronHsu/followers", "following_url": "https://api.github.com/users/ByronHsu/following{/other_user}", "gists_url": "https://api.github.com/users/ByronHsu/gists{/gist_id}", "starred_url": "https://api.github.com/users/ByronHsu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ByronHsu/subscriptions", "organizations_url": "https://api.github.com/users/ByronHsu/orgs", "repos_url": "https://api.github.com/users/ByronHsu/repos", "events_url": "https://api.github.com/users/ByronHsu/events{/privacy}", "received_events_url": "https://api.github.com/users/ByronHsu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "@amyeroberts @stevhliu mind checking again? thanks!", "@ByronHsu Thanks again for fixing! All looks good 👍 " ]
1,686
1,686
1,686
CONTRIBUTOR
null
# What does this PR do? Import torch before it is used in the QA doc; otherwise, an import error is raised. ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @stevhliu
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24228/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24228/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24228", "html_url": "https://github.com/huggingface/transformers/pull/24228", "diff_url": "https://github.com/huggingface/transformers/pull/24228.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24228.patch", "merged_at": 1686738236000 }
https://api.github.com/repos/huggingface/transformers/issues/24227
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24227/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24227/comments
https://api.github.com/repos/huggingface/transformers/issues/24227/events
https://github.com/huggingface/transformers/issues/24227
1,754,072,625
I_kwDOCUB6oc5ojQYx
24,227
Importing transformers in ipython throws error due to `_pytest`
{ "login": "stephantul", "id": 8882233, "node_id": "MDQ6VXNlcjg4ODIyMzM=", "avatar_url": "https://avatars.githubusercontent.com/u/8882233?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stephantul", "html_url": "https://github.com/stephantul", "followers_url": "https://api.github.com/users/stephantul/followers", "following_url": "https://api.github.com/users/stephantul/following{/other_user}", "gists_url": "https://api.github.com/users/stephantul/gists{/gist_id}", "starred_url": "https://api.github.com/users/stephantul/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stephantul/subscriptions", "organizations_url": "https://api.github.com/users/stephantul/orgs", "repos_url": "https://api.github.com/users/stephantul/repos", "events_url": "https://api.github.com/users/stephantul/events{/privacy}", "received_events_url": "https://api.github.com/users/stephantul/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I have exactly same issue with this, how to resolve the issue? ", "I have the same issue with the 4.30.0 version. Try to use the 4.29.0 version.", "We can workaround it with `pip install pytest`", "@gkgkska @xxupiano @xin3he Thanks for reporting this. A fix has now been merged into `main`. ", "Running into a similar error when running `4.31.0.dev0` transformer language modeling example. \r\n\r\n```(base) ubuntu@104-171-202-20:~/llm-training/transformers/examples/pytorch/language-modeling$ python run_clm.py --model_name_or_path gpt2 --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 --per_device_train_batch_size 8 --per_device_eval_batch_size 8 --do_train --do_eval --output_dir /tmp/test-clm\r\nTraceback (most recent call last):\r\n File \"/home/ubuntu/llm-training/transformers/examples/pytorch/language-modeling/run_clm.py\", line 51, in <module>\r\n from transformers.testing_utils import CaptureLogger\r\n File \"/home/ubuntu/anaconda3/lib/python3.9/site-packages/transformers/testing_utils.py\", line 109, in <module>\r\n from _pytest.doctest import (\r\nImportError: cannot import name 'Module' from '_pytest.doctest' (/home/ubuntu/anaconda3/lib/python3.9/site-packages/_pytest/doctest.py)```", "Hi @praateekmahajan, could you provide some more information about the issue? Specifically a reproducible set of steps or code and information about the running environment (run `transformers-cli env` in your terminal). ", "> Running into a similar error when running `4.31.0.dev0` transformer language modeling example.\r\n> \r\n> ```\r\n> Traceback (most recent call last):\r\n> File \"/home/ubuntu/llm-training/transformers/examples/pytorch/language-modeling/run_clm.py\", line 51, in <module>\r\n> from transformers.testing_utils import CaptureLogger\r\n> File \"/home/ubuntu/anaconda3/lib/python3.9/site-packages/transformers/testing_utils.py\", line 109, in <module>\r\n> from _pytest.doctest import (\r\n> ImportError: cannot import name 'Module' from '_pytest.doctest' (/home/ubuntu/anaconda3/lib/python3.9/site-packages/_pytest/doctest.py)```\r\n> ```\r\n\r\nI am also seeing this error. I just hacked out the use of the CapturLogger in the example run_clm.\r\n\r\nSince this bug is closed, should we open a different one with this issue?", "@asampat3090 @sei-amellinger \r\n\r\n@amyeroberts 's fix #24241 is merged after the tag `v4.31.0.dev0`. So the first thing to check is to see what the commit you have used to install `transformers`. In any case, you can fetch the latest commit and install it again. It should work fine I think. ", "pip3 uninstall pytest\r\npip3 install pytest\r\n\r\nmaybe pytest version is old", "> maybe pytest version is old\r\n\r\nYeah, it was on old version of pytest.\r\n\r\nThanks!", "Hi team, when will there be a new release that contains the fix from https://github.com/huggingface/transformers/pull/24241?\r\n\r\ncc @sgugger", "The next release will be early next week, probably on Tuesday.", "Thank you!" ]
1,686
1,689
1,686
CONTRIBUTOR
null
### System Info - `transformers` version: 4.30.1 - Platform: macOS-12.5-arm64-arm-64bit - Python version: 3.10.8 - Huggingface_hub version: 0.15.1 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 2.0.1 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction typing `from transformers.` and then pressing `TAB` (i.e., autocompletion) shows the following stack trace in `ipython`. Aftwards, everything works as expected, although trying to use anything from `transformers.testing_utils` will throw the same error. This happens because `ipython` is doing introspection of the testing module, which then attempts to import the `_pytest` module, which doesn't exist. ``` Traceback (most recent call last): File "my_env/lib/python3.10/site-packages/transformers/utils/import_utils.py", line 1084, in _get_module return importlib.import_module("." + module_name, self.__name__) File "/opt/homebrew/Cellar/[email protected]/3.10.8/Frameworks/Python.framework/Versions/3.10/lib/python3.10/importlib/__init__.py", line 126, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "<frozen importlib._bootstrap>", line 1050, in _gcd_import File "<frozen importlib._bootstrap>", line 1027, in _find_and_load File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 688, in _load_unlocked File "<frozen importlib._bootstrap_external>", line 883, in exec_module File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed File "my_env/lib/python3.10/site-packages/transformers/testing_utils.py", line 39, in <module> from _pytest.doctest import ( ModuleNotFoundError: No module named '_pytest' The above exception was the direct cause of the following exception: Traceback (most recent call last): File "my_env/lib/python3.10/site-packages/IPython/core/completer.py", line 3171, in _complete result = matcher(context) File "my_env/lib/python3.10/site-packages/IPython/core/completer.py", line 2707, in custom_completer_matcher matches = self.dispatch_custom_completer(context.token) or [] File "my_env/lib/python3.10/site-packages/IPython/core/completer.py", line 2747, in dispatch_custom_completer res = c(event) File "my_env/lib/python3.10/site-packages/IPython/core/completerlib.py", line 272, in module_completer return module_completion(event.line) File "my_env/lib/python3.10/site-packages/IPython/core/completerlib.py", line 249, in module_completion completion_list = try_import('.'.join(mod[:-1]), True) File "my_env/lib/python3.10/site-packages/IPython/core/completerlib.py", line 183, in try_import completions.extend( [attr for attr in dir(m) if File "my_env/lib/python3.10/site-packages/IPython/core/completerlib.py", line 184, in <listcomp> is_importable(m, attr, only_modules)]) File "my_env/lib/python3.10/site-packages/IPython/core/completerlib.py", line 153, in is_importable return inspect.ismodule(getattr(module, attr)) File "my_env/lib/python3.10/site-packages/transformers/utils/import_utils.py", line 1072, in __getattr__ value = self._get_module(name) File 
"my_env/lib/python3.10/site-packages/transformers/utils/import_utils.py", line 1086, in _get_module raise RuntimeError( RuntimeError: Failed to import transformers.testing_utils because of the following error (look up to see its traceback): No module named '_pytest' ``` ### Expected behavior A possible workaround would be to use a try-except statement around this block, which then prints a useful error message.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24227/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24227/timeline
completed
null
null
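The try/except workaround suggested at the end of issue 24227 above could look roughly like the sketch below. It is a hypothetical illustration of the idea (guarding a test-only dependency so that introspection fails with an actionable message), not the actual code of `transformers.testing_utils` or of the fix in #24241.

```python
# Guard an optional, test-only dependency so that module introspection
# (for example ipython tab-completion) does not blow up with a bare
# ModuleNotFoundError for `_pytest` when pytest is not installed.
try:
    import _pytest.doctest as _pytest_doctest
except ImportError:
    _pytest_doctest = None

def _require_pytest():
    # Call this inside the helpers that really need pytest.
    if _pytest_doctest is None:
        raise RuntimeError(
            "This helper requires pytest; install it with `pip install pytest`."
        )
```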
https://api.github.com/repos/huggingface/transformers/issues/24226
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24226/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24226/comments
https://api.github.com/repos/huggingface/transformers/issues/24226/events
https://github.com/huggingface/transformers/pull/24226
1,754,062,932
PR_kwDOCUB6oc5S19QZ
24,226
remove unused is_decoder parameter in DetrAttention
{ "login": "JayL0321", "id": 31190549, "node_id": "MDQ6VXNlcjMxMTkwNTQ5", "avatar_url": "https://avatars.githubusercontent.com/u/31190549?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JayL0321", "html_url": "https://github.com/JayL0321", "followers_url": "https://api.github.com/users/JayL0321/followers", "following_url": "https://api.github.com/users/JayL0321/following{/other_user}", "gists_url": "https://api.github.com/users/JayL0321/gists{/gist_id}", "starred_url": "https://api.github.com/users/JayL0321/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JayL0321/subscriptions", "organizations_url": "https://api.github.com/users/JayL0321/orgs", "repos_url": "https://api.github.com/users/JayL0321/repos", "events_url": "https://api.github.com/users/JayL0321/events{/privacy}", "received_events_url": "https://api.github.com/users/JayL0321/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "_The documentation is not available anymore as the PR was closed or merged._" ]
1,686
1,686
1,686
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #24161 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24226/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24226/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24226", "html_url": "https://github.com/huggingface/transformers/pull/24226", "diff_url": "https://github.com/huggingface/transformers/pull/24226.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24226.patch", "merged_at": 1686825573000 }
https://api.github.com/repos/huggingface/transformers/issues/24225
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24225/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24225/comments
https://api.github.com/repos/huggingface/transformers/issues/24225/events
https://github.com/huggingface/transformers/issues/24225
1,754,056,044
I_kwDOCUB6oc5ojMVs
24,225
Transformers can't detect tensorflow installed in a different path in the environment, even though the path is added to PYTHONPATH
{ "login": "RickSanchezStoic", "id": 57310695, "node_id": "MDQ6VXNlcjU3MzEwNjk1", "avatar_url": "https://avatars.githubusercontent.com/u/57310695?v=4", "gravatar_id": "", "url": "https://api.github.com/users/RickSanchezStoic", "html_url": "https://github.com/RickSanchezStoic", "followers_url": "https://api.github.com/users/RickSanchezStoic/followers", "following_url": "https://api.github.com/users/RickSanchezStoic/following{/other_user}", "gists_url": "https://api.github.com/users/RickSanchezStoic/gists{/gist_id}", "starred_url": "https://api.github.com/users/RickSanchezStoic/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/RickSanchezStoic/subscriptions", "organizations_url": "https://api.github.com/users/RickSanchezStoic/orgs", "repos_url": "https://api.github.com/users/RickSanchezStoic/repos", "events_url": "https://api.github.com/users/RickSanchezStoic/events{/privacy}", "received_events_url": "https://api.github.com/users/RickSanchezStoic/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @RickSanchezStoic, thanks for raising this issue! \r\n\r\nIt seems this is a bug relating to how we detect TF being in the environment, @Rocketknight1 is opening a PR to resolve. Related issue here: #24253", "Hi @RickSanchezStoic, can you confirm what you mean by \"installed in a different directory\" here? Do you mean that it's not installed as a library, but instead just present as a local directory in the directory you're running your code in?", "@RickSanchezStoic we just merged PR #24255 based on this issue and #24253. However, it might not actually resolve your problem - can you test if the new version fixes your issue? You can install the latest version from main with `pip install --upgrade git+https://github.com/huggingface/transformers.git`", "> Hi @RickSanchezStoic, can you confirm what you mean by \"installed in a different directory\" here? Do you mean that it's not installed as a library, but instead just present as a local directory in the directory you're running your code in?\r\n\r\nIt means when the package is installed elsewhere when you use the `--target` flag with pip to specify a different location.", "> @RickSanchezStoic we just merged PR #24255 based on this issue and #24253. However, it might not actually resolve your problem - can you test if the new version fixes your issue? You can install the latest version from main with `pip install --upgrade git+https://github.com/huggingface/transformers.git`\r\n\r\nSure! will check this out and report here. Thanks!", "> @RickSanchezStoic we just merged PR #24255 based on this issue and #24253. However, it might not actually resolve your problem - can you test if the new version fixes your issue? You can install the latest version from main with `pip install --upgrade git+https://github.com/huggingface/transformers.git`\r\n\r\nThis worked. This is what we needed. Thanks a lot!" ]
1,686
1,686
1,686
NONE
null
Transformers is not able to detect tensorflow installed in a different directory. Steps to reproduce (execute the commands in order): `docker pull unifyai/ivy:latest` `docker run --rm -it unifyai/ivy:latest` `pip install transformers` `python` `from transformers import TFDeiTModel` `model = TFDeiTModel.from_pretrained("facebook/deit-base-distilled-patch16-224")`
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24225/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24225/timeline
completed
null
null
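For the PYTHONPATH / `pip install --target` case from issue 24225 above, a hedged sketch of one way to detect such a package is shown below: rely on `importlib.util.find_spec` for availability and treat missing `importlib.metadata` version information as non-fatal. This is only an illustration of the general idea and not necessarily how PR #24255 implements it.

```python
import importlib.metadata
import importlib.util

def is_package_available(name: str) -> bool:
    # A package is importable as long as a spec is found on sys.path, which
    # also covers `pip install --target <dir>` when <dir> is on PYTHONPATH.
    if importlib.util.find_spec(name) is None:
        return False
    # Version metadata can be missing for non-standard installs; do not let
    # that turn an importable package into a "not installed" answer.
    try:
        version = importlib.metadata.version(name)
    except importlib.metadata.PackageNotFoundError:
        version = "unknown"
    print(f"{name} found (version: {version})")
    return True

print(is_package_available("tensorflow"))
```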
https://api.github.com/repos/huggingface/transformers/issues/24224
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24224/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24224/comments
https://api.github.com/repos/huggingface/transformers/issues/24224/events
https://github.com/huggingface/transformers/pull/24224
1,753,993,860
PR_kwDOCUB6oc5S1uWt
24,224
Fix LLaMa beam search when using parallelize
{ "login": "FeiWang96", "id": 19998174, "node_id": "MDQ6VXNlcjE5OTk4MTc0", "avatar_url": "https://avatars.githubusercontent.com/u/19998174?v=4", "gravatar_id": "", "url": "https://api.github.com/users/FeiWang96", "html_url": "https://github.com/FeiWang96", "followers_url": "https://api.github.com/users/FeiWang96/followers", "following_url": "https://api.github.com/users/FeiWang96/following{/other_user}", "gists_url": "https://api.github.com/users/FeiWang96/gists{/gist_id}", "starred_url": "https://api.github.com/users/FeiWang96/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/FeiWang96/subscriptions", "organizations_url": "https://api.github.com/users/FeiWang96/orgs", "repos_url": "https://api.github.com/users/FeiWang96/repos", "events_url": "https://api.github.com/users/FeiWang96/events{/privacy}", "received_events_url": "https://api.github.com/users/FeiWang96/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "@FeiWang96 The quality checks are currently failing. To resolve these, run `make style` at the top level of the repo and commit any changes made. ", "Hi @amyeroberts , thank you for approving. I've fixed the code format issue. However, the ci failed on other tests due to some network issues. I don't have the permission to rerun.", "@FeiWang96 Re-ran and all passing now. Thanks again! " ]
1,686
1,686
1,686
CONTRIBUTOR
null
# What does this PR do? This PR fixes a crash when running beam search on LLaMa on multiple GPUs. Similar issue is also observed and fixed on T5 #11717 <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24224/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24224/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24224", "html_url": "https://github.com/huggingface/transformers/pull/24224", "diff_url": "https://github.com/huggingface/transformers/pull/24224.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24224.patch", "merged_at": 1686824928000 }
https://api.github.com/repos/huggingface/transformers/issues/24223
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24223/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24223/comments
https://api.github.com/repos/huggingface/transformers/issues/24223/events
https://github.com/huggingface/transformers/issues/24223
1,753,804,832
I_kwDOCUB6oc5oiPAg
24,223
MMS: target_lang=fra in pipeline() leads to "Size mismatch for lm_head.weight/bias when loading state_dict for Wav2Vec2ForCTC"
{ "login": "erickedji", "id": 52732, "node_id": "MDQ6VXNlcjUyNzMy", "avatar_url": "https://avatars.githubusercontent.com/u/52732?v=4", "gravatar_id": "", "url": "https://api.github.com/users/erickedji", "html_url": "https://github.com/erickedji", "followers_url": "https://api.github.com/users/erickedji/followers", "following_url": "https://api.github.com/users/erickedji/following{/other_user}", "gists_url": "https://api.github.com/users/erickedji/gists{/gist_id}", "starred_url": "https://api.github.com/users/erickedji/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/erickedji/subscriptions", "organizations_url": "https://api.github.com/users/erickedji/orgs", "repos_url": "https://api.github.com/users/erickedji/repos", "events_url": "https://api.github.com/users/erickedji/events{/privacy}", "received_events_url": "https://api.github.com/users/erickedji/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey @erickedji,\r\n\r\nWe were indeed missing docs here, I've added them in #24292 . \r\n\r\nHowever, the way you used the pipeline is 100% correct! For some reason **\"facebook/mms-1b-l1107\"** doesn't perform very well. However, **\"facebook/mms-1b-all\"** works well.\r\n\r\n```py\r\nfrom transformers import pipeline\r\n\r\nmodel_id = \"facebook/mms-1b-all\"\r\npipe = pipeline(model=model_id, model_kwargs={\"target_lang\":\"fra\", \"ignore_mismatched_sizes\":True})\r\n\r\nprint(pipe(\"http://french.voiceoversamples.com/jeanNL.mp3\"))\r\n```\r\ngives\r\n```\r\n{'text': \"la première fois que vous allez ouvrir une interaction client vous serait dirigée vers la page d'identification il s'agit du mode par défaut utilisé pour toutes les interactions clients veuillez vérifier le numéro de sécurité sociale de l'appelant avant de poursuivre une fois après avoir confirmé cliqué sur le bouton suivant comme ceci très bien passons maintenant à l'étape dex\"}\r\n```\r\n\r\nand **\"facebook/mms-1b-fl102\"** also seems to have problems.\r\n\r\n```py\r\nfrom transformers import pipeline\r\n\r\nmodel_id = \"facebook/mms-1b-fl102\"\r\npipe = pipeline(model=model_id, model_kwargs={\"target_lang\":\"fra\", \"ignore_mismatched_sizes\":True})\r\n\r\nprint(pipe(\"http://french.voiceoversamples.com/jeanNL.mp3\"))\r\n```\r\ngives\r\n```\r\n{'text': \"la première fois que vous alez ouvrir une interaction client vous seraen dirigée vers la page d'identification il s’agit du mode par des fauts utilisé pour toutes les interactions clients veuillez vérifier le numéro de sécurité sociale de l'appelan avant de poursuivre une fois après avoir confirmé clicque sur le bouton suivant comme ceci très bien passons maintenant à l’étape d\"}\r\n```\r\n\r\ncc @vineelpratap it's a bit surprising that the fl102 model perfoms worse than the `\"all\"` model here no? Maybe I've made an error with the weight conversion? Could you maybe check what the original checkpoint & code gives for pure CTC for `\"http://french.voiceoversamples.com/jeanNL.mp3\"` ?\r\n\r\n@Vaibhavs10 we also should run some evals on the whole FLEURS dataset to be sure.", "Hi, for the above sample - I get this result with FL102 models and using greedy decoding. I converted `.mp3` to `.wav` using this command `ffmpeg -y -i audio.mp3 -ar 16000 -ac 1 audio.wav`\r\n\r\n```\r\nla première fois que vous allez ouvrir une interaction client vous seraet dirigée vers la page d'identification il s’agit du mode par des fauts utilisé pour toutes les interactions clients. veuillez vérifier le numéro de sécurité sociale de l'appelan avant de poursuivre. une fois après avoir confirmé clique sur le bouton suivant comme ceci très bien passans maintenant à l’étape 2\r\n```\r\nIs it possible that we used incorrect dictionary ? \r\n\r\n> it's a bit surprising that the fl102 model perfoms worse than the \"all\" model here no?\r\n\r\nNote that MMS-FL102 model is trained only on FLEURS data, which consists of about 10 hours of data per language while while MMS-ALL model is trained on combining MLS, Common Voice, FLEURS etc. So, it is expected that the performance of MMS-ALL model is better than MMS-FL102. \r\n\r\nMMS-FL102, MMS-FL1107 were open sourced so that one can reproduce some of the results in the paper. If you care about the best performing ASR model, using MMS-ALL model would be the best choice. Running LM decoding will further boost performance as we discuss in the MMS paper, and we are working on open sourcing the LMs soon. 
\r\n", "Thanks you both for the clarifications!", "@patrickvonplaten - do you know why there is a discrepancy in the output of FL102 models from `fairseq` and `transformer` models for the above audio sample in French. It would be good to figure out the underlying issue. \r\n\r\n\r\n", "I don't know currently, I'll try to look into it over the weekend", "There was indeed a bug! What is happening here is the following.\r\n\r\n1.) By specifying `target_lang` in the constructor method of the pipeline, it is passed to the constructor method of `from_pretrained` of the model, which means inside the `pipeline(...)` function this is called:\r\n```py\r\nmodel_id = \"facebook/mms-1b-fl102\"\r\n\r\nmodel = Wav2Vec2ForCTC.from_pretrained(model_id, target_lang=\"fra\")\r\n```\r\n\r\n2.) Now by passing `target_lang=\"fra\"` however we load the french adapter weights here: https://github.com/huggingface/transformers/blob/ee88ae59940fd4b2c8fc119373143d7a1175c651/src/transformers/models/wav2vec2/modeling_wav2vec2.py#L1880\r\nin the init method of the model.\r\n\r\n3.) **However** the init method is run before `from_prertained(...)` loads the state dict into the model. This means the correctly loaded French adapter layers are later again overwritten by the original English adapter layers (the default in the state dict).\r\nThis was sadly not noticed in the tests because English adapter weights work surprisingly well for French :sweat_smile: \r\n\r\n=> A quick'n'dirty fix for users to get the exact same results as posted by @vineelpratap [here](https://github.com/huggingface/transformers/issues/24223#issuecomment-1593812212), is running the following code:\r\n```py\r\nfrom transformers import pipeline\r\n\r\nmodel_id = \"facebook/mms-1b-all\"\r\npipe = pipeline(model=model_id, model_kwargs={\"target_lang\":\"fra\", \"ignore_mismatched_sizes\":True})\r\npipe.model.load_adapter(\"fra\") # THIS CORRECTS THE INCORRECTLY OVERWRITTEN WEIHGTS!\r\n\r\nprint(pipe(\"http://french.voiceoversamples.com/jeanNL.mp3\"))\r\n```\r\n\r\ngives:\r\n```\r\nla première fois que vous allez ouvrir une interaction client vous seraet dirigée vers la page d'identification il s’agit du mode par des fauts utilisé pour toutes les interactions clients. veuillez vérifier le numéro de sécurité sociale de l'appelan avant de poursuivre. une fois après avoir confirmé clique sur le bouton suivant comme ceci très bien passans maintenant à l’étape 2\r\n```\r\n\r\n**Also this is not a problem for the original demo because the original demo only make use of `load_adapter` after having called `from_pretrained` which solves this problem.**\r\n", "For now this is a hack we have to do, but this PR: https://github.com/huggingface/transformers/pull/24335 should solve it nicely.", "I want to use lm in the decoder but I can seem to get it right. Do you know how to use the language model https://huggingface.co/facebook/mms-cclms?\r\n\r\n I get ClientError: Not Found for url: https://huggingface.co/facebook/mms-cclms/resolve/main/config.json. \r\n\r\nI checked the link and there is no instruction or and example to follow from.", "Hey @shdh1995 - would you mind opening a new issue to track this with a reproducible code snippet? Note that this file might assist you in setting up in the meantime: https://huggingface.co/spaces/mms-meta/MMS/blob/main/asr.py", "Running the [code snippet in the docs](https://huggingface.co/docs/transformers/model_doc/mms#loading) results in this exact issue. As shown in the below colab logs, I am installing from source. 
Has this problem been resolved and I am missing something? 👀 \r\n\r\n![image](https://github.com/huggingface/transformers/assets/26504141/2a74f17e-1991-4920-8cf2-08d89192d4ef)\r\n\r\n```py\r\nfrom transformers import pipeline\r\n\r\nmodel_id = \"facebook/mms-1b-all\"\r\ntarget_lang = \"fra\"\r\n\r\npipe = pipeline(model=model_id, model_kwargs={\"target_lang\": \"fra\", \"ignore_mismatched_sizes\": True})\r\n```\r\n\r\nThe following code also results in the same error:\r\n```py\r\nfrom transformers import Wav2Vec2ForCTC, AutoProcessor\r\n\r\nmodel_id = \"facebook/mms-1b-all\"\r\ntarget_lang = \"fra\"\r\n\r\nprocessor = AutoProcessor.from_pretrained(model_id, target_lang=target_lang)\r\nmodel = Wav2Vec2ForCTC.from_pretrained(model_id, target_lang=target_lang, ignore_mismatched_sizes=True)\r\n```\r\n\r\n![image](https://github.com/huggingface/transformers/assets/26504141/4e5b93d4-1374-4205-adac-3454095d7d4e)\r\n", "Thanks for the ping here @Vaibhavs10 and sorry about this issue going a bit unnoticed. Could you try again with current \"main\" after https://github.com/huggingface/transformers/pull/25267 @erickedji ?", "@patrickvonplaten @xenova The only difference seems to be the `pip install`, and I don't see why it leads to a different behavior.\r\n\r\nI just tried again by running the first and last cells here : https://colab.research.google.com/drive/1YmABsYKCk39Z7GF390G316ZsEWH7GkqT?usp=sharing\r\n\r\nIt worked. The above notebook basically does `!pip install git+https://github.com/huggingface/transformers datasets[torch]`, then:\r\n\r\n```python\r\nfrom transformers import pipeline\r\nmodel_id = \"facebook/mms-1b-all\"\r\npipe = pipeline(model=model_id, model_kwargs={\"target_lang\":\"fra\", \"ignore_mismatched_sizes\":True})\r\noutput = pipe(\"http://french.voiceoversamples.com/jeanNL.mp3\")\r\noutput\r\n```\r\n\r\nI'm not familiar enough with `pip` to comment.\r\n@xenova Can you try with the same pip call as my notebook?", "That's right 👍 you are installing from source (which includes [the latest fix](https://github.com/huggingface/transformers/pull/25267)).", "Oh, nevermind ;)", "See this guide for details on installing from main (or source) @erickedji: https://huggingface.co/docs/transformers/installation#install-from-source" ]
1,686
1,691
1,686
NONE
null
### System Info - `transformers` version: 4.31.0.dev0 - Platform: Linux-5.15.107+-x86_64-with-glibc2.31 - Python version: 3.10.12 - Huggingface_hub version: 0.15.1 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 2.0.1+cu118 (True) - Tensorflow version (GPU?): 2.12.0 (True) - Flax version (CPU?/GPU?/TPU?): 0.6.9 (gpu) - Jax version: 0.4.10 - JaxLib version: 0.4.10 - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help? @patrickvonplaten @sanchit-gandhi ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Colab link: https://colab.research.google.com/drive/1YmABsYKCk39Z7GF390G316ZsEWH7GkqT?usp=sharing ### Expected behavior ```pipe = pipeline(model="facebook/mms-1b-l1107", model_kwargs={"target_lang":"fra"})``` I expected this to create the pipeline with the `fra` adapter loaded, as seems to be intended [here](https://github.com/huggingface/transformers/commit/5dfd407b37ac683dc91637e9913b0ae9205d2acd#diff-fde96a141d70737bff942cb61341f3b4b87729c9a066ecee4bfc86dfe590a8e6R1864). It fails with a size mismatch issue. Ignoring it seems to load the english adapter instead, as the result is poor and doesn’t match the demo on the official space (https://huggingface.co/spaces/facebook/MMS).
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24223/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24223/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24222
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24222/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24222/comments
https://api.github.com/repos/huggingface/transformers/issues/24222/events
https://github.com/huggingface/transformers/pull/24222
1,753,645,178
PR_kwDOCUB6oc5S0i1m
24,222
Bump transformers from 4.26.1 to 4.30.0 in /examples/tensorflow/language-modeling-tpu
{ "login": "dependabot[bot]", "id": 49699333, "node_id": "MDM6Qm90NDk2OTkzMzM=", "avatar_url": "https://avatars.githubusercontent.com/in/29110?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dependabot%5Bbot%5D", "html_url": "https://github.com/apps/dependabot", "followers_url": "https://api.github.com/users/dependabot%5Bbot%5D/followers", "following_url": "https://api.github.com/users/dependabot%5Bbot%5D/following{/other_user}", "gists_url": "https://api.github.com/users/dependabot%5Bbot%5D/gists{/gist_id}", "starred_url": "https://api.github.com/users/dependabot%5Bbot%5D/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dependabot%5Bbot%5D/subscriptions", "organizations_url": "https://api.github.com/users/dependabot%5Bbot%5D/orgs", "repos_url": "https://api.github.com/users/dependabot%5Bbot%5D/repos", "events_url": "https://api.github.com/users/dependabot%5Bbot%5D/events{/privacy}", "received_events_url": "https://api.github.com/users/dependabot%5Bbot%5D/received_events", "type": "Bot", "site_admin": false }
[ { "id": 1905493434, "node_id": "MDU6TGFiZWwxOTA1NDkzNDM0", "url": "https://api.github.com/repos/huggingface/transformers/labels/dependencies", "name": "dependencies", "color": "0366d6", "default": false, "description": "Pull requests that update a dependency file" } ]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "OK, I won't notify you again about this release, but will get in touch when a new version is available. If you'd rather skip all updates until the next major or minor version, let me know by commenting `@dependabot ignore this major version` or `@dependabot ignore this minor version`.\n\nIf you change your mind, just re-open this PR and I'll resolve any conflicts on it.", "@dependabot ignore this major version", "OK, I won't notify you about version 4.x.x again, unless you re-open this PR. 😢" ]
1,686
1,686
1,686
CONTRIBUTOR
null
Bumps [transformers](https://github.com/huggingface/transformers) from 4.26.1 to 4.30.0. <details> <summary>Release notes</summary> <p><em>Sourced from <a href="https://github.com/huggingface/transformers/releases">transformers's releases</a>.</em></p> <blockquote> <h2>v4.30.0: 100k, Agents improvements, Safetensors core dependency, Swiftformer, Autoformer, MobileViTv2, timm-as-a-backbone</h2> <h2>100k</h2> <p>Transformers has just reached 100k stars on GitHub, and to celebrate we wanted to highlight 100 projects in the vicinity of <code>transformers</code> and we have decided to create an <a href="https://github.com/huggingface/transformers/blob/main/awesome-transformers.md">awesome-transformers</a> page to do just that.</p> <p>We accept PRs to add projects to the list!</p> <ul> <li>Top 100 by <a href="https://github.com/LysandreJik"><code>@​LysandreJik</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/22912">#22912</a></li> <li>Add LlamaIndex to awesome-transformers.md by <a href="https://github.com/ravi03071991"><code>@​ravi03071991</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23484">#23484</a></li> <li>add cleanlab to awesome-transformers tools list by <a href="https://github.com/jwmueller"><code>@​jwmueller</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23440">#23440</a></li> </ul> <h2>4-bit quantization and QLoRA</h2> <p>By leveraging the <code>bitsandbytes</code> library by <a href="https://github.com/TimDettmers"><code>@​TimDettmers</code></a>, we add 4-bit support to <code>transformers</code> models!</p> <ul> <li>4-bit QLoRA via bitsandbytes (4-bit base model + LoRA) by <a href="https://github.com/TimDettmers"><code>@​TimDettmers</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23479">#23479</a></li> </ul> <h2>Agents</h2> <p>The Agents framework has been improved and continues to be stabilized. Among bug fixes, here are the important new features that were added:</p> <ul> <li>Local agent capabilities, to load a generative model directly from <code>transformers</code> instead of relying on APIs.</li> <li>Prompts are now hosted on the Hub, which means that anyone can fork the prompts and update them with theirs, to let other community contributors re-use them</li> <li>We add an <code>AzureOpenAiAgent</code> class to support Azure OpenAI agents.</li> </ul> <ul> <li>Add local agent by <a href="https://github.com/sgugger"><code>@​sgugger</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23438">#23438</a></li> <li>Enable prompts on the Hub by <a href="https://github.com/sgugger"><code>@​sgugger</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23662">#23662</a></li> <li>Add AzureOpenAiAgent by <a href="https://github.com/sgugger"><code>@​sgugger</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/24058">#24058</a></li> </ul> <h2>Safetensors</h2> <p>The <code>safetensors</code> library is a safe serialization framework for machine learning tensors. It has been audited and will become the default serialization framework for several organizations (Hugging Face, EleutherAI, Stability AI).</p> <p>It has now become a core dependency of <code>transformers</code>.</p> <ul> <li>Making <code>safetensors</code> a core dependency. 
by <a href="https://github.com/Narsil"><code>@​Narsil</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23254">#23254</a></li> </ul> <h2>New models</h2> <h3>Swiftformer</h3> <p>The SwiftFormer paper introduces a novel efficient additive attention mechanism that effectively replaces the quadratic matrix multiplication operations in the self-attention computation with linear element-wise multiplications. A series of models called ‘SwiftFormer’ is built based on this, which achieves state-of-the-art performance in terms of both accuracy and mobile inference speed. Even their small variant achieves 78.5% top-1 ImageNet1K accuracy with only 0.8 ms latency on iPhone 14, which is more accurate and 2× faster compared to MobileViT-v2.</p> <ul> <li>Add swiftformer by <a href="https://github.com/shehanmunasinghe"><code>@​shehanmunasinghe</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/22686">#22686</a></li> </ul> <h3>Autoformer</h3> <p>This model augments the Transformer as a deep decomposition architecture, which can progressively decompose the trend and seasonal components during the forecasting process.</p> <ul> <li>[Time-Series] Autoformer model by <a href="https://github.com/elisim"><code>@​elisim</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/21891">#21891</a></li> </ul> <!-- raw HTML omitted --> </blockquote> <p>... (truncated)</p> </details> <details> <summary>Commits</summary> <ul> <li><a href="https://github.com/huggingface/transformers/commit/fe861e578f50dc9c06de33cd361d2f625017e624"><code>fe861e5</code></a> [<code>GPT2</code>] Add correct keys on <code>_keys_to_ignore_on_load_unexpected</code> on all chil...</li> <li><a href="https://github.com/huggingface/transformers/commit/b3e27a80578d022301611363b890107244e12354"><code>b3e27a8</code></a> Update the pin on Accelerate (<a href="https://redirect.github.com/huggingface/transformers/issues/24110">#24110</a>)</li> <li><a href="https://github.com/huggingface/transformers/commit/53e1f5cf66d320b9c809f3940c707b6fef435d2d"><code>53e1f5c</code></a> [<code>Trainer</code>] Correct behavior of <code>_load_best_model</code> for PEFT models (<a href="https://redirect.github.com/huggingface/transformers/issues/24103">#24103</a>)</li> <li><a href="https://github.com/huggingface/transformers/commit/17db177714b03103bb94cd71b7dd414bc63bffd5"><code>17db177</code></a> reset accelerate env variables after each test (<a href="https://redirect.github.com/huggingface/transformers/issues/24107">#24107</a>)</li> <li><a href="https://github.com/huggingface/transformers/commit/905892f09027cab690918c7766fea1bb51bcdd26"><code>905892f</code></a> Release: v4.30.0</li> <li><a href="https://github.com/huggingface/transformers/commit/c3572e6bfba13ce6dc3fedb05cd1a946ea109576"><code>c3572e6</code></a> Add AzureOpenAiAgent (<a href="https://redirect.github.com/huggingface/transformers/issues/24058">#24058</a>)</li> <li><a href="https://github.com/huggingface/transformers/commit/5eb3d3c7023ed0522d3c743ee2e13d896a3aa788"><code>5eb3d3c</code></a> Up pinned accelerate version (<a href="https://redirect.github.com/huggingface/transformers/issues/24089">#24089</a>)</li> <li><a href="https://github.com/huggingface/transformers/commit/d1c039e39864a41f6eb8b770a65f123c40164ea5"><code>d1c039e</code></a> fix accelerator prepare during eval only mode (<a href="https://redirect.github.com/huggingface/transformers/issues/24014">#24014</a>)</li> <li><a 
href="https://github.com/huggingface/transformers/commit/2c887cf8e0cb1ac96d28361ff3235a77f83c36ee"><code>2c887cf</code></a> Do not prepare lr scheduler as it as the right number of steps (<a href="https://redirect.github.com/huggingface/transformers/issues/24088">#24088</a>)</li> <li><a href="https://github.com/huggingface/transformers/commit/12298cb65c7e9d615b749dde935a0b4966f4ae49"><code>12298cb</code></a> fix executable batch size issue (<a href="https://redirect.github.com/huggingface/transformers/issues/24067">#24067</a>)</li> <li>Additional commits viewable in <a href="https://github.com/huggingface/transformers/compare/v4.26.1...v4.30.0">compare view</a></li> </ul> </details> <br /> [![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=transformers&package-manager=pip&previous-version=4.26.1&new-version=4.30.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores) Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`. [//]: # (dependabot-automerge-start) [//]: # (dependabot-automerge-end) --- <details> <summary>Dependabot commands and options</summary> <br /> You can trigger Dependabot actions by commenting on this PR: - `@dependabot rebase` will rebase this PR - `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it - `@dependabot merge` will merge this PR after your CI passes on it - `@dependabot squash and merge` will squash and merge this PR after your CI passes on it - `@dependabot cancel merge` will cancel a previously requested merge and block automerging - `@dependabot reopen` will reopen this PR if it is closed - `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually - `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself) You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts). </details>
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24222/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24222/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24222", "html_url": "https://github.com/huggingface/transformers/pull/24222", "diff_url": "https://github.com/huggingface/transformers/pull/24222.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24222.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/24221
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24221/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24221/comments
https://api.github.com/repos/huggingface/transformers/issues/24221/events
https://github.com/huggingface/transformers/pull/24221
1,753,643,747
PR_kwDOCUB6oc5S0ign
24,221
Bump transformers from 4.26.0 to 4.30.0 in /examples/research_projects/vqgan-clip
{ "login": "dependabot[bot]", "id": 49699333, "node_id": "MDM6Qm90NDk2OTkzMzM=", "avatar_url": "https://avatars.githubusercontent.com/in/29110?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dependabot%5Bbot%5D", "html_url": "https://github.com/apps/dependabot", "followers_url": "https://api.github.com/users/dependabot%5Bbot%5D/followers", "following_url": "https://api.github.com/users/dependabot%5Bbot%5D/following{/other_user}", "gists_url": "https://api.github.com/users/dependabot%5Bbot%5D/gists{/gist_id}", "starred_url": "https://api.github.com/users/dependabot%5Bbot%5D/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dependabot%5Bbot%5D/subscriptions", "organizations_url": "https://api.github.com/users/dependabot%5Bbot%5D/orgs", "repos_url": "https://api.github.com/users/dependabot%5Bbot%5D/repos", "events_url": "https://api.github.com/users/dependabot%5Bbot%5D/events{/privacy}", "received_events_url": "https://api.github.com/users/dependabot%5Bbot%5D/received_events", "type": "Bot", "site_admin": false }
[ { "id": 1905493434, "node_id": "MDU6TGFiZWwxOTA1NDkzNDM0", "url": "https://api.github.com/repos/huggingface/transformers/labels/dependencies", "name": "dependencies", "color": "0366d6", "default": false, "description": "Pull requests that update a dependency file" } ]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "@dependabot ignore this major version", "OK, I won't notify you about version 4.x.x again, unless you re-open this PR. 😢" ]
1,686
1,686
1,686
CONTRIBUTOR
null
Bumps [transformers](https://github.com/huggingface/transformers) from 4.26.0 to 4.30.0. <details> <summary>Release notes</summary> <p><em>Sourced from <a href="https://github.com/huggingface/transformers/releases">transformers's releases</a>.</em></p> <blockquote> <h2>v4.30.0: 100k, Agents improvements, Safetensors core dependency, Swiftformer, Autoformer, MobileViTv2, timm-as-a-backbone</h2> <h2>100k</h2> <p>Transformers has just reached 100k stars on GitHub, and to celebrate we wanted to highlight 100 projects in the vicinity of <code>transformers</code> and we have decided to create an <a href="https://github.com/huggingface/transformers/blob/main/awesome-transformers.md">awesome-transformers</a> page to do just that.</p> <p>We accept PRs to add projects to the list!</p> <ul> <li>Top 100 by <a href="https://github.com/LysandreJik"><code>@​LysandreJik</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/22912">#22912</a></li> <li>Add LlamaIndex to awesome-transformers.md by <a href="https://github.com/ravi03071991"><code>@​ravi03071991</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23484">#23484</a></li> <li>add cleanlab to awesome-transformers tools list by <a href="https://github.com/jwmueller"><code>@​jwmueller</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23440">#23440</a></li> </ul> <h2>4-bit quantization and QLoRA</h2> <p>By leveraging the <code>bitsandbytes</code> library by <a href="https://github.com/TimDettmers"><code>@​TimDettmers</code></a>, we add 4-bit support to <code>transformers</code> models!</p> <ul> <li>4-bit QLoRA via bitsandbytes (4-bit base model + LoRA) by <a href="https://github.com/TimDettmers"><code>@​TimDettmers</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23479">#23479</a></li> </ul> <h2>Agents</h2> <p>The Agents framework has been improved and continues to be stabilized. Among bug fixes, here are the important new features that were added:</p> <ul> <li>Local agent capabilities, to load a generative model directly from <code>transformers</code> instead of relying on APIs.</li> <li>Prompts are now hosted on the Hub, which means that anyone can fork the prompts and update them with theirs, to let other community contributors re-use them</li> <li>We add an <code>AzureOpenAiAgent</code> class to support Azure OpenAI agents.</li> </ul> <ul> <li>Add local agent by <a href="https://github.com/sgugger"><code>@​sgugger</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23438">#23438</a></li> <li>Enable prompts on the Hub by <a href="https://github.com/sgugger"><code>@​sgugger</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23662">#23662</a></li> <li>Add AzureOpenAiAgent by <a href="https://github.com/sgugger"><code>@​sgugger</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/24058">#24058</a></li> </ul> <h2>Safetensors</h2> <p>The <code>safetensors</code> library is a safe serialization framework for machine learning tensors. It has been audited and will become the default serialization framework for several organizations (Hugging Face, EleutherAI, Stability AI).</p> <p>It has now become a core dependency of <code>transformers</code>.</p> <ul> <li>Making <code>safetensors</code> a core dependency. 
by <a href="https://github.com/Narsil"><code>@​Narsil</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23254">#23254</a></li> </ul> <h2>New models</h2> <h3>Swiftformer</h3> <p>The SwiftFormer paper introduces a novel efficient additive attention mechanism that effectively replaces the quadratic matrix multiplication operations in the self-attention computation with linear element-wise multiplications. A series of models called ‘SwiftFormer’ is built based on this, which achieves state-of-the-art performance in terms of both accuracy and mobile inference speed. Even their small variant achieves 78.5% top-1 ImageNet1K accuracy with only 0.8 ms latency on iPhone 14, which is more accurate and 2× faster compared to MobileViT-v2.</p> <ul> <li>Add swiftformer by <a href="https://github.com/shehanmunasinghe"><code>@​shehanmunasinghe</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/22686">#22686</a></li> </ul> <h3>Autoformer</h3> <p>This model augments the Transformer as a deep decomposition architecture, which can progressively decompose the trend and seasonal components during the forecasting process.</p> <ul> <li>[Time-Series] Autoformer model by <a href="https://github.com/elisim"><code>@​elisim</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/21891">#21891</a></li> </ul> <!-- raw HTML omitted --> </blockquote> <p>... (truncated)</p> </details> <details> <summary>Commits</summary> <ul> <li><a href="https://github.com/huggingface/transformers/commit/fe861e578f50dc9c06de33cd361d2f625017e624"><code>fe861e5</code></a> [<code>GPT2</code>] Add correct keys on <code>_keys_to_ignore_on_load_unexpected</code> on all chil...</li> <li><a href="https://github.com/huggingface/transformers/commit/b3e27a80578d022301611363b890107244e12354"><code>b3e27a8</code></a> Update the pin on Accelerate (<a href="https://redirect.github.com/huggingface/transformers/issues/24110">#24110</a>)</li> <li><a href="https://github.com/huggingface/transformers/commit/53e1f5cf66d320b9c809f3940c707b6fef435d2d"><code>53e1f5c</code></a> [<code>Trainer</code>] Correct behavior of <code>_load_best_model</code> for PEFT models (<a href="https://redirect.github.com/huggingface/transformers/issues/24103">#24103</a>)</li> <li><a href="https://github.com/huggingface/transformers/commit/17db177714b03103bb94cd71b7dd414bc63bffd5"><code>17db177</code></a> reset accelerate env variables after each test (<a href="https://redirect.github.com/huggingface/transformers/issues/24107">#24107</a>)</li> <li><a href="https://github.com/huggingface/transformers/commit/905892f09027cab690918c7766fea1bb51bcdd26"><code>905892f</code></a> Release: v4.30.0</li> <li><a href="https://github.com/huggingface/transformers/commit/c3572e6bfba13ce6dc3fedb05cd1a946ea109576"><code>c3572e6</code></a> Add AzureOpenAiAgent (<a href="https://redirect.github.com/huggingface/transformers/issues/24058">#24058</a>)</li> <li><a href="https://github.com/huggingface/transformers/commit/5eb3d3c7023ed0522d3c743ee2e13d896a3aa788"><code>5eb3d3c</code></a> Up pinned accelerate version (<a href="https://redirect.github.com/huggingface/transformers/issues/24089">#24089</a>)</li> <li><a href="https://github.com/huggingface/transformers/commit/d1c039e39864a41f6eb8b770a65f123c40164ea5"><code>d1c039e</code></a> fix accelerator prepare during eval only mode (<a href="https://redirect.github.com/huggingface/transformers/issues/24014">#24014</a>)</li> <li><a 
href="https://github.com/huggingface/transformers/commit/2c887cf8e0cb1ac96d28361ff3235a77f83c36ee"><code>2c887cf</code></a> Do not prepare lr scheduler as it as the right number of steps (<a href="https://redirect.github.com/huggingface/transformers/issues/24088">#24088</a>)</li> <li><a href="https://github.com/huggingface/transformers/commit/12298cb65c7e9d615b749dde935a0b4966f4ae49"><code>12298cb</code></a> fix executable batch size issue (<a href="https://redirect.github.com/huggingface/transformers/issues/24067">#24067</a>)</li> <li>Additional commits viewable in <a href="https://github.com/huggingface/transformers/compare/v4.26.0...v4.30.0">compare view</a></li> </ul> </details> <br /> [![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=transformers&package-manager=pip&previous-version=4.26.0&new-version=4.30.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores) Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`. [//]: # (dependabot-automerge-start) [//]: # (dependabot-automerge-end) --- <details> <summary>Dependabot commands and options</summary> <br /> You can trigger Dependabot actions by commenting on this PR: - `@dependabot rebase` will rebase this PR - `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it - `@dependabot merge` will merge this PR after your CI passes on it - `@dependabot squash and merge` will squash and merge this PR after your CI passes on it - `@dependabot cancel merge` will cancel a previously requested merge and block automerging - `@dependabot reopen` will reopen this PR if it is closed - `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually - `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself) You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts). </details>
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24221/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24221/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24221", "html_url": "https://github.com/huggingface/transformers/pull/24221", "diff_url": "https://github.com/huggingface/transformers/pull/24221.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24221.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/24220
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24220/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24220/comments
https://api.github.com/repos/huggingface/transformers/issues/24220/events
https://github.com/huggingface/transformers/pull/24220
1,753,637,872
PR_kwDOCUB6oc5S0hMc
24,220
Bump transformers from 4.21.1 to 4.30.0 in /examples/research_projects/codeparrot/examples
{ "login": "dependabot[bot]", "id": 49699333, "node_id": "MDM6Qm90NDk2OTkzMzM=", "avatar_url": "https://avatars.githubusercontent.com/in/29110?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dependabot%5Bbot%5D", "html_url": "https://github.com/apps/dependabot", "followers_url": "https://api.github.com/users/dependabot%5Bbot%5D/followers", "following_url": "https://api.github.com/users/dependabot%5Bbot%5D/following{/other_user}", "gists_url": "https://api.github.com/users/dependabot%5Bbot%5D/gists{/gist_id}", "starred_url": "https://api.github.com/users/dependabot%5Bbot%5D/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dependabot%5Bbot%5D/subscriptions", "organizations_url": "https://api.github.com/users/dependabot%5Bbot%5D/orgs", "repos_url": "https://api.github.com/users/dependabot%5Bbot%5D/repos", "events_url": "https://api.github.com/users/dependabot%5Bbot%5D/events{/privacy}", "received_events_url": "https://api.github.com/users/dependabot%5Bbot%5D/received_events", "type": "Bot", "site_admin": false }
[ { "id": 1905493434, "node_id": "MDU6TGFiZWwxOTA1NDkzNDM0", "url": "https://api.github.com/repos/huggingface/transformers/labels/dependencies", "name": "dependencies", "color": "0366d6", "default": false, "description": "Pull requests that update a dependency file" } ]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "@dependabot ignore this major version", "OK, I won't notify you about version 4.x.x again, unless you re-open this PR. 😢" ]
1,686
1,686
1,686
CONTRIBUTOR
null
Bumps [transformers](https://github.com/huggingface/transformers) from 4.21.1 to 4.30.0. <details> <summary>Release notes</summary> <p><em>Sourced from <a href="https://github.com/huggingface/transformers/releases">transformers's releases</a>.</em></p> <blockquote> <h2>v4.30.0: 100k, Agents improvements, Safetensors core dependency, Swiftformer, Autoformer, MobileViTv2, timm-as-a-backbone</h2> <h2>100k</h2> <p>Transformers has just reached 100k stars on GitHub, and to celebrate we wanted to highlight 100 projects in the vicinity of <code>transformers</code> and we have decided to create an <a href="https://github.com/huggingface/transformers/blob/main/awesome-transformers.md">awesome-transformers</a> page to do just that.</p> <p>We accept PRs to add projects to the list!</p> <ul> <li>Top 100 by <a href="https://github.com/LysandreJik"><code>@​LysandreJik</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/22912">#22912</a></li> <li>Add LlamaIndex to awesome-transformers.md by <a href="https://github.com/ravi03071991"><code>@​ravi03071991</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23484">#23484</a></li> <li>add cleanlab to awesome-transformers tools list by <a href="https://github.com/jwmueller"><code>@​jwmueller</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23440">#23440</a></li> </ul> <h2>4-bit quantization and QLoRA</h2> <p>By leveraging the <code>bitsandbytes</code> library by <a href="https://github.com/TimDettmers"><code>@​TimDettmers</code></a>, we add 4-bit support to <code>transformers</code> models!</p> <ul> <li>4-bit QLoRA via bitsandbytes (4-bit base model + LoRA) by <a href="https://github.com/TimDettmers"><code>@​TimDettmers</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23479">#23479</a></li> </ul> <h2>Agents</h2> <p>The Agents framework has been improved and continues to be stabilized. Among bug fixes, here are the important new features that were added:</p> <ul> <li>Local agent capabilities, to load a generative model directly from <code>transformers</code> instead of relying on APIs.</li> <li>Prompts are now hosted on the Hub, which means that anyone can fork the prompts and update them with theirs, to let other community contributors re-use them</li> <li>We add an <code>AzureOpenAiAgent</code> class to support Azure OpenAI agents.</li> </ul> <ul> <li>Add local agent by <a href="https://github.com/sgugger"><code>@​sgugger</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23438">#23438</a></li> <li>Enable prompts on the Hub by <a href="https://github.com/sgugger"><code>@​sgugger</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23662">#23662</a></li> <li>Add AzureOpenAiAgent by <a href="https://github.com/sgugger"><code>@​sgugger</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/24058">#24058</a></li> </ul> <h2>Safetensors</h2> <p>The <code>safetensors</code> library is a safe serialization framework for machine learning tensors. It has been audited and will become the default serialization framework for several organizations (Hugging Face, EleutherAI, Stability AI).</p> <p>It has now become a core dependency of <code>transformers</code>.</p> <ul> <li>Making <code>safetensors</code> a core dependency. 
by <a href="https://github.com/Narsil"><code>@​Narsil</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23254">#23254</a></li> </ul> <h2>New models</h2> <h3>Swiftformer</h3> <p>The SwiftFormer paper introduces a novel efficient additive attention mechanism that effectively replaces the quadratic matrix multiplication operations in the self-attention computation with linear element-wise multiplications. A series of models called ‘SwiftFormer’ is built based on this, which achieves state-of-the-art performance in terms of both accuracy and mobile inference speed. Even their small variant achieves 78.5% top-1 ImageNet1K accuracy with only 0.8 ms latency on iPhone 14, which is more accurate and 2× faster compared to MobileViT-v2.</p> <ul> <li>Add swiftformer by <a href="https://github.com/shehanmunasinghe"><code>@​shehanmunasinghe</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/22686">#22686</a></li> </ul> <h3>Autoformer</h3> <p>This model augments the Transformer as a deep decomposition architecture, which can progressively decompose the trend and seasonal components during the forecasting process.</p> <ul> <li>[Time-Series] Autoformer model by <a href="https://github.com/elisim"><code>@​elisim</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/21891">#21891</a></li> </ul> <!-- raw HTML omitted --> </blockquote> <p>... (truncated)</p> </details> <details> <summary>Commits</summary> <ul> <li><a href="https://github.com/huggingface/transformers/commit/fe861e578f50dc9c06de33cd361d2f625017e624"><code>fe861e5</code></a> [<code>GPT2</code>] Add correct keys on <code>_keys_to_ignore_on_load_unexpected</code> on all chil...</li> <li><a href="https://github.com/huggingface/transformers/commit/b3e27a80578d022301611363b890107244e12354"><code>b3e27a8</code></a> Update the pin on Accelerate (<a href="https://redirect.github.com/huggingface/transformers/issues/24110">#24110</a>)</li> <li><a href="https://github.com/huggingface/transformers/commit/53e1f5cf66d320b9c809f3940c707b6fef435d2d"><code>53e1f5c</code></a> [<code>Trainer</code>] Correct behavior of <code>_load_best_model</code> for PEFT models (<a href="https://redirect.github.com/huggingface/transformers/issues/24103">#24103</a>)</li> <li><a href="https://github.com/huggingface/transformers/commit/17db177714b03103bb94cd71b7dd414bc63bffd5"><code>17db177</code></a> reset accelerate env variables after each test (<a href="https://redirect.github.com/huggingface/transformers/issues/24107">#24107</a>)</li> <li><a href="https://github.com/huggingface/transformers/commit/905892f09027cab690918c7766fea1bb51bcdd26"><code>905892f</code></a> Release: v4.30.0</li> <li><a href="https://github.com/huggingface/transformers/commit/c3572e6bfba13ce6dc3fedb05cd1a946ea109576"><code>c3572e6</code></a> Add AzureOpenAiAgent (<a href="https://redirect.github.com/huggingface/transformers/issues/24058">#24058</a>)</li> <li><a href="https://github.com/huggingface/transformers/commit/5eb3d3c7023ed0522d3c743ee2e13d896a3aa788"><code>5eb3d3c</code></a> Up pinned accelerate version (<a href="https://redirect.github.com/huggingface/transformers/issues/24089">#24089</a>)</li> <li><a href="https://github.com/huggingface/transformers/commit/d1c039e39864a41f6eb8b770a65f123c40164ea5"><code>d1c039e</code></a> fix accelerator prepare during eval only mode (<a href="https://redirect.github.com/huggingface/transformers/issues/24014">#24014</a>)</li> <li><a 
href="https://github.com/huggingface/transformers/commit/2c887cf8e0cb1ac96d28361ff3235a77f83c36ee"><code>2c887cf</code></a> Do not prepare lr scheduler as it as the right number of steps (<a href="https://redirect.github.com/huggingface/transformers/issues/24088">#24088</a>)</li> <li><a href="https://github.com/huggingface/transformers/commit/12298cb65c7e9d615b749dde935a0b4966f4ae49"><code>12298cb</code></a> fix executable batch size issue (<a href="https://redirect.github.com/huggingface/transformers/issues/24067">#24067</a>)</li> <li>Additional commits viewable in <a href="https://github.com/huggingface/transformers/compare/v4.21.1...v4.30.0">compare view</a></li> </ul> </details> <br /> [![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=transformers&package-manager=pip&previous-version=4.21.1&new-version=4.30.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores) Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`. [//]: # (dependabot-automerge-start) [//]: # (dependabot-automerge-end) --- <details> <summary>Dependabot commands and options</summary> <br /> You can trigger Dependabot actions by commenting on this PR: - `@dependabot rebase` will rebase this PR - `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it - `@dependabot merge` will merge this PR after your CI passes on it - `@dependabot squash and merge` will squash and merge this PR after your CI passes on it - `@dependabot cancel merge` will cancel a previously requested merge and block automerging - `@dependabot reopen` will reopen this PR if it is closed - `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually - `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself) You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts). </details>
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24220/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24220/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24220", "html_url": "https://github.com/huggingface/transformers/pull/24220", "diff_url": "https://github.com/huggingface/transformers/pull/24220.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24220.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/24219
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24219/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24219/comments
https://api.github.com/repos/huggingface/transformers/issues/24219/events
https://github.com/huggingface/transformers/pull/24219
1,753,634,437
PR_kwDOCUB6oc5S0gbF
24,219
Bump transformers from 4.19.0 to 4.30.0 in /examples/research_projects/codeparrot
{ "login": "dependabot[bot]", "id": 49699333, "node_id": "MDM6Qm90NDk2OTkzMzM=", "avatar_url": "https://avatars.githubusercontent.com/in/29110?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dependabot%5Bbot%5D", "html_url": "https://github.com/apps/dependabot", "followers_url": "https://api.github.com/users/dependabot%5Bbot%5D/followers", "following_url": "https://api.github.com/users/dependabot%5Bbot%5D/following{/other_user}", "gists_url": "https://api.github.com/users/dependabot%5Bbot%5D/gists{/gist_id}", "starred_url": "https://api.github.com/users/dependabot%5Bbot%5D/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dependabot%5Bbot%5D/subscriptions", "organizations_url": "https://api.github.com/users/dependabot%5Bbot%5D/orgs", "repos_url": "https://api.github.com/users/dependabot%5Bbot%5D/repos", "events_url": "https://api.github.com/users/dependabot%5Bbot%5D/events{/privacy}", "received_events_url": "https://api.github.com/users/dependabot%5Bbot%5D/received_events", "type": "Bot", "site_admin": false }
[ { "id": 1905493434, "node_id": "MDU6TGFiZWwxOTA1NDkzNDM0", "url": "https://api.github.com/repos/huggingface/transformers/labels/dependencies", "name": "dependencies", "color": "0366d6", "default": false, "description": "Pull requests that update a dependency file" } ]
closed
false
null
[]
[ "@dependabot ignore this major version", "OK, I won't notify you about version 4.x.x again, unless you re-open this PR. 😢" ]
1,686
1,686
1,686
CONTRIBUTOR
null
Bumps [transformers](https://github.com/huggingface/transformers) from 4.19.0 to 4.30.0. <details> <summary>Release notes</summary> <p><em>Sourced from <a href="https://github.com/huggingface/transformers/releases">transformers's releases</a>.</em></p> <blockquote> <h2>v4.30.0: 100k, Agents improvements, Safetensors core dependency, Swiftformer, Autoformer, MobileViTv2, timm-as-a-backbone</h2> <h2>100k</h2> <p>Transformers has just reached 100k stars on GitHub, and to celebrate we wanted to highlight 100 projects in the vicinity of <code>transformers</code> and we have decided to create an <a href="https://github.com/huggingface/transformers/blob/main/awesome-transformers.md">awesome-transformers</a> page to do just that.</p> <p>We accept PRs to add projects to the list!</p> <ul> <li>Top 100 by <a href="https://github.com/LysandreJik"><code>@​LysandreJik</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/22912">#22912</a></li> <li>Add LlamaIndex to awesome-transformers.md by <a href="https://github.com/ravi03071991"><code>@​ravi03071991</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23484">#23484</a></li> <li>add cleanlab to awesome-transformers tools list by <a href="https://github.com/jwmueller"><code>@​jwmueller</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23440">#23440</a></li> </ul> <h2>4-bit quantization and QLoRA</h2> <p>By leveraging the <code>bitsandbytes</code> library by <a href="https://github.com/TimDettmers"><code>@​TimDettmers</code></a>, we add 4-bit support to <code>transformers</code> models!</p> <ul> <li>4-bit QLoRA via bitsandbytes (4-bit base model + LoRA) by <a href="https://github.com/TimDettmers"><code>@​TimDettmers</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23479">#23479</a></li> </ul> <h2>Agents</h2> <p>The Agents framework has been improved and continues to be stabilized. Among bug fixes, here are the important new features that were added:</p> <ul> <li>Local agent capabilities, to load a generative model directly from <code>transformers</code> instead of relying on APIs.</li> <li>Prompts are now hosted on the Hub, which means that anyone can fork the prompts and update them with theirs, to let other community contributors re-use them</li> <li>We add an <code>AzureOpenAiAgent</code> class to support Azure OpenAI agents.</li> </ul> <ul> <li>Add local agent by <a href="https://github.com/sgugger"><code>@​sgugger</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23438">#23438</a></li> <li>Enable prompts on the Hub by <a href="https://github.com/sgugger"><code>@​sgugger</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23662">#23662</a></li> <li>Add AzureOpenAiAgent by <a href="https://github.com/sgugger"><code>@​sgugger</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/24058">#24058</a></li> </ul> <h2>Safetensors</h2> <p>The <code>safetensors</code> library is a safe serialization framework for machine learning tensors. It has been audited and will become the default serialization framework for several organizations (Hugging Face, EleutherAI, Stability AI).</p> <p>It has now become a core dependency of <code>transformers</code>.</p> <ul> <li>Making <code>safetensors</code> a core dependency. 
by <a href="https://github.com/Narsil"><code>@​Narsil</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23254">#23254</a></li> </ul> <h2>New models</h2> <h3>Swiftformer</h3> <p>The SwiftFormer paper introduces a novel efficient additive attention mechanism that effectively replaces the quadratic matrix multiplication operations in the self-attention computation with linear element-wise multiplications. A series of models called ‘SwiftFormer’ is built based on this, which achieves state-of-the-art performance in terms of both accuracy and mobile inference speed. Even their small variant achieves 78.5% top-1 ImageNet1K accuracy with only 0.8 ms latency on iPhone 14, which is more accurate and 2× faster compared to MobileViT-v2.</p> <ul> <li>Add swiftformer by <a href="https://github.com/shehanmunasinghe"><code>@​shehanmunasinghe</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/22686">#22686</a></li> </ul> <h3>Autoformer</h3> <p>This model augments the Transformer as a deep decomposition architecture, which can progressively decompose the trend and seasonal components during the forecasting process.</p> <ul> <li>[Time-Series] Autoformer model by <a href="https://github.com/elisim"><code>@​elisim</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/21891">#21891</a></li> </ul> <!-- raw HTML omitted --> </blockquote> <p>... (truncated)</p> </details> <details> <summary>Commits</summary> <ul> <li><a href="https://github.com/huggingface/transformers/commit/fe861e578f50dc9c06de33cd361d2f625017e624"><code>fe861e5</code></a> [<code>GPT2</code>] Add correct keys on <code>_keys_to_ignore_on_load_unexpected</code> on all chil...</li> <li><a href="https://github.com/huggingface/transformers/commit/b3e27a80578d022301611363b890107244e12354"><code>b3e27a8</code></a> Update the pin on Accelerate (<a href="https://redirect.github.com/huggingface/transformers/issues/24110">#24110</a>)</li> <li><a href="https://github.com/huggingface/transformers/commit/53e1f5cf66d320b9c809f3940c707b6fef435d2d"><code>53e1f5c</code></a> [<code>Trainer</code>] Correct behavior of <code>_load_best_model</code> for PEFT models (<a href="https://redirect.github.com/huggingface/transformers/issues/24103">#24103</a>)</li> <li><a href="https://github.com/huggingface/transformers/commit/17db177714b03103bb94cd71b7dd414bc63bffd5"><code>17db177</code></a> reset accelerate env variables after each test (<a href="https://redirect.github.com/huggingface/transformers/issues/24107">#24107</a>)</li> <li><a href="https://github.com/huggingface/transformers/commit/905892f09027cab690918c7766fea1bb51bcdd26"><code>905892f</code></a> Release: v4.30.0</li> <li><a href="https://github.com/huggingface/transformers/commit/c3572e6bfba13ce6dc3fedb05cd1a946ea109576"><code>c3572e6</code></a> Add AzureOpenAiAgent (<a href="https://redirect.github.com/huggingface/transformers/issues/24058">#24058</a>)</li> <li><a href="https://github.com/huggingface/transformers/commit/5eb3d3c7023ed0522d3c743ee2e13d896a3aa788"><code>5eb3d3c</code></a> Up pinned accelerate version (<a href="https://redirect.github.com/huggingface/transformers/issues/24089">#24089</a>)</li> <li><a href="https://github.com/huggingface/transformers/commit/d1c039e39864a41f6eb8b770a65f123c40164ea5"><code>d1c039e</code></a> fix accelerator prepare during eval only mode (<a href="https://redirect.github.com/huggingface/transformers/issues/24014">#24014</a>)</li> <li><a 
href="https://github.com/huggingface/transformers/commit/2c887cf8e0cb1ac96d28361ff3235a77f83c36ee"><code>2c887cf</code></a> Do not prepare lr scheduler as it as the right number of steps (<a href="https://redirect.github.com/huggingface/transformers/issues/24088">#24088</a>)</li> <li><a href="https://github.com/huggingface/transformers/commit/12298cb65c7e9d615b749dde935a0b4966f4ae49"><code>12298cb</code></a> fix executable batch size issue (<a href="https://redirect.github.com/huggingface/transformers/issues/24067">#24067</a>)</li> <li>Additional commits viewable in <a href="https://github.com/huggingface/transformers/compare/v4.19.0...v4.30.0">compare view</a></li> </ul> </details> <br /> [![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=transformers&package-manager=pip&previous-version=4.19.0&new-version=4.30.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores) Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`. [//]: # (dependabot-automerge-start) [//]: # (dependabot-automerge-end) --- <details> <summary>Dependabot commands and options</summary> <br /> You can trigger Dependabot actions by commenting on this PR: - `@dependabot rebase` will rebase this PR - `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it - `@dependabot merge` will merge this PR after your CI passes on it - `@dependabot squash and merge` will squash and merge this PR after your CI passes on it - `@dependabot cancel merge` will cancel a previously requested merge and block automerging - `@dependabot reopen` will reopen this PR if it is closed - `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually - `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself) You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts). </details>
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24219/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24219/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24219", "html_url": "https://github.com/huggingface/transformers/pull/24219", "diff_url": "https://github.com/huggingface/transformers/pull/24219.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24219.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/24218
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24218/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24218/comments
https://api.github.com/repos/huggingface/transformers/issues/24218/events
https://github.com/huggingface/transformers/pull/24218
1,753,626,407
PR_kwDOCUB6oc5S0ena
24,218
Bump transformers from 3.5.1 to 4.30.0 in /examples/research_projects/deebert
{ "login": "dependabot[bot]", "id": 49699333, "node_id": "MDM6Qm90NDk2OTkzMzM=", "avatar_url": "https://avatars.githubusercontent.com/in/29110?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dependabot%5Bbot%5D", "html_url": "https://github.com/apps/dependabot", "followers_url": "https://api.github.com/users/dependabot%5Bbot%5D/followers", "following_url": "https://api.github.com/users/dependabot%5Bbot%5D/following{/other_user}", "gists_url": "https://api.github.com/users/dependabot%5Bbot%5D/gists{/gist_id}", "starred_url": "https://api.github.com/users/dependabot%5Bbot%5D/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dependabot%5Bbot%5D/subscriptions", "organizations_url": "https://api.github.com/users/dependabot%5Bbot%5D/orgs", "repos_url": "https://api.github.com/users/dependabot%5Bbot%5D/repos", "events_url": "https://api.github.com/users/dependabot%5Bbot%5D/events{/privacy}", "received_events_url": "https://api.github.com/users/dependabot%5Bbot%5D/received_events", "type": "Bot", "site_admin": false }
[ { "id": 1905493434, "node_id": "MDU6TGFiZWwxOTA1NDkzNDM0", "url": "https://api.github.com/repos/huggingface/transformers/labels/dependencies", "name": "dependencies", "color": "0366d6", "default": false, "description": "Pull requests that update a dependency file" } ]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "@dependabot ignore this major version", "OK, I won't notify you about version 4.x.x again, unless you re-open this PR. 😢" ]
1,686
1,686
1,686
CONTRIBUTOR
null
Bumps [transformers](https://github.com/huggingface/transformers) from 3.5.1 to 4.30.0. <details> <summary>Release notes</summary> <p><em>Sourced from <a href="https://github.com/huggingface/transformers/releases">transformers's releases</a>.</em></p> <blockquote> <h2>v4.30.0: 100k, Agents improvements, Safetensors core dependency, Swiftformer, Autoformer, MobileViTv2, timm-as-a-backbone</h2> <h2>100k</h2> <p>Transformers has just reached 100k stars on GitHub, and to celebrate we wanted to highlight 100 projects in the vicinity of <code>transformers</code> and we have decided to create an <a href="https://github.com/huggingface/transformers/blob/main/awesome-transformers.md">awesome-transformers</a> page to do just that.</p> <p>We accept PRs to add projects to the list!</p> <ul> <li>Top 100 by <a href="https://github.com/LysandreJik"><code>@​LysandreJik</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/22912">#22912</a></li> <li>Add LlamaIndex to awesome-transformers.md by <a href="https://github.com/ravi03071991"><code>@​ravi03071991</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23484">#23484</a></li> <li>add cleanlab to awesome-transformers tools list by <a href="https://github.com/jwmueller"><code>@​jwmueller</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23440">#23440</a></li> </ul> <h2>4-bit quantization and QLoRA</h2> <p>By leveraging the <code>bitsandbytes</code> library by <a href="https://github.com/TimDettmers"><code>@​TimDettmers</code></a>, we add 4-bit support to <code>transformers</code> models!</p> <ul> <li>4-bit QLoRA via bitsandbytes (4-bit base model + LoRA) by <a href="https://github.com/TimDettmers"><code>@​TimDettmers</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23479">#23479</a></li> </ul> <h2>Agents</h2> <p>The Agents framework has been improved and continues to be stabilized. Among bug fixes, here are the important new features that were added:</p> <ul> <li>Local agent capabilities, to load a generative model directly from <code>transformers</code> instead of relying on APIs.</li> <li>Prompts are now hosted on the Hub, which means that anyone can fork the prompts and update them with theirs, to let other community contributors re-use them</li> <li>We add an <code>AzureOpenAiAgent</code> class to support Azure OpenAI agents.</li> </ul> <ul> <li>Add local agent by <a href="https://github.com/sgugger"><code>@​sgugger</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23438">#23438</a></li> <li>Enable prompts on the Hub by <a href="https://github.com/sgugger"><code>@​sgugger</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23662">#23662</a></li> <li>Add AzureOpenAiAgent by <a href="https://github.com/sgugger"><code>@​sgugger</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/24058">#24058</a></li> </ul> <h2>Safetensors</h2> <p>The <code>safetensors</code> library is a safe serialization framework for machine learning tensors. It has been audited and will become the default serialization framework for several organizations (Hugging Face, EleutherAI, Stability AI).</p> <p>It has now become a core dependency of <code>transformers</code>.</p> <ul> <li>Making <code>safetensors</code> a core dependency. 
by <a href="https://github.com/Narsil"><code>@​Narsil</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23254">#23254</a></li> </ul> <h2>New models</h2> <h3>Swiftformer</h3> <p>The SwiftFormer paper introduces a novel efficient additive attention mechanism that effectively replaces the quadratic matrix multiplication operations in the self-attention computation with linear element-wise multiplications. A series of models called ‘SwiftFormer’ is built based on this, which achieves state-of-the-art performance in terms of both accuracy and mobile inference speed. Even their small variant achieves 78.5% top-1 ImageNet1K accuracy with only 0.8 ms latency on iPhone 14, which is more accurate and 2× faster compared to MobileViT-v2.</p> <ul> <li>Add swiftformer by <a href="https://github.com/shehanmunasinghe"><code>@​shehanmunasinghe</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/22686">#22686</a></li> </ul> <h3>Autoformer</h3> <p>This model augments the Transformer as a deep decomposition architecture, which can progressively decompose the trend and seasonal components during the forecasting process.</p> <ul> <li>[Time-Series] Autoformer model by <a href="https://github.com/elisim"><code>@​elisim</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/21891">#21891</a></li> </ul> <!-- raw HTML omitted --> </blockquote> <p>... (truncated)</p> </details> <details> <summary>Commits</summary> <ul> <li><a href="https://github.com/huggingface/transformers/commit/fe861e578f50dc9c06de33cd361d2f625017e624"><code>fe861e5</code></a> [<code>GPT2</code>] Add correct keys on <code>_keys_to_ignore_on_load_unexpected</code> on all chil...</li> <li><a href="https://github.com/huggingface/transformers/commit/b3e27a80578d022301611363b890107244e12354"><code>b3e27a8</code></a> Update the pin on Accelerate (<a href="https://redirect.github.com/huggingface/transformers/issues/24110">#24110</a>)</li> <li><a href="https://github.com/huggingface/transformers/commit/53e1f5cf66d320b9c809f3940c707b6fef435d2d"><code>53e1f5c</code></a> [<code>Trainer</code>] Correct behavior of <code>_load_best_model</code> for PEFT models (<a href="https://redirect.github.com/huggingface/transformers/issues/24103">#24103</a>)</li> <li><a href="https://github.com/huggingface/transformers/commit/17db177714b03103bb94cd71b7dd414bc63bffd5"><code>17db177</code></a> reset accelerate env variables after each test (<a href="https://redirect.github.com/huggingface/transformers/issues/24107">#24107</a>)</li> <li><a href="https://github.com/huggingface/transformers/commit/905892f09027cab690918c7766fea1bb51bcdd26"><code>905892f</code></a> Release: v4.30.0</li> <li><a href="https://github.com/huggingface/transformers/commit/c3572e6bfba13ce6dc3fedb05cd1a946ea109576"><code>c3572e6</code></a> Add AzureOpenAiAgent (<a href="https://redirect.github.com/huggingface/transformers/issues/24058">#24058</a>)</li> <li><a href="https://github.com/huggingface/transformers/commit/5eb3d3c7023ed0522d3c743ee2e13d896a3aa788"><code>5eb3d3c</code></a> Up pinned accelerate version (<a href="https://redirect.github.com/huggingface/transformers/issues/24089">#24089</a>)</li> <li><a href="https://github.com/huggingface/transformers/commit/d1c039e39864a41f6eb8b770a65f123c40164ea5"><code>d1c039e</code></a> fix accelerator prepare during eval only mode (<a href="https://redirect.github.com/huggingface/transformers/issues/24014">#24014</a>)</li> <li><a 
href="https://github.com/huggingface/transformers/commit/2c887cf8e0cb1ac96d28361ff3235a77f83c36ee"><code>2c887cf</code></a> Do not prepare lr scheduler as it as the right number of steps (<a href="https://redirect.github.com/huggingface/transformers/issues/24088">#24088</a>)</li> <li><a href="https://github.com/huggingface/transformers/commit/12298cb65c7e9d615b749dde935a0b4966f4ae49"><code>12298cb</code></a> fix executable batch size issue (<a href="https://redirect.github.com/huggingface/transformers/issues/24067">#24067</a>)</li> <li>Additional commits viewable in <a href="https://github.com/huggingface/transformers/compare/v3.5.1...v4.30.0">compare view</a></li> </ul> </details> <br /> [![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=transformers&package-manager=pip&previous-version=3.5.1&new-version=4.30.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores) Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`. [//]: # (dependabot-automerge-start) [//]: # (dependabot-automerge-end) --- <details> <summary>Dependabot commands and options</summary> <br /> You can trigger Dependabot actions by commenting on this PR: - `@dependabot rebase` will rebase this PR - `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it - `@dependabot merge` will merge this PR after your CI passes on it - `@dependabot squash and merge` will squash and merge this PR after your CI passes on it - `@dependabot cancel merge` will cancel a previously requested merge and block automerging - `@dependabot reopen` will reopen this PR if it is closed - `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually - `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself) You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts). </details>
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24218/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24218/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24218", "html_url": "https://github.com/huggingface/transformers/pull/24218", "diff_url": "https://github.com/huggingface/transformers/pull/24218.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24218.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/24217
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24217/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24217/comments
https://api.github.com/repos/huggingface/transformers/issues/24217/events
https://github.com/huggingface/transformers/pull/24217
1,753,626,325
PR_kwDOCUB6oc5S0emR
24,217
Bump transformers from 3.5.1 to 4.30.0 in /examples/research_projects/bertology
{ "login": "dependabot[bot]", "id": 49699333, "node_id": "MDM6Qm90NDk2OTkzMzM=", "avatar_url": "https://avatars.githubusercontent.com/in/29110?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dependabot%5Bbot%5D", "html_url": "https://github.com/apps/dependabot", "followers_url": "https://api.github.com/users/dependabot%5Bbot%5D/followers", "following_url": "https://api.github.com/users/dependabot%5Bbot%5D/following{/other_user}", "gists_url": "https://api.github.com/users/dependabot%5Bbot%5D/gists{/gist_id}", "starred_url": "https://api.github.com/users/dependabot%5Bbot%5D/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dependabot%5Bbot%5D/subscriptions", "organizations_url": "https://api.github.com/users/dependabot%5Bbot%5D/orgs", "repos_url": "https://api.github.com/users/dependabot%5Bbot%5D/repos", "events_url": "https://api.github.com/users/dependabot%5Bbot%5D/events{/privacy}", "received_events_url": "https://api.github.com/users/dependabot%5Bbot%5D/received_events", "type": "Bot", "site_admin": false }
[ { "id": 1905493434, "node_id": "MDU6TGFiZWwxOTA1NDkzNDM0", "url": "https://api.github.com/repos/huggingface/transformers/labels/dependencies", "name": "dependencies", "color": "0366d6", "default": false, "description": "Pull requests that update a dependency file" } ]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "@dependabot ignore this major version", "OK, I won't notify you about version 4.x.x again, unless you re-open this PR. 😢" ]
1,686
1,686
1,686
CONTRIBUTOR
null
Bumps [transformers](https://github.com/huggingface/transformers) from 3.5.1 to 4.30.0. <details> <summary>Release notes</summary> <p><em>Sourced from <a href="https://github.com/huggingface/transformers/releases">transformers's releases</a>.</em></p> <blockquote> <h2>v4.30.0: 100k, Agents improvements, Safetensors core dependency, Swiftformer, Autoformer, MobileViTv2, timm-as-a-backbone</h2> <h2>100k</h2> <p>Transformers has just reached 100k stars on GitHub, and to celebrate we wanted to highlight 100 projects in the vicinity of <code>transformers</code> and we have decided to create an <a href="https://github.com/huggingface/transformers/blob/main/awesome-transformers.md">awesome-transformers</a> page to do just that.</p> <p>We accept PRs to add projects to the list!</p> <ul> <li>Top 100 by <a href="https://github.com/LysandreJik"><code>@​LysandreJik</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/22912">#22912</a></li> <li>Add LlamaIndex to awesome-transformers.md by <a href="https://github.com/ravi03071991"><code>@​ravi03071991</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23484">#23484</a></li> <li>add cleanlab to awesome-transformers tools list by <a href="https://github.com/jwmueller"><code>@​jwmueller</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23440">#23440</a></li> </ul> <h2>4-bit quantization and QLoRA</h2> <p>By leveraging the <code>bitsandbytes</code> library by <a href="https://github.com/TimDettmers"><code>@​TimDettmers</code></a>, we add 4-bit support to <code>transformers</code> models!</p> <ul> <li>4-bit QLoRA via bitsandbytes (4-bit base model + LoRA) by <a href="https://github.com/TimDettmers"><code>@​TimDettmers</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23479">#23479</a></li> </ul> <h2>Agents</h2> <p>The Agents framework has been improved and continues to be stabilized. Among bug fixes, here are the important new features that were added:</p> <ul> <li>Local agent capabilities, to load a generative model directly from <code>transformers</code> instead of relying on APIs.</li> <li>Prompts are now hosted on the Hub, which means that anyone can fork the prompts and update them with theirs, to let other community contributors re-use them</li> <li>We add an <code>AzureOpenAiAgent</code> class to support Azure OpenAI agents.</li> </ul> <ul> <li>Add local agent by <a href="https://github.com/sgugger"><code>@​sgugger</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23438">#23438</a></li> <li>Enable prompts on the Hub by <a href="https://github.com/sgugger"><code>@​sgugger</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23662">#23662</a></li> <li>Add AzureOpenAiAgent by <a href="https://github.com/sgugger"><code>@​sgugger</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/24058">#24058</a></li> </ul> <h2>Safetensors</h2> <p>The <code>safetensors</code> library is a safe serialization framework for machine learning tensors. It has been audited and will become the default serialization framework for several organizations (Hugging Face, EleutherAI, Stability AI).</p> <p>It has now become a core dependency of <code>transformers</code>.</p> <ul> <li>Making <code>safetensors</code> a core dependency. 
by <a href="https://github.com/Narsil"><code>@​Narsil</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23254">#23254</a></li> </ul> <h2>New models</h2> <h3>Swiftformer</h3> <p>The SwiftFormer paper introduces a novel efficient additive attention mechanism that effectively replaces the quadratic matrix multiplication operations in the self-attention computation with linear element-wise multiplications. A series of models called ‘SwiftFormer’ is built based on this, which achieves state-of-the-art performance in terms of both accuracy and mobile inference speed. Even their small variant achieves 78.5% top-1 ImageNet1K accuracy with only 0.8 ms latency on iPhone 14, which is more accurate and 2× faster compared to MobileViT-v2.</p> <ul> <li>Add swiftformer by <a href="https://github.com/shehanmunasinghe"><code>@​shehanmunasinghe</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/22686">#22686</a></li> </ul> <h3>Autoformer</h3> <p>This model augments the Transformer as a deep decomposition architecture, which can progressively decompose the trend and seasonal components during the forecasting process.</p> <ul> <li>[Time-Series] Autoformer model by <a href="https://github.com/elisim"><code>@​elisim</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/21891">#21891</a></li> </ul> <!-- raw HTML omitted --> </blockquote> <p>... (truncated)</p> </details> <details> <summary>Commits</summary> <ul> <li><a href="https://github.com/huggingface/transformers/commit/fe861e578f50dc9c06de33cd361d2f625017e624"><code>fe861e5</code></a> [<code>GPT2</code>] Add correct keys on <code>_keys_to_ignore_on_load_unexpected</code> on all chil...</li> <li><a href="https://github.com/huggingface/transformers/commit/b3e27a80578d022301611363b890107244e12354"><code>b3e27a8</code></a> Update the pin on Accelerate (<a href="https://redirect.github.com/huggingface/transformers/issues/24110">#24110</a>)</li> <li><a href="https://github.com/huggingface/transformers/commit/53e1f5cf66d320b9c809f3940c707b6fef435d2d"><code>53e1f5c</code></a> [<code>Trainer</code>] Correct behavior of <code>_load_best_model</code> for PEFT models (<a href="https://redirect.github.com/huggingface/transformers/issues/24103">#24103</a>)</li> <li><a href="https://github.com/huggingface/transformers/commit/17db177714b03103bb94cd71b7dd414bc63bffd5"><code>17db177</code></a> reset accelerate env variables after each test (<a href="https://redirect.github.com/huggingface/transformers/issues/24107">#24107</a>)</li> <li><a href="https://github.com/huggingface/transformers/commit/905892f09027cab690918c7766fea1bb51bcdd26"><code>905892f</code></a> Release: v4.30.0</li> <li><a href="https://github.com/huggingface/transformers/commit/c3572e6bfba13ce6dc3fedb05cd1a946ea109576"><code>c3572e6</code></a> Add AzureOpenAiAgent (<a href="https://redirect.github.com/huggingface/transformers/issues/24058">#24058</a>)</li> <li><a href="https://github.com/huggingface/transformers/commit/5eb3d3c7023ed0522d3c743ee2e13d896a3aa788"><code>5eb3d3c</code></a> Up pinned accelerate version (<a href="https://redirect.github.com/huggingface/transformers/issues/24089">#24089</a>)</li> <li><a href="https://github.com/huggingface/transformers/commit/d1c039e39864a41f6eb8b770a65f123c40164ea5"><code>d1c039e</code></a> fix accelerator prepare during eval only mode (<a href="https://redirect.github.com/huggingface/transformers/issues/24014">#24014</a>)</li> <li><a 
href="https://github.com/huggingface/transformers/commit/2c887cf8e0cb1ac96d28361ff3235a77f83c36ee"><code>2c887cf</code></a> Do not prepare lr scheduler as it as the right number of steps (<a href="https://redirect.github.com/huggingface/transformers/issues/24088">#24088</a>)</li> <li><a href="https://github.com/huggingface/transformers/commit/12298cb65c7e9d615b749dde935a0b4966f4ae49"><code>12298cb</code></a> fix executable batch size issue (<a href="https://redirect.github.com/huggingface/transformers/issues/24067">#24067</a>)</li> <li>Additional commits viewable in <a href="https://github.com/huggingface/transformers/compare/v3.5.1...v4.30.0">compare view</a></li> </ul> </details> <br /> [![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=transformers&package-manager=pip&previous-version=3.5.1&new-version=4.30.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores) Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`. [//]: # (dependabot-automerge-start) [//]: # (dependabot-automerge-end) --- <details> <summary>Dependabot commands and options</summary> <br /> You can trigger Dependabot actions by commenting on this PR: - `@dependabot rebase` will rebase this PR - `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it - `@dependabot merge` will merge this PR after your CI passes on it - `@dependabot squash and merge` will squash and merge this PR after your CI passes on it - `@dependabot cancel merge` will cancel a previously requested merge and block automerging - `@dependabot reopen` will reopen this PR if it is closed - `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually - `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself) You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts). </details>
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24217/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24217/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24217", "html_url": "https://github.com/huggingface/transformers/pull/24217", "diff_url": "https://github.com/huggingface/transformers/pull/24217.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24217.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/24216
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24216/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24216/comments
https://api.github.com/repos/huggingface/transformers/issues/24216/events
https://github.com/huggingface/transformers/pull/24216
1,753,625,729
PR_kwDOCUB6oc5S0ed9
24,216
Bump transformers from 3.5.1 to 4.30.0 in /examples/research_projects/pplm
{ "login": "dependabot[bot]", "id": 49699333, "node_id": "MDM6Qm90NDk2OTkzMzM=", "avatar_url": "https://avatars.githubusercontent.com/in/29110?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dependabot%5Bbot%5D", "html_url": "https://github.com/apps/dependabot", "followers_url": "https://api.github.com/users/dependabot%5Bbot%5D/followers", "following_url": "https://api.github.com/users/dependabot%5Bbot%5D/following{/other_user}", "gists_url": "https://api.github.com/users/dependabot%5Bbot%5D/gists{/gist_id}", "starred_url": "https://api.github.com/users/dependabot%5Bbot%5D/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dependabot%5Bbot%5D/subscriptions", "organizations_url": "https://api.github.com/users/dependabot%5Bbot%5D/orgs", "repos_url": "https://api.github.com/users/dependabot%5Bbot%5D/repos", "events_url": "https://api.github.com/users/dependabot%5Bbot%5D/events{/privacy}", "received_events_url": "https://api.github.com/users/dependabot%5Bbot%5D/received_events", "type": "Bot", "site_admin": false }
[ { "id": 1905493434, "node_id": "MDU6TGFiZWwxOTA1NDkzNDM0", "url": "https://api.github.com/repos/huggingface/transformers/labels/dependencies", "name": "dependencies", "color": "0366d6", "default": false, "description": "Pull requests that update a dependency file" } ]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "@dependabot ignore this major version", "OK, I won't notify you about version 4.x.x again, unless you re-open this PR. 😢" ]
1,686
1,686
1,686
CONTRIBUTOR
null
Bumps [transformers](https://github.com/huggingface/transformers) from 3.5.1 to 4.30.0. <details> <summary>Release notes</summary> <p><em>Sourced from <a href="https://github.com/huggingface/transformers/releases">transformers's releases</a>.</em></p> <blockquote> <h2>v4.30.0: 100k, Agents improvements, Safetensors core dependency, Swiftformer, Autoformer, MobileViTv2, timm-as-a-backbone</h2> <h2>100k</h2> <p>Transformers has just reached 100k stars on GitHub, and to celebrate we wanted to highlight 100 projects in the vicinity of <code>transformers</code> and we have decided to create an <a href="https://github.com/huggingface/transformers/blob/main/awesome-transformers.md">awesome-transformers</a> page to do just that.</p> <p>We accept PRs to add projects to the list!</p> <ul> <li>Top 100 by <a href="https://github.com/LysandreJik"><code>@​LysandreJik</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/22912">#22912</a></li> <li>Add LlamaIndex to awesome-transformers.md by <a href="https://github.com/ravi03071991"><code>@​ravi03071991</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23484">#23484</a></li> <li>add cleanlab to awesome-transformers tools list by <a href="https://github.com/jwmueller"><code>@​jwmueller</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23440">#23440</a></li> </ul> <h2>4-bit quantization and QLoRA</h2> <p>By leveraging the <code>bitsandbytes</code> library by <a href="https://github.com/TimDettmers"><code>@​TimDettmers</code></a>, we add 4-bit support to <code>transformers</code> models!</p> <ul> <li>4-bit QLoRA via bitsandbytes (4-bit base model + LoRA) by <a href="https://github.com/TimDettmers"><code>@​TimDettmers</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23479">#23479</a></li> </ul> <h2>Agents</h2> <p>The Agents framework has been improved and continues to be stabilized. Among bug fixes, here are the important new features that were added:</p> <ul> <li>Local agent capabilities, to load a generative model directly from <code>transformers</code> instead of relying on APIs.</li> <li>Prompts are now hosted on the Hub, which means that anyone can fork the prompts and update them with theirs, to let other community contributors re-use them</li> <li>We add an <code>AzureOpenAiAgent</code> class to support Azure OpenAI agents.</li> </ul> <ul> <li>Add local agent by <a href="https://github.com/sgugger"><code>@​sgugger</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23438">#23438</a></li> <li>Enable prompts on the Hub by <a href="https://github.com/sgugger"><code>@​sgugger</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23662">#23662</a></li> <li>Add AzureOpenAiAgent by <a href="https://github.com/sgugger"><code>@​sgugger</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/24058">#24058</a></li> </ul> <h2>Safetensors</h2> <p>The <code>safetensors</code> library is a safe serialization framework for machine learning tensors. It has been audited and will become the default serialization framework for several organizations (Hugging Face, EleutherAI, Stability AI).</p> <p>It has now become a core dependency of <code>transformers</code>.</p> <ul> <li>Making <code>safetensors</code> a core dependency. 
by <a href="https://github.com/Narsil"><code>@​Narsil</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23254">#23254</a></li> </ul> <h2>New models</h2> <h3>Swiftformer</h3> <p>The SwiftFormer paper introduces a novel efficient additive attention mechanism that effectively replaces the quadratic matrix multiplication operations in the self-attention computation with linear element-wise multiplications. A series of models called ‘SwiftFormer’ is built based on this, which achieves state-of-the-art performance in terms of both accuracy and mobile inference speed. Even their small variant achieves 78.5% top-1 ImageNet1K accuracy with only 0.8 ms latency on iPhone 14, which is more accurate and 2× faster compared to MobileViT-v2.</p> <ul> <li>Add swiftformer by <a href="https://github.com/shehanmunasinghe"><code>@​shehanmunasinghe</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/22686">#22686</a></li> </ul> <h3>Autoformer</h3> <p>This model augments the Transformer as a deep decomposition architecture, which can progressively decompose the trend and seasonal components during the forecasting process.</p> <ul> <li>[Time-Series] Autoformer model by <a href="https://github.com/elisim"><code>@​elisim</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/21891">#21891</a></li> </ul> <!-- raw HTML omitted --> </blockquote> <p>... (truncated)</p> </details> <details> <summary>Commits</summary> <ul> <li><a href="https://github.com/huggingface/transformers/commit/fe861e578f50dc9c06de33cd361d2f625017e624"><code>fe861e5</code></a> [<code>GPT2</code>] Add correct keys on <code>_keys_to_ignore_on_load_unexpected</code> on all chil...</li> <li><a href="https://github.com/huggingface/transformers/commit/b3e27a80578d022301611363b890107244e12354"><code>b3e27a8</code></a> Update the pin on Accelerate (<a href="https://redirect.github.com/huggingface/transformers/issues/24110">#24110</a>)</li> <li><a href="https://github.com/huggingface/transformers/commit/53e1f5cf66d320b9c809f3940c707b6fef435d2d"><code>53e1f5c</code></a> [<code>Trainer</code>] Correct behavior of <code>_load_best_model</code> for PEFT models (<a href="https://redirect.github.com/huggingface/transformers/issues/24103">#24103</a>)</li> <li><a href="https://github.com/huggingface/transformers/commit/17db177714b03103bb94cd71b7dd414bc63bffd5"><code>17db177</code></a> reset accelerate env variables after each test (<a href="https://redirect.github.com/huggingface/transformers/issues/24107">#24107</a>)</li> <li><a href="https://github.com/huggingface/transformers/commit/905892f09027cab690918c7766fea1bb51bcdd26"><code>905892f</code></a> Release: v4.30.0</li> <li><a href="https://github.com/huggingface/transformers/commit/c3572e6bfba13ce6dc3fedb05cd1a946ea109576"><code>c3572e6</code></a> Add AzureOpenAiAgent (<a href="https://redirect.github.com/huggingface/transformers/issues/24058">#24058</a>)</li> <li><a href="https://github.com/huggingface/transformers/commit/5eb3d3c7023ed0522d3c743ee2e13d896a3aa788"><code>5eb3d3c</code></a> Up pinned accelerate version (<a href="https://redirect.github.com/huggingface/transformers/issues/24089">#24089</a>)</li> <li><a href="https://github.com/huggingface/transformers/commit/d1c039e39864a41f6eb8b770a65f123c40164ea5"><code>d1c039e</code></a> fix accelerator prepare during eval only mode (<a href="https://redirect.github.com/huggingface/transformers/issues/24014">#24014</a>)</li> <li><a 
href="https://github.com/huggingface/transformers/commit/2c887cf8e0cb1ac96d28361ff3235a77f83c36ee"><code>2c887cf</code></a> Do not prepare lr scheduler as it as the right number of steps (<a href="https://redirect.github.com/huggingface/transformers/issues/24088">#24088</a>)</li> <li><a href="https://github.com/huggingface/transformers/commit/12298cb65c7e9d615b749dde935a0b4966f4ae49"><code>12298cb</code></a> fix executable batch size issue (<a href="https://redirect.github.com/huggingface/transformers/issues/24067">#24067</a>)</li> <li>Additional commits viewable in <a href="https://github.com/huggingface/transformers/compare/v3.5.1...v4.30.0">compare view</a></li> </ul> </details> <br /> [![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=transformers&package-manager=pip&previous-version=3.5.1&new-version=4.30.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores) Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`. [//]: # (dependabot-automerge-start) [//]: # (dependabot-automerge-end) --- <details> <summary>Dependabot commands and options</summary> <br /> You can trigger Dependabot actions by commenting on this PR: - `@dependabot rebase` will rebase this PR - `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it - `@dependabot merge` will merge this PR after your CI passes on it - `@dependabot squash and merge` will squash and merge this PR after your CI passes on it - `@dependabot cancel merge` will cancel a previously requested merge and block automerging - `@dependabot reopen` will reopen this PR if it is closed - `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually - `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself) You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts). </details>
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24216/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24216/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24216", "html_url": "https://github.com/huggingface/transformers/pull/24216", "diff_url": "https://github.com/huggingface/transformers/pull/24216.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24216.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/24215
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24215/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24215/comments
https://api.github.com/repos/huggingface/transformers/issues/24215/events
https://github.com/huggingface/transformers/pull/24215
1,753,625,727
PR_kwDOCUB6oc5S0ed8
24,215
Bump transformers from 3.5.1 to 4.30.0 in /examples/research_projects/adversarial
{ "login": "dependabot[bot]", "id": 49699333, "node_id": "MDM6Qm90NDk2OTkzMzM=", "avatar_url": "https://avatars.githubusercontent.com/in/29110?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dependabot%5Bbot%5D", "html_url": "https://github.com/apps/dependabot", "followers_url": "https://api.github.com/users/dependabot%5Bbot%5D/followers", "following_url": "https://api.github.com/users/dependabot%5Bbot%5D/following{/other_user}", "gists_url": "https://api.github.com/users/dependabot%5Bbot%5D/gists{/gist_id}", "starred_url": "https://api.github.com/users/dependabot%5Bbot%5D/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dependabot%5Bbot%5D/subscriptions", "organizations_url": "https://api.github.com/users/dependabot%5Bbot%5D/orgs", "repos_url": "https://api.github.com/users/dependabot%5Bbot%5D/repos", "events_url": "https://api.github.com/users/dependabot%5Bbot%5D/events{/privacy}", "received_events_url": "https://api.github.com/users/dependabot%5Bbot%5D/received_events", "type": "Bot", "site_admin": false }
[ { "id": 1905493434, "node_id": "MDU6TGFiZWwxOTA1NDkzNDM0", "url": "https://api.github.com/repos/huggingface/transformers/labels/dependencies", "name": "dependencies", "color": "0366d6", "default": false, "description": "Pull requests that update a dependency file" } ]
closed
false
null
[]
[ "@dependabot ignore this major version", "OK, I won't notify you about version 4.x.x again, unless you re-open this PR. 😢" ]
1,686
1,686
1,686
CONTRIBUTOR
null
Bumps [transformers](https://github.com/huggingface/transformers) from 3.5.1 to 4.30.0. <details> <summary>Release notes</summary> <p><em>Sourced from <a href="https://github.com/huggingface/transformers/releases">transformers's releases</a>.</em></p> <blockquote> <h2>v4.30.0: 100k, Agents improvements, Safetensors core dependency, Swiftformer, Autoformer, MobileViTv2, timm-as-a-backbone</h2> <h2>100k</h2> <p>Transformers has just reached 100k stars on GitHub, and to celebrate we wanted to highlight 100 projects in the vicinity of <code>transformers</code> and we have decided to create an <a href="https://github.com/huggingface/transformers/blob/main/awesome-transformers.md">awesome-transformers</a> page to do just that.</p> <p>We accept PRs to add projects to the list!</p> <ul> <li>Top 100 by <a href="https://github.com/LysandreJik"><code>@​LysandreJik</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/22912">#22912</a></li> <li>Add LlamaIndex to awesome-transformers.md by <a href="https://github.com/ravi03071991"><code>@​ravi03071991</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23484">#23484</a></li> <li>add cleanlab to awesome-transformers tools list by <a href="https://github.com/jwmueller"><code>@​jwmueller</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23440">#23440</a></li> </ul> <h2>4-bit quantization and QLoRA</h2> <p>By leveraging the <code>bitsandbytes</code> library by <a href="https://github.com/TimDettmers"><code>@​TimDettmers</code></a>, we add 4-bit support to <code>transformers</code> models!</p> <ul> <li>4-bit QLoRA via bitsandbytes (4-bit base model + LoRA) by <a href="https://github.com/TimDettmers"><code>@​TimDettmers</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23479">#23479</a></li> </ul> <h2>Agents</h2> <p>The Agents framework has been improved and continues to be stabilized. Among bug fixes, here are the important new features that were added:</p> <ul> <li>Local agent capabilities, to load a generative model directly from <code>transformers</code> instead of relying on APIs.</li> <li>Prompts are now hosted on the Hub, which means that anyone can fork the prompts and update them with theirs, to let other community contributors re-use them</li> <li>We add an <code>AzureOpenAiAgent</code> class to support Azure OpenAI agents.</li> </ul> <ul> <li>Add local agent by <a href="https://github.com/sgugger"><code>@​sgugger</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23438">#23438</a></li> <li>Enable prompts on the Hub by <a href="https://github.com/sgugger"><code>@​sgugger</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23662">#23662</a></li> <li>Add AzureOpenAiAgent by <a href="https://github.com/sgugger"><code>@​sgugger</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/24058">#24058</a></li> </ul> <h2>Safetensors</h2> <p>The <code>safetensors</code> library is a safe serialization framework for machine learning tensors. It has been audited and will become the default serialization framework for several organizations (Hugging Face, EleutherAI, Stability AI).</p> <p>It has now become a core dependency of <code>transformers</code>.</p> <ul> <li>Making <code>safetensors</code> a core dependency. 
by <a href="https://github.com/Narsil"><code>@​Narsil</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23254">#23254</a></li> </ul> <h2>New models</h2> <h3>Swiftformer</h3> <p>The SwiftFormer paper introduces a novel efficient additive attention mechanism that effectively replaces the quadratic matrix multiplication operations in the self-attention computation with linear element-wise multiplications. A series of models called ‘SwiftFormer’ is built based on this, which achieves state-of-the-art performance in terms of both accuracy and mobile inference speed. Even their small variant achieves 78.5% top-1 ImageNet1K accuracy with only 0.8 ms latency on iPhone 14, which is more accurate and 2× faster compared to MobileViT-v2.</p> <ul> <li>Add swiftformer by <a href="https://github.com/shehanmunasinghe"><code>@​shehanmunasinghe</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/22686">#22686</a></li> </ul> <h3>Autoformer</h3> <p>This model augments the Transformer as a deep decomposition architecture, which can progressively decompose the trend and seasonal components during the forecasting process.</p> <ul> <li>[Time-Series] Autoformer model by <a href="https://github.com/elisim"><code>@​elisim</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/21891">#21891</a></li> </ul> <!-- raw HTML omitted --> </blockquote> <p>... (truncated)</p> </details> <details> <summary>Commits</summary> <ul> <li><a href="https://github.com/huggingface/transformers/commit/fe861e578f50dc9c06de33cd361d2f625017e624"><code>fe861e5</code></a> [<code>GPT2</code>] Add correct keys on <code>_keys_to_ignore_on_load_unexpected</code> on all chil...</li> <li><a href="https://github.com/huggingface/transformers/commit/b3e27a80578d022301611363b890107244e12354"><code>b3e27a8</code></a> Update the pin on Accelerate (<a href="https://redirect.github.com/huggingface/transformers/issues/24110">#24110</a>)</li> <li><a href="https://github.com/huggingface/transformers/commit/53e1f5cf66d320b9c809f3940c707b6fef435d2d"><code>53e1f5c</code></a> [<code>Trainer</code>] Correct behavior of <code>_load_best_model</code> for PEFT models (<a href="https://redirect.github.com/huggingface/transformers/issues/24103">#24103</a>)</li> <li><a href="https://github.com/huggingface/transformers/commit/17db177714b03103bb94cd71b7dd414bc63bffd5"><code>17db177</code></a> reset accelerate env variables after each test (<a href="https://redirect.github.com/huggingface/transformers/issues/24107">#24107</a>)</li> <li><a href="https://github.com/huggingface/transformers/commit/905892f09027cab690918c7766fea1bb51bcdd26"><code>905892f</code></a> Release: v4.30.0</li> <li><a href="https://github.com/huggingface/transformers/commit/c3572e6bfba13ce6dc3fedb05cd1a946ea109576"><code>c3572e6</code></a> Add AzureOpenAiAgent (<a href="https://redirect.github.com/huggingface/transformers/issues/24058">#24058</a>)</li> <li><a href="https://github.com/huggingface/transformers/commit/5eb3d3c7023ed0522d3c743ee2e13d896a3aa788"><code>5eb3d3c</code></a> Up pinned accelerate version (<a href="https://redirect.github.com/huggingface/transformers/issues/24089">#24089</a>)</li> <li><a href="https://github.com/huggingface/transformers/commit/d1c039e39864a41f6eb8b770a65f123c40164ea5"><code>d1c039e</code></a> fix accelerator prepare during eval only mode (<a href="https://redirect.github.com/huggingface/transformers/issues/24014">#24014</a>)</li> <li><a 
href="https://github.com/huggingface/transformers/commit/2c887cf8e0cb1ac96d28361ff3235a77f83c36ee"><code>2c887cf</code></a> Do not prepare lr scheduler as it as the right number of steps (<a href="https://redirect.github.com/huggingface/transformers/issues/24088">#24088</a>)</li> <li><a href="https://github.com/huggingface/transformers/commit/12298cb65c7e9d615b749dde935a0b4966f4ae49"><code>12298cb</code></a> fix executable batch size issue (<a href="https://redirect.github.com/huggingface/transformers/issues/24067">#24067</a>)</li> <li>Additional commits viewable in <a href="https://github.com/huggingface/transformers/compare/v3.5.1...v4.30.0">compare view</a></li> </ul> </details> <br /> [![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=transformers&package-manager=pip&previous-version=3.5.1&new-version=4.30.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores) Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`. [//]: # (dependabot-automerge-start) [//]: # (dependabot-automerge-end) --- <details> <summary>Dependabot commands and options</summary> <br /> You can trigger Dependabot actions by commenting on this PR: - `@dependabot rebase` will rebase this PR - `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it - `@dependabot merge` will merge this PR after your CI passes on it - `@dependabot squash and merge` will squash and merge this PR after your CI passes on it - `@dependabot cancel merge` will cancel a previously requested merge and block automerging - `@dependabot reopen` will reopen this PR if it is closed - `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually - `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself) You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts). </details>
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24215/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24215/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24215", "html_url": "https://github.com/huggingface/transformers/pull/24215", "diff_url": "https://github.com/huggingface/transformers/pull/24215.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24215.patch", "merged_at": null }