url | repository_url | labels_url | comments_url | events_url | html_url | id | node_id | number | title | user | labels | state | locked | assignee | assignees | comments | created_at | updated_at | closed_at | author_association | active_lock_reason | body | reactions | timeline_url | state_reason | draft | pull_request |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/23403
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23403/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23403/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23403/events
|
https://github.com/huggingface/transformers/pull/23403
| 1,712,217,589 |
PR_kwDOCUB6oc5QoEkN
| 23,403 |
Generate: add test to check KV format
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,684 | 1,684 | 1,684 |
MEMBER
| null |
# What does this PR do?
This PR adds a test to ensure our `.generate()`-compatible models have a standard KV cache format. Advanced generation methods (e.g. contrastive search or assisted generation) rely on cache manipulation, so it quickly becomes unmanageable if we don't stick to a conventional format (or a set of conventional formats).
I expect that future non-standard KV formats will have to be well justified in their PRs, since introducing one will now imply skipping this test.
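For reference, here is a minimal sketch of the kind of shape check such a test can perform on the conventional cache format (illustrative only, using `gpt2` as an example checkpoint; this is not the actual test added by the PR):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # any decoder-only model that uses the conventional cache format
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

inputs = tokenizer("Hello world", return_tensors="pt")
outputs = model(**inputs, use_cache=True)

config = model.config
num_layers = config.num_hidden_layers
num_heads = config.num_attention_heads
head_dim = config.hidden_size // num_heads
seq_len = inputs.input_ids.shape[1]

# Conventional format: one (key, value) pair per layer, each of shape
# (batch_size, num_heads, seq_len, head_dim).
assert len(outputs.past_key_values) == num_layers
for key, value in outputs.past_key_values:
    assert key.shape == (1, num_heads, seq_len, head_dim)
    assert value.shape == (1, num_heads, seq_len, head_dim)
```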
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23403/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23403/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23403",
"html_url": "https://github.com/huggingface/transformers/pull/23403",
"diff_url": "https://github.com/huggingface/transformers/pull/23403.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23403.patch",
"merged_at": 1684261700000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23402
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23402/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23402/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23402/events
|
https://github.com/huggingface/transformers/pull/23402
| 1,712,111,071 |
PR_kwDOCUB6oc5QnteB
| 23,402 |
Update `ConvNextV2ModelIntegrationTest::test_inference_image_classification_head`
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,684 | 1,684 | 1,684 |
COLLABORATOR
| null |
# What does this PR do?
Required as this test is currently failing after PR #23122
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23402/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23402/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23402",
"html_url": "https://github.com/huggingface/transformers/pull/23402",
"diff_url": "https://github.com/huggingface/transformers/pull/23402.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23402.patch",
"merged_at": 1684272912000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23401
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23401/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23401/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23401/events
|
https://github.com/huggingface/transformers/issues/23401
| 1,712,089,251 |
I_kwDOCUB6oc5mDGij
| 23,401 |
batch generation with Llama: IndexError: index out of range in self
|
{
"login": "arian-askari",
"id": 9359629,
"node_id": "MDQ6VXNlcjkzNTk2Mjk=",
"avatar_url": "https://avatars.githubusercontent.com/u/9359629?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/arian-askari",
"html_url": "https://github.com/arian-askari",
"followers_url": "https://api.github.com/users/arian-askari/followers",
"following_url": "https://api.github.com/users/arian-askari/following{/other_user}",
"gists_url": "https://api.github.com/users/arian-askari/gists{/gist_id}",
"starred_url": "https://api.github.com/users/arian-askari/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/arian-askari/subscriptions",
"organizations_url": "https://api.github.com/users/arian-askari/orgs",
"repos_url": "https://api.github.com/users/arian-askari/repos",
"events_url": "https://api.github.com/users/arian-askari/events{/privacy}",
"received_events_url": "https://api.github.com/users/arian-askari/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hey @arian-askari ๐ \r\n\r\nThe exception pops up because you are defining a new token (`[PAD]`), which causes the exception in the embedding layer (it doesn't know the embeddings for the new token until you define them). \r\n\r\nMost decoder-only models have the same \"issue\" where the padding token is not defined. The standard workaround is as follows:\r\n```py\r\nfrom transformers import AutoTokenizer, LlamaForCausalLM\r\nmodel_name = \"huggyllama/llama-7b\"\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(model_name, padding_side=\"left\")\r\nmodel = LlamaForCausalLM.from_pretrained(model_name)\r\ntokenizer.pad_token = tokenizer.eos_token\r\nmodel.generation_config.pad_token_id = model.generation_config.eos_token_id\r\n\r\nprompt = \"Hey, are you consciours? Can you talk to me?\"\r\ninputs = tokenizer([prompt, prompt + \" blah blah\"], return_tensors=\"pt\", padding=True, truncation=True)\r\n\r\n# Generate\r\ngenerate_ids = model.generate(inputs.input_ids, max_length=30)\r\ntokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]\r\n```\r\n\r\nPlease note that I added `padding_side=\"left\"` on the tokenizer -- it is critical for decoder-only models like Llama!",
"Hey @gante,\r\n\r\nThanks a lot! It got fixed with the suggested modification. ",
"@gante, why need set padding_side=\"left\" for decoder-only models?",
"@akk-123 these models predict the next token at any given point of the sequence, using the embedding of the latest token as a critical input. If your latest token is a pad token and/or if it is masked by the attention mask, your next token will be unrelated to the sequence -- mostly because the models are not trained to handle this case.\r\n\r\nLeft-padding ensures the phenomenon above doesn't occur.",
"@gante thanks a lot! but it seems origin llama model padding side is 'right' ?",
"Yes -- at train time, you want right-padded sequences. ",
"I am confuse about it, you mean at train time, we need set right-padding, at inference time, we should set left-padding? what's more, we will set attention mask when padding, maybe attention mask will avoid the problem you mentioned?\r\n```\r\nthese models predict the next token at any given point of the sequence, using the embedding of the latest token as a critical input. If your latest token is a pad token and/or if it is masked by the attention mask, your next token will be unrelated to the sequence -- mostly because the models are not trained to handle this case.\r\n```\r\n",
"The attention mask will not solve it, you need left-padding at generation time.\r\n\r\nThere's nothing like playing with the model to understand what would happen :)",
"> Hey @arian-askari ๐\r\n> \r\n> The exception pops up because you are defining a new token (`[PAD]`), which causes the exception in the embedding layer (it doesn't know the embeddings for the new token until you define them).\r\n> \r\n> Most decoder-only models have the same \"issue\" where the padding token is not defined. The standard workaround is as follows:\r\n> \r\n> ```python\r\n> from transformers import AutoTokenizer, LlamaForCausalLM\r\n> model_name = \"huggyllama/llama-7b\"\r\n> \r\n> tokenizer = AutoTokenizer.from_pretrained(model_name, padding_side=\"left\")\r\n> model = LlamaForCausalLM.from_pretrained(model_name)\r\n> tokenizer.pad_token = tokenizer.eos_token\r\n> model.generation_config.pad_token_id = model.generation_config.eos_token_id\r\n> \r\n> prompt = \"Hey, are you consciours? Can you talk to me?\"\r\n> inputs = tokenizer([prompt, prompt + \" blah blah\"], return_tensors=\"pt\", padding=True, truncation=True)\r\n> \r\n> # Generate\r\n> generate_ids = model.generate(inputs.input_ids, max_length=30)\r\n> tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]\r\n> ```\r\n> \r\n> Please note that I added `padding_side=\"left\"` on the tokenizer -- it is critical for decoder-only models like Llama!\r\n\r\nWhen using left padding, do we need to set the mask matrix for the left padding, or is there an automatic mask mechanism inside ?\r\n",
"@renmengjie7 the masking mechanism is the same, the only difference is the mask that comes out of the tokenizer (`inputs.attention_mask` in the snippet above) :) ",
"@gante got it ! Thank you very much."
] | 1,684 | 1,687 | 1,684 |
NONE
| null |
### System Info
I am using CUDA 11.6 and the latest version of transformers, which is 4.29.1 to the best of my knowledge.
### Who can help?
@gante
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I am facing the below error with batch generation:
```
from transformers import AutoTokenizer, LlamaForCausalLM
model_name = "huggyllama/llama-7b"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = LlamaForCausalLM.from_pretrained(model_name)
tokenizer.add_special_tokens({'pad_token': '[PAD]'})
prompt = "Hey, are you consciours? Can you talk to me?"
inputs = tokenizer([prompt, prompt + " blah blah"], return_tensors="pt", padding=True, truncation=True)
# Generate
generate_ids = model.generate(inputs.input_ids, max_length=30)
tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
```
### Expected behavior
The above code works if the lengths of the sentences in the batch are equal. E.g., if you initialize the `inputs` variable with the command below, everything works perfectly:
`inputs = tokenizer([prompt, prompt], return_tensors="pt", padding=True, truncation=True)`
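For context, a minimal sketch of the other way to avoid this `IndexError`, hinted at in the comments above: if a brand-new `[PAD]` token is added, the model's embedding matrix must also be resized so the new token id has an embedding (assuming you actually want a dedicated pad token rather than reusing the EOS token, as in the suggested workaround):
```python
from transformers import AutoTokenizer, LlamaForCausalLM

model_name = "huggyllama/llama-7b"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = LlamaForCausalLM.from_pretrained(model_name)

# Adding a new special token grows the tokenizer's vocabulary...
tokenizer.add_special_tokens({"pad_token": "[PAD]"})
# ...so the embedding table has to grow with it, otherwise the new token id
# falls outside the embedding matrix and triggers the IndexError.
model.resize_token_embeddings(len(tokenizer))
```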
Here is the error:
```
Traceback (most recent call last)

in <module>:1

/data/arianaskariaa/conda_envs/envs/transformer_4_29_1_efficientLlaMa_int8_python_env/lib/python3.8/site-packages/torch/utils/_contextlib.py:115 in decorate_context
    return func(*args, **kwargs)

/data/arianaskariaa/conda_envs/envs/transformer_4_29_1_efficientLlaMa_int8_python_env/lib/python3.8/site-packages/transformers/generation/utils.py:1515 in generate
    return self.greedy_search(

/data/arianaskariaa/conda_envs/envs/transformer_4_29_1_efficientLlaMa_int8_python_env/lib/python3.8/site-packages/transformers/generation/utils.py:2332 in greedy_search
    outputs = self(

/data/arianaskariaa/conda_envs/envs/transformer_4_29_1_efficientLlaMa_int8_python_env/lib/python3.8/site-packages/torch/nn/modules/module.py:1501 in _call_impl
    return forward_call(*args, **kwargs)

/data/arianaskariaa/conda_envs/envs/transformer_4_29_1_efficientLlaMa_int8_python_env/lib/python3.8/site-packages/transformers/models/llama/modeling_llama.py:688 in forward
    outputs = self.model(

/data/arianaskariaa/conda_envs/envs/transformer_4_29_1_efficientLlaMa_int8_python_env/lib/python3.8/site-packages/torch/nn/modules/module.py:1501 in _call_impl
    return forward_call(*args, **kwargs)

/data/arianaskariaa/conda_envs/envs/transformer_4_29_1_efficientLlaMa_int8_python_env/lib/python3.8/site-packages/transformers/models/llama/modeling_llama.py:531 in forward
    inputs_embeds = self.embed_tokens(input_ids)

/data/arianaskariaa/conda_envs/envs/transformer_4_29_1_efficientLlaMa_int8_python_env/lib/python3.8/site-packages/torch/nn/modules/module.py:1501 in _call_impl
    return forward_call(*args, **kwargs)

/data/arianaskariaa/conda_envs/envs/transformer_4_29_1_efficientLlaMa_int8_python_env/lib/python3.8/site-packages/torch/nn/modules/sparse.py:162 in forward
    return F.embedding(

/data/arianaskariaa/conda_envs/envs/transformer_4_29_1_efficientLlaMa_int8_python_env/lib/python3.8/site-packages/torch/nn/functional.py:2210 in embedding
    return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)

IndexError: index out of range in self
```
Thanks in advance!
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23401/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23401/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23400
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23400/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23400/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23400/events
|
https://github.com/huggingface/transformers/issues/23400
| 1,712,004,549 |
I_kwDOCUB6oc5mCx3F
| 23,400 |
The test `LlamaIntegrationTest::test_conversion` test is failing
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"~I looked into it.~\r\n\r\n~The difference is that the newly converted tokenizer has ids 32000-32004 as special ids which correspond if I'm not mistaken to OpenAssistant llama fork.~\r\n\r\n~Those do not seem to be declared here: https://huggingface.co/hf-internal-testing/llama-tokenizer/tree/main~\r\n\r\n~I'm not sure which part of the code adds them to the slow tokenizer, but this seems indeed like a bug.~\r\n\r\nLooked at the wrong file. Everything works it's only a different `type_id` in the post processor.\r\n\r\nWe simply need to update the tokenizer.json on the hub with the correct value (1)",
"(There's also a slight issue with the EOS token being added into the processor for no reason.\r\n",
"https://huggingface.co/hf-internal-testing/llama-tokenizer/discussions/3\r\n\r\nGoes along with\r\n\r\nhttps://github.com/huggingface/transformers/issues/23400",
"Confirmed it works!"
] | 1,684 | 1,684 | 1,684 |
COLLABORATOR
| null |
The following command
```bash
RUN_SLOW=1 python3 -m pytest -v tests/models/llama/test_tokenization_llama.py::LlamaIntegrationTest::test_conversion
```
gives
```bash
> self.assertEqual(old_serialized, new_serialized)
E AssertionError: '{\n [1465 chars] "Sequence": {\n "id": "B",\n [1794589 chars]}\n}' != '{\n [1465 chars] "SpecialToken": {\n "id": "<s>",\n[1794837 chars]}\n}'
tests/models/llama/test_tokenization_llama.py:337: AssertionError
```
### Who can help?
@ArthurZucker
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23400/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23400/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23399
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23399/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23399/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23399/events
|
https://github.com/huggingface/transformers/pull/23399
| 1,712,000,200 |
PR_kwDOCUB6oc5QnVTZ
| 23,399 |
[`Pix2Struct`] Add conditional generation on docstring example
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,684 | 1,684 | 1,684 |
CONTRIBUTOR
| null |
# What does this PR do?
As discussed in https://github.com/huggingface/transformers/pull/23391#issuecomment-1549555750, this PR adds an example showing users how to run conditional generation with Pix2Struct. Users shouldn't add special tokens when prepending the text, so this should be explicitly mentioned not only in the docs (done in the aforementioned PR) but also in the example snippets.
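A rough sketch of what such a docstring example might look like (the checkpoint name, image URL, and exact argument set here are assumptions; see the merged docstring for the authoritative snippet):
```python
from PIL import Image
import requests
from transformers import AutoProcessor, Pix2StructForConditionalGeneration

# Assumed checkpoint and example image, for illustration only.
processor = AutoProcessor.from_pretrained("google/pix2struct-textcaps-base")
model = Pix2StructForConditionalGeneration.from_pretrained("google/pix2struct-textcaps-base")

url = "https://www.ilankelman.org/stopsigns/australia.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# Key point of the PR: do NOT add special tokens when prepending text,
# otherwise an EOS token ends up in the decoder prompt.
inputs = processor(images=image, text="A picture of", return_tensors="pt", add_special_tokens=False)

generated_ids = model.generate(**inputs, max_new_tokens=50)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```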
cc @amyeroberts
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23399/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23399/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23399",
"html_url": "https://github.com/huggingface/transformers/pull/23399",
"diff_url": "https://github.com/huggingface/transformers/pull/23399.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23399.patch",
"merged_at": 1684245559000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23398
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23398/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23398/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23398/events
|
https://github.com/huggingface/transformers/pull/23398
| 1,711,995,620 |
PR_kwDOCUB6oc5QnURI
| 23,398 |
Generate: faster `can_generate` check on TF and Flax
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,684 | 1,684 | 1,684 |
MEMBER
| null |
# What does this PR do?
Same as https://github.com/huggingface/transformers/pull/22643, but on TF and Flax.
[This comment](https://github.com/huggingface/transformers/pull/22643#issuecomment-1501033074) shows that it reduces the execution time of this line from 1-500 ms to <0.01 ms.
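As a hedged illustration of the general idea (not the literal change in this PR): checking whether a subclass overrides a method can be done by comparing function objects, which is far cheaper than inspecting source code at call time.
```python
class Base:
    def prepare_inputs_for_generation(self):
        raise NotImplementedError

    def can_generate(self) -> bool:
        # Fast: compares function objects instead of inspecting source code.
        return type(self).prepare_inputs_for_generation is not Base.prepare_inputs_for_generation


class MyGenerativeModel(Base):
    def prepare_inputs_for_generation(self):
        return {}


print(Base().can_generate())               # False
print(MyGenerativeModel().can_generate())  # True
```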
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23398/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23398/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23398",
"html_url": "https://github.com/huggingface/transformers/pull/23398",
"diff_url": "https://github.com/huggingface/transformers/pull/23398.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23398.patch",
"merged_at": 1684246342000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23397
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23397/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23397/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23397/events
|
https://github.com/huggingface/transformers/pull/23397
| 1,711,970,294 |
PR_kwDOCUB6oc5QnOwe
| 23,397 |
Docs: add link to assisted generation blog post
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,684 | 1,684 | 1,684 |
MEMBER
| null |
# What does this PR do?
(see PR title)
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23397/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23397/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23397",
"html_url": "https://github.com/huggingface/transformers/pull/23397",
"diff_url": "https://github.com/huggingface/transformers/pull/23397.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23397.patch",
"merged_at": 1684259675000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23396
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23396/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23396/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23396/events
|
https://github.com/huggingface/transformers/issues/23396
| 1,711,926,848 |
I_kwDOCUB6oc5mCe5A
| 23,396 |
trainer.train(resume_from_checkpoint=True) failed
|
{
"login": "John-Lin98",
"id": 65805703,
"node_id": "MDQ6VXNlcjY1ODA1NzAz",
"avatar_url": "https://avatars.githubusercontent.com/u/65805703?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/John-Lin98",
"html_url": "https://github.com/John-Lin98",
"followers_url": "https://api.github.com/users/John-Lin98/followers",
"following_url": "https://api.github.com/users/John-Lin98/following{/other_user}",
"gists_url": "https://api.github.com/users/John-Lin98/gists{/gist_id}",
"starred_url": "https://api.github.com/users/John-Lin98/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/John-Lin98/subscriptions",
"organizations_url": "https://api.github.com/users/John-Lin98/orgs",
"repos_url": "https://api.github.com/users/John-Lin98/repos",
"events_url": "https://api.github.com/users/John-Lin98/events{/privacy}",
"received_events_url": "https://api.github.com/users/John-Lin98/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The saved checkpoint is corrupted somehow. I don't know what could be the reason for it since I don't know how it was saved in the first place.",
"sorry for not showing the training parameters earlier. In fact, I used the trainer's automatic checkpoint saving method based on the number of steps. Here, it is set to save every 1200 steps๏ผ\r\n\r\n\r\nHere is the directory where I save my checkpoints๏ผ\r\n\r\n\r\n\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,684 | 1,687 | 1,687 |
NONE
| null |
### System Info
Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points.
- `transformers` version: 4.28.1
- Platform: Linux-4.15.0-209-generic-x86_64-with-glibc2.27
- Python version: 3.10.10
- Huggingface_hub version: 0.13.3
- Safetensors version: not installed
- PyTorch version (GPU?): 2.0.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@sgugger
When I call trainer.train() to continue training a llama-7B model from a checkpoint, I encounter the following issue:



And I'm not sure why this problem is occurring. Here is the code I'm running:

### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
RuntimeError: Trying to resize storage that is not resizable
Here is the code I'm running:
```
def train():
    global local_rank

    parser = transformers.HfArgumentParser(
        (ModelArguments, DataArguments, TrainingArguments))
    model_args, data_args, training_args = parser.parse_args_into_dataclasses()
    local_rank = training_args.local_rank

    model = transformers.AutoModelForCausalLM.from_pretrained(
        model_args.model_name_or_path,
        cache_dir=training_args.cache_dir,
    ).half()
    tokenizer = transformers.AutoTokenizer.from_pretrained(
        model_args.model_name_or_path,
        cache_dir=training_args.cache_dir,
        model_max_length=training_args.model_max_length,
        padding_side="right",
        use_fast=False,
    )
    tokenizer.pad_token = tokenizer.unk_token

    data_module = make_supervised_data_module(tokenizer=tokenizer,
                                              data_args=data_args)
    trainer = Trainer(model=model,
                      tokenizer=tokenizer,
                      args=training_args,
                      **data_module)

    if list(pathlib.Path(training_args.output_dir).glob("checkpoint-*")):
        trainer.train(resume_from_checkpoint=True)
    else:
        trainer.train()
    trainer.save_state()
    trainer.save_model(training_args.output_dir)
```
### Expected behavior
I don't encounter any checkpoint import error when I continue training from a checkpoint.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23396/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23396/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23395
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23395/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23395/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23395/events
|
https://github.com/huggingface/transformers/issues/23395
| 1,711,834,952 |
I_kwDOCUB6oc5mCIdI
| 23,395 |
Unable to import graphormer from transformers
|
{
"login": "techthiyanes",
"id": 25921035,
"node_id": "MDQ6VXNlcjI1OTIxMDM1",
"avatar_url": "https://avatars.githubusercontent.com/u/25921035?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/techthiyanes",
"html_url": "https://github.com/techthiyanes",
"followers_url": "https://api.github.com/users/techthiyanes/followers",
"following_url": "https://api.github.com/users/techthiyanes/following{/other_user}",
"gists_url": "https://api.github.com/users/techthiyanes/gists{/gist_id}",
"starred_url": "https://api.github.com/users/techthiyanes/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/techthiyanes/subscriptions",
"organizations_url": "https://api.github.com/users/techthiyanes/orgs",
"repos_url": "https://api.github.com/users/techthiyanes/repos",
"events_url": "https://api.github.com/users/techthiyanes/events{/privacy}",
"received_events_url": "https://api.github.com/users/techthiyanes/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"By importing cython file manually, fixed the issue. Thanks a lot.",
"> \r\n\r\ni got the same queestion,but only\r\n\r\n ```\r\nfrom transformers.models.graphormer.collating_graphormer import preprocess_item, GraphormerDataCollator\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/root/miniconda3/envs/g/lib/python3.8/site-packages/transformers/models/graphormer/collating_graphormer.py\", line 16, in <module>\r\n from . import algos_graphormer # noqa E402\r\nImportError: cannot import name 'algos_graphormer' from 'transformers.models.graphormer' (/root/miniconda3/envs/g/lib/python3.8/site-packages/transformers/models/graphormer/__init__.py)\r\n```\r\ncould u please tell me about the details of your solution. where is the cython file? ",
"Kindly copy the below cython file inside your transformer installed whl folder.(Under src/transformers/models/graphormer)\r\n\r\nhttps://github.com/huggingface/transformers/blob/main/src/transformers/models/graphormer/algos_graphormer.pyx\r\n",
"> Kindly copy the below cython file inside your transformer installed whl folder.(Under src/transformers/models/graphormer)\r\n> \r\n> https://github.com/huggingface/transformers/blob/main/src/transformers/models/graphormer/algos_graphormer.pyx\r\n\r\nthanks for you replay.\r\n\r\ni found this file in my \r\n\r\n`/miniconda3/envs/payldet/lib/python3.9/site-packages/transformers/models/graphormer` folder:\r\n\r\n```\r\n/miniconda3/envs/payldet/lib/python3.9/site-packages/transformers/models/graphormer# ls\r\n__init__.py __pycache__ algos_graphormer.c algos_graphormer.pyx collating_graphormer.py configuration_graphormer.py modeling_graphormer.py\r\n```\r\n\r\ndo i need replace this file?\r\n\r\ni try to compile this pyx file manually,but meet fatal error now:\r\n\r\n`fatal error: numpy/arrayobject.h: No such file or directory` \r\n\r\n",
"> i\r\n\r\nI would request you to stage this .pyx file inside graphormer folder. ( The place where transformer gets installed) - Most likely under /usr/local/python<version>/.....",
"> \r\nThanks for your reply.\r\n\r\n I installed Transformer in a Conda environment, so I don't have the path you replied. However, I finally resolved this question in another way. \r\n\r\nBy compiling this Pyx file manually, I got a .so file. To import it manually, I changed `transformers/models/graphormer/configuration_graphormer.py` file and added the specific path.\r\n\r\n```\r\n# line 15\r\n# before:\r\nif is_cython_available():\r\n import pyximport\r\n\r\n pyximport.install(setup_args={\"include_dirs\": np.get_include()})\r\n from . import algos_graphormer # noqa E402\r\n\r\n# after:\r\nif is_cython_available():\r\n import pyximport\r\n\r\n pyximport.install(setup_args={\"include_dirs\": np.get_include()})\r\n import sys\r\n sys.path.append('/path/to/.so file')\r\n import algos_graphormer\r\n\r\n```\r\n\r\nSuccessfully ran model.py. But I'm not sure this way will not have a bad influence in the future.\r\n\r\n```\r\n# just test\r\nfrom datasets import load_dataset \r\nfrom datasets import load_metric\r\nimport evaluate\r\nimport cython\r\nfrom transformers.models.graphormer.collating_graphormer import preprocess_item, GraphormerDataCollator\r\n\r\ndataset = load_dataset(\"OGB/ogbg-molhiv\")\r\nmetric = evaluate.load(\"accuracy\")\r\n# print(dataset[\"train\"].features)\r\n\r\ndataset_processed = dataset.map(preprocess_item, batched=False)\r\n# split up training into training + validation\r\ntrain_ds = dataset_processed['train']\r\nval_ds = dataset_processed['validation']\r\n\r\nprint(train_ds[0].keys())\r\n```\r\n and the result is :\r\n\r\n```\r\npython model.py \r\nFound cached dataset json (/root/.cache/huggingface/datasets/OGB___json/OGB--ogbg-molhiv-8591baabc5d95f2f/0.0.0/e347ab1c932092252e717ff3f949105a4dd28b27e842dd53157d2f72e276c2e4)\r\n100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 3/3 [00:00<00:00, 557.48it/s]\r\ndict_keys(['edge_index', 'edge_attr', 'y', 'num_nodes', 'node_feat', 'input_nodes', 'attn_bias', 'attn_edge_type', 'spatial_pos', 'in_degree', 'out_degree', 'input_edges', 'labels'])\r\n```",
"> mport sys\r\n\r\n\r\n\r\n\r\nIf you are using colab, Kindly stage your cython file at /usr/local/lib/python3.10/dist-packages/transformers/models/graphormer/algos_graphormer.pyx\r\nThanks for sharing the solution. After completing the training, would you able to write prediction code? ( without the influence of trainer.predict()).\r\n\r\nBy compiling this Pyx file manually, --> Did you create this file manually executing? May I know how you created this file? I just exported this pyx file and just used same as it's.",
"> Thanks for sharing the solution. After completing the training, would you able to write prediction code? ( without the influence of trainer.predict()).\r\n\r\nHi,\r\n\r\nI do not think I should put this pyx file in the `/usr/local/lib/python3.10/dist-packages/transformers/models/graphormer/` folder because I am running this code in a Conda environment, not on Colab.\r\n\r\nRegarding the prediction, I have to tell you that I am a novice in Graphormer (I downloaded it yesterday actually) and have not written any code about it. However, my main task is to get graph embeddings, so I'll try to finish it. Maybe we can do it together and learn Graphormer together.\r\n\r\nHow do I compile it? This is the code,named setup.py:\r\n\r\n```\r\nfrom setuptools import Extension, setup\r\nimport numpy\r\n\r\next_modules = [\r\n Extension(\r\n name='example',\r\n sources=['example.pyx'],\r\n include_dirs=[numpy.get_include()]\r\n )\r\n]\r\n\r\nsetup(\r\n name='example',\r\n ext_modules=ext_modules,\r\n)\r\n```\r\n\r\nafter execute `python setup.py build_ext --inplace` ,you will get the .so file.",
"> > Thanks for sharing the solution. After completing the training, would you able to write prediction code? ( without the influence of trainer.predict()).\r\n> \r\n> Hi,\r\n> \r\n> I do not think I should put this pyx file in the `/usr/local/lib/python3.10/dist-packages/transformers/models/graphormer/` folder because I am running this code in a Conda environment, not on Colab.\r\n> \r\n> Regarding the prediction, I have to tell you that I am a novice in Graphormer (I downloaded it yesterday actually) and have not written any code about it. However, my main task is to get graph embeddings, so I'll try to finish it. Maybe we can do it together and learn Graphormer together.\r\n> \r\n> How do I compile it? This is the code,named setup.py:\r\n> \r\n> ```\r\n> from setuptools import Extension, setup\r\n> import numpy\r\n> \r\n> ext_modules = [\r\n> Extension(\r\n> name='example',\r\n> sources=['example.pyx'],\r\n> include_dirs=[numpy.get_include()]\r\n> )\r\n> ]\r\n> \r\n> setup(\r\n> name='example',\r\n> ext_modules=ext_modules,\r\n> )\r\n> ```\r\n> \r\n> after execute `python setup.py build_ext --inplace` ,you will get the .so file.\r\n\r\n\r\n\r\nPlease check this one for prediction code : https://github.com/huggingface/transformers/issues/23642",
"@techthiyanes Hi,Thank you for sharing. I'm going to check it out now."
] | 1,684 | 1,684 | 1,684 |
NONE
| null |
### System Info
transformers : 4.29.1
Python : 3.10
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
from transformers.models.graphormer.collating_graphormer import preprocess_item, GraphormerDataCollator
With the above import, `algos_graphormer` cannot be imported from the Cython (`.pyx`) file.
The following error message pops up:
ImportError: cannot import name 'algos_graphormer' from 'transformers.models.graphormer' (/usr/local/lib/python3.10/dist-packages/transformers/models/graphormer/__init__.py)
### Expected behavior
The import should succeed without any errors.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23395/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23395/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23394
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23394/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23394/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23394/events
|
https://github.com/huggingface/transformers/pull/23394
| 1,711,773,521 |
PR_kwDOCUB6oc5Qmj1h
| 23,394 |
README: Fix affiliation for MEGA
|
{
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,684 | 1,684 | 1,684 |
MEMBER
| null |
[discussed in this thread](https://twitter.com/gneubig/status/1658199635457101825)
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23394/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23394/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23394",
"html_url": "https://github.com/huggingface/transformers/pull/23394",
"diff_url": "https://github.com/huggingface/transformers/pull/23394.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23394.patch",
"merged_at": 1684486988000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23393
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23393/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23393/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23393/events
|
https://github.com/huggingface/transformers/issues/23393
| 1,711,668,375 |
I_kwDOCUB6oc5mBfyX
| 23,393 |
is it possible to add `system prompt` to Blenderbot ?
|
{
"login": "SKbarbon",
"id": 86029286,
"node_id": "MDQ6VXNlcjg2MDI5Mjg2",
"avatar_url": "https://avatars.githubusercontent.com/u/86029286?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SKbarbon",
"html_url": "https://github.com/SKbarbon",
"followers_url": "https://api.github.com/users/SKbarbon/followers",
"following_url": "https://api.github.com/users/SKbarbon/following{/other_user}",
"gists_url": "https://api.github.com/users/SKbarbon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SKbarbon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SKbarbon/subscriptions",
"organizations_url": "https://api.github.com/users/SKbarbon/orgs",
"repos_url": "https://api.github.com/users/SKbarbon/repos",
"events_url": "https://api.github.com/users/SKbarbon/events{/privacy}",
"received_events_url": "https://api.github.com/users/SKbarbon/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @SKbarbon, thanks for raising an issue! \r\n\r\nThis is a question best placed in our [forums](https://discuss.huggingface.co/). We try to reserve the github issues for feature requests and bug reports.",
"Ah I am sorry @amyeroberts !"
] | 1,684 | 1,684 | 1,684 |
NONE
| null |
This is a simple `BlenderBot` app:
```python
from transformers import BlenderbotSmallTokenizer, BlenderbotSmallForConditionalGeneration
import os


class BlenderBot:
    def __init__(
        self,
        model_name: str = 'facebook/blenderbot_small-90M',
    ):
        if not os.path.exists('./models/blenderbot'):
            BlenderbotSmallForConditionalGeneration.from_pretrained(model_name).save_pretrained('./models/blenderbot')
            BlenderbotSmallTokenizer.from_pretrained(model_name).save_pretrained('./models/blenderbot')
        self.model = BlenderbotSmallForConditionalGeneration.from_pretrained('./models/blenderbot')
        self.tokenizer = BlenderbotSmallTokenizer.from_pretrained('./models/blenderbot')

    def __call__(self, inputs: str) -> str:
        inputs_tokenized = self.tokenizer(inputs, return_tensors='pt')
        reply_ids = self.model.generate(**inputs_tokenized)
        reply = self.tokenizer.batch_decode(reply_ids, skip_special_tokens=True)[0]
        return reply

    def run(self):
        while True:
            user_input = input("User: ")
            print("Bot:", self(user_input))
```
The problem is I don't know how to add any system prompts to manage the outputs of the chatbot. Any help with that?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23393/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23393/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23392
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23392/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23392/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23392/events
|
https://github.com/huggingface/transformers/pull/23392
| 1,711,666,783 |
PR_kwDOCUB6oc5QmMVw
| 23,392 |
Fix `RwkvModel`
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,684 | 1,684 | 1,684 |
COLLABORATOR
| null |
# What does this PR do?
The convention is to filter out `None` values from the output tuple.
Without this, the torchscript tests fail, as torchscript doesn't accept `None` values.
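A tiny illustrative sketch of that convention, with dummy placeholder values standing in for the model's intermediate outputs (hypothetical names, not the actual diff):
```python
# Dummy stand-ins for the model's intermediate outputs.
hidden_states = "hidden_states"
state = "state"
all_hidden_states = None   # e.g. output_hidden_states=False
all_attentions = None      # e.g. output_attentions=False

# Convention: drop the None entries before returning the tuple, so callers
# (and torchscript) never have to deal with a None element.
output = tuple(
    v for v in (hidden_states, state, all_hidden_states, all_attentions) if v is not None
)
print(output)  # ('hidden_states', 'state')
```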
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23392/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23392/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23392",
"html_url": "https://github.com/huggingface/transformers/pull/23392",
"diff_url": "https://github.com/huggingface/transformers/pull/23392.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23392.patch",
"merged_at": 1684232095000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23391
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23391/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23391/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23391/events
|
https://github.com/huggingface/transformers/pull/23391
| 1,711,631,086 |
PR_kwDOCUB6oc5QmEmG
| 23,391 |
Update `test_batched_inference_image_captioning_conditioned`
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@younesbelkada If you can take over this PR to avoid `generate weird output right after`, it would be really nice :-)",
"The test should be now fixed, the generated text produces different output than before, probably due to https://github.com/huggingface/transformers/pull/23051 that now made the model using a causal attention mask on the text decoder (which was not the case before)",
"Thanks a lot!",
"> I think that we should educate users that for text-conditioned generation we should never add special tokens to the tokenizer - as introduced in https://github.com/huggingface/transformers/pull/23004\r\n\r\n@younesbelkada The best place to do this I think is in the example docstring for the model, as this is what a lot of users will reference, and it currently doesn't do that. Could you open a PR to update this? ",
"> @younesbelkada The best place to do this I think is in the example docstring for the model, as this is what a lot of users will reference, and it currently doesn't do that. Could you open a PR to update this?\r\n\r\nSure yes will do! \r\n\r\n> Changes look fine - my only concern is that the generations appear to have become worse. @younesbelkada @ydshieh do we have any other generation samples to make sure the model is behaving as expected?\r\n\r\nYes! I was relieved since we do have the tests `test_batched_inference_image_captioning` & `test_inference_image_captioning` that still pass --> meaning that the un-conditional text generation seem to be unaffected!",
"> > @younesbelkada The best place to do this I think is in the example docstring for the model, as this is what a lot of users will reference, and it currently doesn't do that. Could you open a PR to update this?\r\n> \r\n> Sure yes will do!\r\n\r\nI am going to merge this PR and leave @amyeroberts 's suggestion for @younesbelkada in a separate PR. Thank you for the review and the refine of this PR. "
] | 1,684 | 1,684 | 1,684 |
COLLABORATOR
| null |
# What does this PR do?
The test `tests/models/pix2struct/test_modeling_pix2struct.py::Pix2StructIntegrationTest::test_batched_inference_image_captioning_conditioned` started to fail on the CI run of `April 27`, which includes the merged PR #23023.
@younesbelkada Could you double check if the changes in this PR are reasonable? Thank you.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23391/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23391/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23391",
"html_url": "https://github.com/huggingface/transformers/pull/23391",
"diff_url": "https://github.com/huggingface/transformers/pull/23391.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23391.patch",
"merged_at": 1684241365000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23390
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23390/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23390/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23390/events
|
https://github.com/huggingface/transformers/issues/23390
| 1,711,613,886 |
I_kwDOCUB6oc5mBSe-
| 23,390 |
[Sagemaker] sagemaker distributed features in Trainer broken since Transformers 4.29
|
{
"login": "JingyaHuang",
"id": 44135271,
"node_id": "MDQ6VXNlcjQ0MTM1Mjcx",
"avatar_url": "https://avatars.githubusercontent.com/u/44135271?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JingyaHuang",
"html_url": "https://github.com/JingyaHuang",
"followers_url": "https://api.github.com/users/JingyaHuang/followers",
"following_url": "https://api.github.com/users/JingyaHuang/following{/other_user}",
"gists_url": "https://api.github.com/users/JingyaHuang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JingyaHuang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JingyaHuang/subscriptions",
"organizations_url": "https://api.github.com/users/JingyaHuang/orgs",
"repos_url": "https://api.github.com/users/JingyaHuang/repos",
"events_url": "https://api.github.com/users/JingyaHuang/events{/privacy}",
"received_events_url": "https://api.github.com/users/JingyaHuang/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @pacman100 ",
"cc @muellerzr as he has full insights on the related PR.",
"@JingyaHuang could you show the full stack trace so I can see where this error is specifically being raised from/its context?",
"I need any context of a trace that relates to something in transformers to be able to know where/how this is stemming from. SageMaker should not be handled by the current accelerate implementation, so it's critical to know just where that logic fault is :) ",
"@muellerzr Sure, here is the complete log: https://gist.github.com/JingyaHuang/9327adb701d9989da8cf4c33bfeb043e\r\n(sorry if it looks messy, it's populated by sagemaker. Related tests are test_smmp and test_smdp.py)\r\n",
"@JingyaHuang could you try installing transformers via `pip install git+https://github.com/huggingface/transformers@muellerzr-fix-sagemaker` and verify this fixes it? And if not, what other errors arise? Thanks!",
"Trying! It will take a while for the image build, will update asap.",
"@philschmid I thought SageMaker MP was broken since many releases ago?",
"> @philschmid I thought SageMaker MP was broken since many releases ago?\r\n\r\nIt was not broken, just not using all the features by default, e.g. Tensor Parallelism. ",
"Hey @muellerzr, here is the log that I got with your patch: https://gist.github.com/JingyaHuang/3b60725d0a6f22f377b27694d22c18ca\r\n\r\nThere seems to be another issue now:\r\n```\r\nTypeError: init_process_group() got multiple values for argument 'backend'\r\n``` ",
"@JingyaHuang what version of `accelerate` are you running with? That should be fixed in v0.19.0",
"@JingyaHuang there was a fix on main for accelerate that may be related to this, are we trying to spawn on cpu/do distributed CPU?",
"Thanks @muellerzr, yes the tests were running in a docker container with accelerate 0.19.0 installed\r\n```\r\nName: accelerate\r\nVersion: 0.19.0\r\nSummary: Accelerate\r\nHome-page: https://github.com/huggingface/accelerate\r\nAuthor: The HuggingFace team\r\nAuthor-email: [email protected]\r\nLicense: Apache\r\nLocation: /opt/conda/lib/python3.10/site-packages\r\nRequires: numpy, packaging, psutil, pyyaml, torch\r\nRequired-by: \r\n``` \r\nThe smmp test ran sagemaker distributed on GPU, I am not familiar with how sagemaker's distributed model parallel works. @philschmid might have a better answer, does it spawns CPU processes? ",
"@JingyaHuang looked at the trace again, yes it does make sense that the fix to main should have changed it actually based on what's happening here. Can you try one more time with `pip install git+https://github.com/huggingface/accelerate` to verify? Thanks ๐ ",
"(Auto-closed due to PR merging, will keep open until we know for sure w/ Accelerate fix :) )",
"Hey @muellerzr, here is the log that I got by the test with accelerate from source last week: https://gist.github.com/JingyaHuang/0026e8801e99d0df522fb2bcb2b2334c\r\n(I did not configure accelerate, not sure if I should do that?)",
"Thanks @JingyaHuang (and appreciate your patience).\r\n\r\nLet's try via the following:\r\n\r\n```bash\r\npip install git+https://github.com/huggingface/transformers@muellerzr-sagemaker-dp git+https://github.com/huggingface/accelerate@sagemakerdp\r\n```\r\nThanks!",
"@muellerzr No worries!\r\n\r\nAnother error occurs, we don't have the luck :( . Here is the log: https://gist.github.com/JingyaHuang/01de393f9da716ff094c248f12ec1465 ",
"@JingyaHuang I'd actually say we're making progress! New errors, with easier solutions :) Let's try again, same branches etc for everything",
"Boom!\r\n๐ \r\n\r\n",
"@muellerzr I just ran the sagemaker data parallel test but with question answering task(`run_qa.py` โ ) instead of text generation task(`run_glue.py` โ
) this time, and it failed during the evaluation (while doing post-processing of the predictions).\r\n\r\n```\r\nValueError: Got 676 predictions and 10784 features.\r\n``` \r\n\r\nFull tracing log here: https://gist.github.com/JingyaHuang/b824d3abd17c6db23e68968dec0cee13 \r\n\r\nDo you think it is related to the issue, or just an update need to be done for the examples? ",
"@JingyaHuang to know for sure, try running it with `transformers==4.28.1`",
"@muellerzr I just did two tests:\r\n\r\n* Run __qa__ example on __smmp__ test with __patched transformers & accelerate__ -> tests passed โ
\r\n\r\n\r\n(so with the patch, smmp test passed for both text classification and qa)\r\n\r\np.s. `run_qa` and `run_glue` are fetched from the main branch\r\n\r\n* Run __qa__ example on __smdp__ test with __trfrs 4.28.1 & accelerate 0.19.0__ -> tests passed โ
\r\n\r\n\r\np.s. `run_qa` and `run_glue` are fetched from the v4.28.1 branch\r\n\r\nAnd the previous error log on the qa task was for __smdp__ test, so it seems smmp is good but smdp is still broken for 4.29.*, is there anything that needs to be done for smdp during the prediction step maybe?",
"@muellerzr The number of features for evaluation is 10784. Since the smdp test was run on two `ml.p3.16xlarge`(8 gpus) instances and 10784 / 676 = 15.9526627, intuitively I doubt when using smdp only the predictions on one worker are kept (676). \r\n\r\nRef smdp test: https://github.com/aws/deep-learning-containers/blob/master/test/sagemaker_tests/huggingface_pytorch/training/integration/sagemaker/test_smdp.py#L108\r\n\r\njust a thought (ใแดใ)",
"Closing now via https://github.com/huggingface/transformers/pull/23681, as all tests pass"
] | 1,684 | 1,685 | 1,685 |
CONTRIBUTOR
| null |
### System Info
* `transformers: 4.29.1`
* `datasets: 2.12.0`
* `evaluate: 0.4.0`
* `accelerate: 0.19.0`
* `torch: 2.0.0`
* `diffusers: 0.16.1`
### Who can help?
@philschmid @sgugger
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
The issue was found while updating the AWS SageMaker deep learning container for Transformers-PyTorch. SageMaker's distributed data parallel and model parallel features have been broken since Transformers 4.29.*.
* Test scripts: [`test_smmp.py`](https://github.com/aws/deep-learning-containers/blob/master/test/sagemaker_tests/huggingface_pytorch/training/integration/sagemaker/test_smmp.py) and [`test_smdp.py`](https://github.com/aws/deep-learning-containers/blob/master/test/sagemaker_tests/huggingface_pytorch/training/integration/sagemaker/test_smdp.py).
* Related PR in the DLC repo: https://github.com/aws/deep-learning-containers/pull/2993
* Error log:
```
AttributeError: 'TrainingArguments' object has no attribute 'distributed_state'
```
<details close>
<summary>More detailed error log</summary>
<br>
```
_____________________________ test_smmp_gpu[gloo] ______________________________
ecr_image = '669063966089.dkr.ecr.us-west-2.amazonaws.com/pr-huggingface-pytorch-training:2.0.0-transformers4.29.1-gpu-py310-cu118-ubuntu20.04-pr-2993-2023-05-13-17-25-48'
sagemaker_regions = ['us-west-2', 'us-east-1', 'eu-west-1']
instance_type = 'ml.p3.8xlarge', framework_version = '2.0.0', py_version = 'py3'
dist_gpu_backend = 'gloo'
@pytest.mark.processor("gpu")
@pytest.mark.integration("smmp")
@pytest.mark.model("hf_qa_smmp")
@pytest.mark.skip_cpu
@pytest.mark.skip_py2_containers
@pytest.mark.skip_trcomp_containers
def test_smmp_gpu(
ecr_image, sagemaker_regions, instance_type, framework_version, py_version, dist_gpu_backend
):
> invoke_sm_helper_function(ecr_image, sagemaker_regions, _test_smmp_gpu_function, py_version, 1)
integration/sagemaker/test_smmp.py:76:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../__init__.py:119: in invoke_sm_helper_function
raise e
../../__init__.py:113: in invoke_sm_helper_function
test_function(tested_ecr_image, sagemaker_session, *test_function_args)
integration/sagemaker/test_smmp.py:117: in _test_smmp_gpu_function
huggingface_estimator.fit(job_name=sagemaker.utils.unique_name_from_base("test-hf-pt-qa-smmp"))
2.0.0-transformers4.29.1-gpu-py310-cu118-ubuntu20.04-pr-2993-2023-05-13-17-25-48/lib/python3.8/site-packages/sagemaker/workflow/pipeline_context.py:272: in wrapper
return run_func(*args, **kwargs)
2.0.0-transformers4.29.1-gpu-py310-cu118-ubuntu20.04-pr-2993-2023-05-13-17-25-48/lib/python3.8/site-packages/sagemaker/estimator.py:1156: in fit
self.latest_training_job.wait(logs=logs)
2.0.0-transformers4.29.1-gpu-py310-cu118-ubuntu20.04-pr-2993-2023-05-13-17-25-48/lib/python3.8/site-packages/sagemaker/estimator.py:2297: in wait
self.sagemaker_session.logs_for_job(self.job_name, wait=True, log_type=logs)
2.0.0-transformers4.29.1-gpu-py310-cu118-ubuntu20.04-pr-2993-2023-05-13-17-25-48/lib/python3.8/site-packages/sagemaker/session.py:4216: in logs_for_job
self._check_job_status(job_name, description, "TrainingJobStatus")
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <sagemaker.session.Session object at 0x7f466ef3fc40>
job = 'test-hf-pt-qa-smmp-1684001017-f81e'
desc = {'AlgorithmSpecification': {'EnableSageMakerMetricsTimeSeries': True, 'TrainingImage': '669063966089.dkr.ecr.us-west-2...)), 'DebugHookConfig': {'CollectionConfigurations': [], 'S3OutputPath': 's3://sagemaker-us-west-2-669063966089/'}, ...}
status_key_name = 'TrainingJobStatus'
```
def _check_job_status(self, job, desc, status_key_name):
"""Check to see if the job completed successfully.
If not, construct and raise a exceptions. (UnexpectedStatusException).
Args:
job (str): The name of the job to check.
desc (dict[str, str]): The result of ``describe_training_job()``.
status_key_name (str): Status key name to check for.
Raises:
exceptions.CapacityError: If the training job fails with CapacityError.
exceptions.UnexpectedStatusException: If the training job fails.
"""
status = desc[status_key_name]
# If the status is capital case, then convert it to Camel case
status = _STATUS_CODE_TABLE.get(status, status)
if status == "Stopped":
LOGGER.warning(
"Job ended with status 'Stopped' rather than 'Completed'. "
"This could mean the job timed out or stopped early for some other reason: "
"Consider checking whether it completed as you expect."
)
elif status != "Completed":
reason = desc.get("FailureReason", "(No reason provided)")
job_type = status_key_name.replace("JobStatus", " job")
message = "Error for {job_type} {job_name}: {status}. Reason: {reason}".format(
job_type=job_type, job_name=job, status=status, reason=reason
)
if "CapacityError" in str(reason):
raise exceptions.CapacityError(
message=message,
allowed_statuses=["Completed", "Stopped"],
actual_status=status,
)
> raise exceptions.UnexpectedStatusException(
message=message,
allowed_statuses=["Completed", "Stopped"],
actual_status=status,
)
E sagemaker.exceptions.UnexpectedStatusException: Error for Training job test-hf-pt-qa-smmp-1684001017-f81e: Failed. Reason: AlgorithmError: ExecuteUserScriptError:
E ExitCode 1
E ErrorMessage "AttributeError: 'TrainingArguments' object has no attribute 'distributed_state'
E Traceback (most recent call last)
E /opt/conda/lib/python3.10/runpy.py:196 in _run_module_as_main
E
E   193   main_globals = sys.modules["__main__"].__dict__
E   194   if alter_argv:
E   195       sys.argv[0] = mod_spec.origin
E > 196   return _run_code(code, main_globals, None,
E   197                    "__main__", mod_spec)
E   198
E   199 de, exit code: 1
2.0.0-transformers4.29.1-gpu-py310-cu118-ubuntu20.04-pr-2993-2023-05-13-17-25-48/lib/python3.8/site-packages/sagemaker/session.py:3749: UnexpectedStatusException
```
</details>
### Expected behavior
Find out what needs to be adapted, either on the SageMaker side or on our side, so that the distributed features keep working and we are able to update transformers to newer versions next time.
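For reference, below is a minimal sketch of how such a training job can be launched through the SageMaker `HuggingFace` estimator, with the data-parallel distribution enabled and the framework versions pinned to a combination that is still reported to work. The entry point, role, instance settings and hyperparameters are placeholders, not the exact values used by the DLC tests:
```python
from sagemaker.huggingface import HuggingFace

# All names below (role, entry point, hyperparameters) are placeholders,
# not the exact values used by the DLC integration tests.
huggingface_estimator = HuggingFace(
    entry_point="run_glue.py",
    source_dir="./examples/pytorch/text-classification",
    role="SageMakerRole",
    instance_type="ml.p3.16xlarge",
    instance_count=2,
    transformers_version="4.28.1",  # last version reported to work in this issue
    pytorch_version="2.0",
    py_version="py310",
    # SageMaker distributed data parallel; model parallel would use the
    # {"smdistributed": {"modelparallel": {...}}} key instead.
    distribution={"smdistributed": {"dataparallel": {"enabled": True}}},
    hyperparameters={
        "model_name_or_path": "distilbert-base-uncased",
        "task_name": "mnli",
        "per_device_train_batch_size": 16,
        "output_dir": "/opt/ml/model",
    },
)
huggingface_estimator.fit()
```
Pinning `transformers_version` this way is only a stopgap; the underlying `distributed_state` regression still needs to be fixed in the Trainer.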
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23390/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23390/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23389
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23389/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23389/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23389/events
|
https://github.com/huggingface/transformers/pull/23389
| 1,711,605,945 |
PR_kwDOCUB6oc5Ql_LZ
| 23,389 |
Fix RoBERTa vocab size
|
{
"login": "amariucaitheodor",
"id": 32778667,
"node_id": "MDQ6VXNlcjMyNzc4NjY3",
"avatar_url": "https://avatars.githubusercontent.com/u/32778667?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amariucaitheodor",
"html_url": "https://github.com/amariucaitheodor",
"followers_url": "https://api.github.com/users/amariucaitheodor/followers",
"following_url": "https://api.github.com/users/amariucaitheodor/following{/other_user}",
"gists_url": "https://api.github.com/users/amariucaitheodor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amariucaitheodor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amariucaitheodor/subscriptions",
"organizations_url": "https://api.github.com/users/amariucaitheodor/orgs",
"repos_url": "https://api.github.com/users/amariucaitheodor/repos",
"events_url": "https://api.github.com/users/amariucaitheodor/events{/privacy}",
"received_events_url": "https://api.github.com/users/amariucaitheodor/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23389). All of your documentation changes will be reflected on that endpoint.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,684 | 1,689 | 1,689 |
NONE
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #23388
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [X] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23389/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23389/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23389",
"html_url": "https://github.com/huggingface/transformers/pull/23389",
"diff_url": "https://github.com/huggingface/transformers/pull/23389.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23389.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/23388
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23388/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23388/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23388/events
|
https://github.com/huggingface/transformers/issues/23388
| 1,711,603,152 |
I_kwDOCUB6oc5mBP3Q
| 23,388 |
Wrong RoBERTa configuration
|
{
"login": "amariucaitheodor",
"id": 32778667,
"node_id": "MDQ6VXNlcjMyNzc4NjY3",
"avatar_url": "https://avatars.githubusercontent.com/u/32778667?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amariucaitheodor",
"html_url": "https://github.com/amariucaitheodor",
"followers_url": "https://api.github.com/users/amariucaitheodor/followers",
"following_url": "https://api.github.com/users/amariucaitheodor/following{/other_user}",
"gists_url": "https://api.github.com/users/amariucaitheodor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amariucaitheodor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amariucaitheodor/subscriptions",
"organizations_url": "https://api.github.com/users/amariucaitheodor/orgs",
"repos_url": "https://api.github.com/users/amariucaitheodor/repos",
"events_url": "https://api.github.com/users/amariucaitheodor/events{/privacy}",
"received_events_url": "https://api.github.com/users/amariucaitheodor/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @ArthurZucker ",
"Thanks for pointing this out and opening a PR! ",
"Hey @ArthurZucker @amyeroberts , as i still see that #23389 is not merged , so can I fix this and create a new PR? ",
"Sure! Would be great if you can checkout his branch to include the work he has done ๐ ",
"fix #23863 "
] | 1,684 | 1,685 | 1,685 |
NONE
| null |
https://github.com/huggingface/transformers/blob/c2393cad085e3875ee2206d917d46d15e50602a3/src/transformers/models/roberta/configuration_roberta.py#L108
The default `vocab_size` here must match the tokenizer's vocabulary size, i.e. `50265`.
Others also mentioned this: https://discuss.pytorch.org/t/hugging-faces-roberta-config-and-tokenizer-do-not-have-matching-vocabulary/134868
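A quick way to see the mismatch, and to avoid it when building a model from scratch, is to size the config from the tokenizer rather than relying on the documented default. This is only an illustrative sketch:
```python
from transformers import RobertaConfig, RobertaForMaskedLM, RobertaTokenizer

tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
print(len(tokenizer))  # 50265

# Default-constructed config; this issue is about its vocab_size not matching the tokenizer.
print(RobertaConfig().vocab_size)

# When building a model from scratch, take the vocabulary size from the tokenizer
# instead of relying on the documented default.
config = RobertaConfig(vocab_size=len(tokenizer))
model = RobertaForMaskedLM(config)
assert model.get_input_embeddings().num_embeddings == len(tokenizer)
```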
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23388/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23388/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23387
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23387/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23387/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23387/events
|
https://github.com/huggingface/transformers/pull/23387
| 1,711,369,145 |
PR_kwDOCUB6oc5QlL1K
| 23,387 |
Run doctest (in PRs) only when some doc example(s) are modified
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,684 | 1,692 | 1,684 |
COLLABORATOR
| null |
# What does this PR do?
Run doctest (in PRs) only when some doc example(s) are modified.
This is a fix for #23327 (which was reverted in #23371 due to the wrong logic).
This PR implements the correct logic for
> for now the tests are launched on a file if we modify it, but I would only launch it if docstrings are modified (e.g. check the modifications are correct) to go faster.
where I go one step further to make it
> only launch it if some _**doc examples**_ are modified
(instead of any docstring)
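As a rough illustration of the idea (not the actual utility used in CI), a file only needs its doctests re-run when its diff touches lines that look like doc examples, e.g. `>>>`/`...` lines:
```python
import re
import subprocess

DOC_EXAMPLE_LINE = re.compile(r"^\s*(>>>|\.\.\.)\s")


def diff_touches_doc_examples(filename: str, base_ref: str = "main") -> bool:
    """Rough check: does the diff of `filename` against `base_ref` add or remove
    lines that look like doc example lines (`>>> ...` / `... ...`)?"""
    diff = subprocess.run(
        ["git", "diff", base_ref, "--", filename],
        capture_output=True, text=True, check=True,
    ).stdout
    for line in diff.splitlines():
        # Only look at added/removed lines, not context lines or hunk headers.
        if line.startswith(("+", "-")) and not line.startswith(("+++", "---")):
            if DOC_EXAMPLE_LINE.match(line[1:]):
                return True
    return False


# Example: decide whether to schedule doctests for a modified modeling file.
# print(diff_touches_doc_examples("src/transformers/models/bert/modeling_bert.py"))
```
A real implementation would also need to account for the expected-output lines that follow `>>>` statements; this sketch ignores them.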
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23387/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23387/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23387",
"html_url": "https://github.com/huggingface/transformers/pull/23387",
"diff_url": "https://github.com/huggingface/transformers/pull/23387.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23387.patch",
"merged_at": 1684272543000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23386
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23386/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23386/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23386/events
|
https://github.com/huggingface/transformers/issues/23386
| 1,711,340,847 |
I_kwDOCUB6oc5mAP0v
| 23,386 |
FSDP cuda out of memory during checkpoint saving
|
{
"login": "li-plus",
"id": 39846316,
"node_id": "MDQ6VXNlcjM5ODQ2MzE2",
"avatar_url": "https://avatars.githubusercontent.com/u/39846316?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/li-plus",
"html_url": "https://github.com/li-plus",
"followers_url": "https://api.github.com/users/li-plus/followers",
"following_url": "https://api.github.com/users/li-plus/following{/other_user}",
"gists_url": "https://api.github.com/users/li-plus/gists{/gist_id}",
"starred_url": "https://api.github.com/users/li-plus/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/li-plus/subscriptions",
"organizations_url": "https://api.github.com/users/li-plus/orgs",
"repos_url": "https://api.github.com/users/li-plus/repos",
"events_url": "https://api.github.com/users/li-plus/events{/privacy}",
"received_events_url": "https://api.github.com/users/li-plus/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Not sure why you tagged myself or Sourab - we have worked on Deepspeed integration, and you're welcome to ask questions if you use that. I personally don't know anything about FSDP - Deepspeed works perfectly well and FSDP implements the same ZeRO protocol that Deepspeed innovated. ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,684 | 1,686 | 1,686 |
CONTRIBUTOR
| null |
### System Info
- `transformers` version: 4.30.0.dev0
- Platform: Linux-4.14.81.bm.22-amd64-x86_64-with-glibc2.24
- Python version: 3.9.16
- Huggingface_hub version: 0.14.1
- Safetensors version: not installed
- PyTorch version (GPU?): 2.0.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: yes
### Who can help?
@stas00 @pacman100
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I'm running https://github.com/tatsu-lab/stanford_alpaca/blob/main/train.py on 32GB GPUs to finetune LLaMA-7B with FSDP turned on. The training process goes well, but I get a CUDA OOM when saving the final checkpoint: the entire fp32 state dict cannot fit in my GPU memory. A possible solution is to offload the state dict to CPU, as mentioned in https://github.com/pytorch/pytorch/issues/98823. Is there any better way to handle this?
```
/data00/home/lijiahao.plus/miniconda3/envs/mlir/lib/python3.9/site-packages/torch/distributed/fsdp/_state_dict_utils.py:312: UserWarning: Failed to clone() tensor with name _fsdp_wrapped_module.model.layers.31.mlp.gate_proj.weight on rank 2. This may mean that this state_dict entry could point to invalid memory regions after returning from state_dict() call if this parameter is managed by FSDP. Please check clone implementation of _fsdp_wrapped_module.model.layers.31.mlp.gate_proj.weight. Error: CUDA out of memory. Tried to allocate 172.00 MiB (GPU 2; 31.75 GiB total capacity; 29.54 GiB already allocated; 39.75 MiB free; 30.38 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
```
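For reference, one way to avoid materializing the full fp32 state dict on a single GPU is PyTorch's full-state-dict config with CPU offload, gathering on rank 0 only. This is only a sketch of the idea (not what `Trainer` currently does), assuming `model` is the FSDP-wrapped module and the process group is already initialized:
```python
import torch
import torch.distributed as dist
import torch.distributed.fsdp as fsdp
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

# Gather the full state dict on rank 0 and offload each gathered shard to CPU,
# so the fp32 parameters never have to fit on a single GPU.
save_policy = fsdp.FullStateDictConfig(offload_to_cpu=True, rank0_only=True)
with FSDP.state_dict_type(model, fsdp.StateDictType.FULL_STATE_DICT, save_policy):
    cpu_state_dict = model.state_dict()

if dist.get_rank() == 0:
    torch.save(cpu_state_dict, "pytorch_model.bin")
```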
### Expected behavior
No OOM when saving checkpoints.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23386/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23386/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23385
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23385/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23385/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23385/events
|
https://github.com/huggingface/transformers/issues/23385
| 1,711,303,242 |
I_kwDOCUB6oc5mAGpK
| 23,385 |
NLLB-MoE 54B multi-GPU inference throws "Expected all tensors to be on the same device" error
|
{
"login": "liyier90",
"id": 56420072,
"node_id": "MDQ6VXNlcjU2NDIwMDcy",
"avatar_url": "https://avatars.githubusercontent.com/u/56420072?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/liyier90",
"html_url": "https://github.com/liyier90",
"followers_url": "https://api.github.com/users/liyier90/followers",
"following_url": "https://api.github.com/users/liyier90/following{/other_user}",
"gists_url": "https://api.github.com/users/liyier90/gists{/gist_id}",
"starred_url": "https://api.github.com/users/liyier90/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/liyier90/subscriptions",
"organizations_url": "https://api.github.com/users/liyier90/orgs",
"repos_url": "https://api.github.com/users/liyier90/repos",
"events_url": "https://api.github.com/users/liyier90/events{/privacy}",
"received_events_url": "https://api.github.com/users/liyier90/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @younesbelkada ",
"Hi @liyier90 \r\nThanks! Sounds like the `_no_split_modules` was not properly checked , I think the fix should be to replace the current `_no_split_modules` with the ones you have defined. \r\nIs this block :\r\n```python\r\n# Demonstrate that only \"model.encoder.layer_norm\" and \"model.encoder.embed_tokens\"\r\n # needs to be on the same device as the input\r\n for module, device in device_map.items():\r\n if module in {\"model.encoder.layer_norm\", \"model.encoder.embed_tokens\"}:\r\n if device != 0:\r\n device_map[module] = 0\r\n else:\r\n if device == 0:\r\n device_map[module] = 1\r\n```\r\nnecessary? I think `accelerate` automatically takes care of setting the input to the correct device through hooks.\r\nWhat happens if you remove it in your case and just use the correct `_no_split_modules`?",
"If I comment out that block, I get the following error:\r\n```\r\nโญโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ Traceback (most recent call last) โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโฎ\r\nโ <path>/code/nscc_working/engr/multi_node/nllb_inference/correct_infer.py:66 โ\r\nโ in <module> โ\r\nโ โ\r\nโ 63 โ\r\nโ 64 โ\r\nโ 65 if __name__ == \"__main__\": โ\r\nโ โฑ 66 โ main() โ\r\nโ 67 โ\r\nโ โ\r\nโ <path>/code/nscc_working/engr/multi_node/nllb_inference/correct_infer.py:58 โ\r\nโ in main โ\r\nโ โ\r\nโ 55 โ โ if torch.is_tensor(inputs[i]): โ\r\nโ 56 โ โ โ inputs[i] = inputs[i].to(\"cuda:0\") โ\r\nโ 57 โ โ\r\nโ โฑ 58 โ translated_tokens = model.generate( โ\r\nโ 59 โ โ **inputs, forced_bos_token_id=tokenizer.lang_code_to_id[\"fra_Latn\"] โ\r\nโ 60 โ ) โ\r\nโ 61 โ outputs = tokenizer.batch_decode(translated_tokens, skip_special_tokens=True) โ\r\nโ โ\r\nโ <path>/.conda/envs/megatron/lib/python3.8/site-packages/torch/utils/_contextl โ\r\nโ ib.py:115 in decorate_context โ\r\nโ โ\r\nโ 112 โ @functools.wraps(func) โ\r\nโ 113 โ def decorate_context(*args, **kwargs): โ\r\nโ 114 โ โ with ctx_factory(): โ\r\nโ โฑ 115 โ โ โ return func(*args, **kwargs) โ\r\nโ 116 โ โ\r\nโ 117 โ return decorate_context โ\r\nโ 118 โ\r\nโ โ\r\nโ <path>/.conda/envs/megatron/lib/python3.8/site-packages/transformers/generati โ\r\nโ on/utils.py:1437 in generate โ\r\nโ โ\r\nโ 1434 โ โ โ โ ) โ\r\nโ 1435 โ โ โ โ\r\nโ 1436 โ โ โ # 11. run greedy search โ\r\nโ โฑ 1437 โ โ โ return self.greedy_search( โ\r\nโ 1438 โ โ โ โ input_ids, โ\r\nโ 1439 โ โ โ โ logits_processor=logits_processor, โ\r\nโ 1440 โ โ โ โ stopping_criteria=stopping_criteria, โ\r\nโ โ\r\nโ <path>/.conda/envs/megatron/lib/python3.8/site-packages/transformers/generati โ\r\nโ on/utils.py:2288 in greedy_search โ\r\nโ โ\r\nโ 2285 โ โ โ if eos_token_id is not None: โ\r\nโ 2286 โ โ โ โ if pad_token_id is None: โ\r\nโ 2287 โ โ โ โ โ raise ValueError(\"If `eos_token_id` is defined, make sure that ` โ\r\nโ โฑ 2288 โ โ โ โ next_tokens = next_tokens * unfinished_sequences + pad_token_id * (1 โ\r\nโ 2289 โ โ โ โ\r\nโ 2290 โ โ โ # update generated ids, model inputs, and length for next step โ\r\nโ 2291 โ โ โ input_ids = torch.cat([input_ids, next_tokens[:, None]], dim=-1) โ\r\nโฐโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโฏ\r\nRuntimeError: Expected all tensors to be on the same device, but found at least two devices,\r\ncuda:1 and cuda:0!\r\n```\r\n\r\nBecause `model.encoder.layer_norm` got put on device 1:\r\n```\r\n{'lm_head': 0,\r\n 'model.decoder.embed_positions': 1,\r\n 'model.decoder.embed_tokens': 1,\r\n 'model.decoder.layer_norm': 2,\r\n 'model.decoder.layers.0': 1,\r\n 'model.decoder.layers.1': 1,\r\n 'model.decoder.layers.10': 2,\r\n 'model.decoder.layers.11': 2,\r\n 'model.decoder.layers.12': 2,\r\n 'model.decoder.layers.13': 2,\r\n 'model.decoder.layers.14': 2,\r\n 'model.decoder.layers.15': 2,\r\n 'model.decoder.layers.16': 2,\r\n 'model.decoder.layers.17': 2,\r\n 'model.decoder.layers.18': 2,\r\n 'model.decoder.layers.19': 2,\r\n 'model.decoder.layers.2': 1,\r\n 'model.decoder.layers.20': 2,\r\n 'model.decoder.layers.21': 2,\r\n 'model.decoder.layers.22': 2,\r\n 'model.decoder.layers.23': 2,\r\n 'model.decoder.layers.3': 1,\r\n 'model.decoder.layers.4': 1,\r\n 'model.decoder.layers.5': 1,\r\n 'model.decoder.layers.6': 1,\r\n 'model.decoder.layers.7': 2,\r\n 'model.decoder.layers.8': 2,\r\n 'model.decoder.layers.9': 2,\r\n 'model.encoder.embed_positions': 0,\r\n 'model.encoder.embed_tokens': 0,\r\n 'model.encoder.layer_norm': 1,\r\n 
'model.encoder.layers.0': 0,\r\n 'model.encoder.layers.1': 0,\r\n 'model.encoder.layers.10': 1,\r\n 'model.encoder.layers.11': 1,\r\n 'model.encoder.layers.12': 1,\r\n 'model.encoder.layers.13': 1,\r\n 'model.encoder.layers.14': 1,\r\n 'model.encoder.layers.15': 1,\r\n 'model.encoder.layers.16': 1,\r\n 'model.encoder.layers.17': 1,\r\n 'model.encoder.layers.18': 1,\r\n 'model.encoder.layers.19': 1,\r\n 'model.encoder.layers.2': 0,\r\n 'model.encoder.layers.20': 1,\r\n 'model.encoder.layers.21': 1,\r\n 'model.encoder.layers.22': 1,\r\n 'model.encoder.layers.23': 1,\r\n 'model.encoder.layers.3': 1,\r\n 'model.encoder.layers.4': 1,\r\n 'model.encoder.layers.5': 1,\r\n 'model.encoder.layers.6': 1,\r\n 'model.encoder.layers.7': 1,\r\n 'model.encoder.layers.8': 1,\r\n 'model.encoder.layers.9': 1,\r\n 'model.shared': 0}\r\n```\r\n\r\nIt could be because I'm moving all inputs to device 0, but if I were to remove the \r\n```\r\n for i in inputs:\r\n if torch.is_tensor(inputs[i]):\r\n inputs[i] = inputs[i].to(\"cuda:0\")\r\n```\r\nblock. I get\r\n```\r\nRuntimeError: Expected all tensors to be on the same device, but found at least two devices,\r\ncuda:1 and cpu!\r\n```\r\n",
"~Hey thanks for reporting! From the look of it, it seems like this is an `accelerate` issue rather than a transformer issue. (as accelerate should be moving the layers to the correct device on its own, and no_split modules does not support individual layers to be on the same module). Could you open an issue over there? ๐~\r\nedit: I got confused by the only 2 layers that you had to put on another device, @younesbelkada explained offline what he think should fix it! ",
"I don't see where the error in Accelerate lies. No layers that is not supposed to be split has been split. So the issue is definitely a Transformers one.",
"Yeah I think it is definitely something that has to do with no split modules not correctly set. Having a look now",
"@liyier90 \r\nI made https://github.com/huggingface/transformers/pull/23758 that should fix your issue.\r\nAlso make sure to put the input ids on the same device as your lm head. Otherwise you will get device mismatch issues in `generate`.\r\nThe snippet I used is the one below, on a 2xNVIDIA A100 80GB: \r\n```python\r\nimport torch\r\nfrom transformers import AutoModelForSeq2SeqLM, AutoTokenizer\r\n\r\nmodel_name = \"facebook/nllb-moe-54b\"\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(model_name)\r\nmodel = AutoModelForSeq2SeqLM.from_pretrained(\r\n model_name,\r\n torch_dtype=torch.float16,\r\n device_map=\"auto\",\r\n load_in_8bit=True,\r\n)\r\n\r\nbatched_input = [\r\n 'We now have 4-month-old mice that are non-diabetic that used to be diabetic,\" he added.',\r\n \"Dr. Ehud Ur, professor of medicine at Dalhousie University in Halifax, Nova Scotia and chair of the clinical and scientific division of the Canadian Diabetes Association cautioned that the research is still in its early days.\"\r\n \"Like some other experts, he is skeptical about whether diabetes can be cured, noting that these findings have no relevance to people who already have Type 1 diabetes.\"\r\n \"On Monday, Sara Danius, permanent secretary of the Nobel Committee for Literature at the Swedish Academy, publicly announced during a radio program on Sveriges Radio in Sweden the committee, unable to reach Bob Dylan directly about winning the 2016 Nobel Prize in Literature, had abandoned its efforts to reach him.\",\r\n 'Danius said, \"Right now we are doing nothing. I have called and sent emails to his closest collaborator and received very friendly replies. For now, that is certainly enough.\"',\r\n \"Previously, Ring's CEO, Jamie Siminoff, remarked the company started when his doorbell wasn't audible from his shop in his garage.\",\r\n]\r\ninputs = tokenizer(batched_input, return_tensors=\"pt\", padding=True).to(1)\r\n\r\ntranslated_tokens = model.generate(\r\n **inputs, forced_bos_token_id=tokenizer.lang_code_to_id[\"fra_Latn\"]\r\n)\r\noutputs = tokenizer.batch_decode(translated_tokens, skip_special_tokens=True)\r\nprint(outputs)\r\n```\r\nI had to assign the input to the device 1 because in my case the lm head was on the device 1. But you can retrieve it with \r\n\r\n```python\r\nlm_head_device = model.hf_device_map[\"lm_head\"]\r\n```\r\n\r\nAnd the result I get is:\r\n\r\n\r\n```bash\r\n['Nous avons maintenant des souris de 4 mois qui ne sont pas diabรฉtiques mais qui l\\'รฉtaient\", a-t-il ajoutรฉ.', \"Le Dr Ehud Ur, professeur de mรฉdecine ร l'Universitรฉ Dalhousie ร Halifax, en Nouvelle-รcosse, et prรฉsident de la division clinique et scientifique de l'Association canadienne du diabรจte, a averti que la recherche en รฉtait encore ร ses dรฉbuts. Comme d'autres experts, il est sceptique quant ร la possibilitรฉ de guรฉrir le diabรจte, notant que ces rรฉsultats n'ont aucune pertinence pour les personnes atteintes de diabรจte de type 1.\", 'Danius a dรฉclarรฉ: \"Pour le moment, nous ne faisons rien. J\\'ai appelรฉ et envoyรฉ des courriels ร son plus proche collaborateur et j\\'ai reรงu des rรฉponses trรจs amicales. Pour l\\'instant, c\\'est certainement suffisant\".', \"Auparavant, le PDG de Ring, Jamie Siminoff, a dรฉclarรฉ que la sociรฉtรฉ avait commencรฉ lorsque sa sonnette n'รฉtait pas audible depuis son magasin dans son garage.\"]\r\n```\r\n\r\n",
"@younesbelkada \r\n\r\nUnfortunately, I don't think changes in the PR was sufficient to resolve the error.\r\n\r\nI updated `transformers` to include the fix using\r\n```\r\npip install git+https://github.com/huggingface/transformers\r\n```\r\nThe latest commit on the `main` branch was https://github.com/huggingface/transformers/commit/f67dac97bdc63874f2288546b3fa87e69d2ea1c8.\r\n\r\nI ran code snippet you provided but on 4 x A100 40GB as I do not have access to 80 GB cards. I made the modification to move the input to the same device as `lm_head` based on your advice.\r\n\r\n```python\r\nimport os \r\n \r\nimport torch \r\nfrom transformers import AutoModelForSeq2SeqLM, AutoTokenizer \r\n \r\nmodel_name = \"facebook/nllb-moe-54b\" \r\ncache_dir = <path>\r\n \r\ntokenizer = AutoTokenizer.from_pretrained(model_name, cache_dir=cache_dir) \r\nmodel = AutoModelForSeq2SeqLM.from_pretrained( \r\n model_name, \r\n torch_dtype=torch.float16, \r\n device_map=\"auto\", \r\n load_in_8bit=True, \r\n cache_dir=cache_dir, \r\n) \r\n \r\nbatched_input = [ \r\n 'We now have 4-month-old mice that are non-diabetic that used to be diabetic,\" he added.',\r\n \"Dr. Ehud Ur, professor of medicine at Dalhousie University in Halifax, Nova Scotia and chair of the clinical and scientific division of the Canadian Diabetes Association cautioned that the research is still in its early days.\"\r\n \"Like some other experts, he is skeptical about whether diabetes can be cured, noting that these findings have no relevance to people who already have Type 1 diabetes.\"\r\n \"On Monday, Sara Danius, permanent secretary of the Nobel Committee for Literature at the Swedish Academy, publicly announced during a radio program on Sveriges Radio in Sweden the committee, unable to reach Bob Dylan directly about winning the 2016 Nobel Prize in Literature, had abandoned its efforts to reach him.\",\r\n 'Danius said, \"Right now we are doing nothing. I have called and sent emails to his closest collaborator and received very friendly replies. 
For now, that is certainly enough.\"',\r\n \"Previously, Ring's CEO, Jamie Siminoff, remarked the company started when his doorbell wasn't audible from his shop in his garage.\",\r\n] \r\ninputs = tokenizer(batched_input, return_tensors=\"pt\", padding=True).to( \r\n model.hf_device_map[\"lm_head\"] \r\n) \r\n \r\ntranslated_tokens = model.generate( \r\n **inputs, forced_bos_token_id=tokenizer.lang_code_to_id[\"fra_Latn\"] \r\n) \r\noutputs = tokenizer.batch_decode(translated_tokens, skip_special_tokens=True) \r\nprint(outputs) \r\n```\r\n\r\nBut I am still getting an \"Expected all tensors to be on the same device\" error.\r\n\r\n```\r\nโญโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ Traceback (most recent call last) โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโฎ\r\nโ /home/users/nus/yier/code/nscc_working/engr/multi_node/nllb_inference/sample_infer.py:31 in โ\r\nโ <module> โ\r\nโ โ\r\nโ 28 โ model.hf_device_map[\"lm_head\"] โ\r\nโ 29 ) โ\r\nโ 30 โ\r\nโ โฑ 31 translated_tokens = model.generate( โ\r\nโ 32 โ **inputs, forced_bos_token_id=tokenizer.lang_code_to_id[\"fra_Latn\"] โ\r\nโ 33 ) โ\r\nโ 34 outputs = tokenizer.batch_decode(translated_tokens, skip_special_tokens=True) โ\r\nโ โ\r\nโ /home/users/nus/yier/.conda/envs/megatron/lib/python3.8/site-packages/torch/utils/_contextlib.py โ\r\nโ :115 in decorate_context โ\r\nโ โ\r\nโ 112 โ @functools.wraps(func) โ\r\nโ 113 โ def decorate_context(*args, **kwargs): โ\r\nโ 114 โ โ with ctx_factory(): โ\r\nโ โฑ 115 โ โ โ return func(*args, **kwargs) โ\r\nโ 116 โ โ\r\nโ 117 โ return decorate_context โ\r\nโ 118 โ\r\nโ โ\r\nโ /home/users/nus/yier/.conda/envs/megatron/lib/python3.8/site-packages/transformers/generation/ut โ\r\nโ ils.py:1518 in generate โ\r\nโ โ\r\nโ 1515 โ โ โ โ ) โ\r\nโ 1516 โ โ โ โ\r\nโ 1517 โ โ โ # 11. run greedy search โ\r\nโ โฑ 1518 โ โ โ return self.greedy_search( โ\r\nโ 1519 โ โ โ โ input_ids, โ\r\nโ 1520 โ โ โ โ logits_processor=logits_processor, โ\r\nโ 1521 โ โ โ โ stopping_criteria=stopping_criteria, โ\r\nโ โ\r\nโ /home/users/nus/yier/.conda/envs/megatron/lib/python3.8/site-packages/transformers/generation/ut โ\r\nโ ils.py:2375 in greedy_search โ\r\nโ โ\r\nโ 2372 โ โ โ if eos_token_id is not None: โ\r\nโ 2373 โ โ โ โ if pad_token_id is None: โ\r\nโ 2374 โ โ โ โ โ raise ValueError(\"If `eos_token_id` is defined, make sure that `pad_ โ\r\nโ โฑ 2375 โ โ โ โ next_tokens = next_tokens * unfinished_sequences + pad_token_id * (1 - u โ\r\nโ 2376 โ โ โ โ\r\nโ 2377 โ โ โ # update generated ids, model inputs, and length for next step โ\r\nโ 2378 โ โ โ input_ids = torch.cat([input_ids, next_tokens[:, None]], dim=-1) โ\r\nโฐโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโฏ\r\nRuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:2 and cuda:0!\r\n```\r\n\r\nI notice that one of the layers I moved in my earlier snippets (`model.encoder.layer_norm`) was on `cuda:2`.\r\n```\r\n{'lm_head': 0,\r\n 'model.decoder.embed_positions': 2,\r\n 'model.decoder.embed_tokens': 2,\r\n 'model.decoder.layer_norm': 3,\r\n 'model.decoder.layers.0': 2,\r\n 'model.decoder.layers.1': 2,\r\n 'model.decoder.layers.10': 3,\r\n 'model.decoder.layers.11': 3,\r\n 'model.decoder.layers.12': 3,\r\n 'model.decoder.layers.13': 3,\r\n 'model.decoder.layers.14': 3,\r\n 'model.decoder.layers.15': 3,\r\n 'model.decoder.layers.16': 3,\r\n 'model.decoder.layers.17': 3,\r\n 'model.decoder.layers.18': 3,\r\n 'model.decoder.layers.19': 3,\r\n 'model.decoder.layers.2': 2,\r\n 'model.decoder.layers.20': 3,\r\n 
'model.decoder.layers.21': 3,\r\n 'model.decoder.layers.22': 3,\r\n 'model.decoder.layers.23': 3,\r\n 'model.decoder.layers.3': 2,\r\n 'model.decoder.layers.4': 2,\r\n 'model.decoder.layers.5': 2,\r\n 'model.decoder.layers.6': 2,\r\n 'model.decoder.layers.7': 3,\r\n 'model.decoder.layers.8': 3,\r\n 'model.decoder.layers.9': 3,\r\n 'model.encoder.embed_positions': 0,\r\n 'model.encoder.embed_tokens': 0,\r\n 'model.encoder.layer_norm': 2,\r\n 'model.encoder.layers.0': 0,\r\n 'model.encoder.layers.1': 0,\r\n 'model.encoder.layers.10': 1,\r\n 'model.encoder.layers.11': 1,\r\n 'model.encoder.layers.12': 1,\r\n 'model.encoder.layers.13': 1,\r\n 'model.encoder.layers.14': 1,\r\n 'model.encoder.layers.15': 1,\r\n 'model.encoder.layers.16': 1,\r\n 'model.encoder.layers.17': 1,\r\n 'model.encoder.layers.18': 1,\r\n 'model.encoder.layers.19': 2,\r\n 'model.encoder.layers.2': 0,\r\n 'model.encoder.layers.20': 2,\r\n 'model.encoder.layers.21': 2,\r\n 'model.encoder.layers.22': 2,\r\n 'model.encoder.layers.23': 2,\r\n 'model.encoder.layers.3': 0,\r\n 'model.encoder.layers.4': 0,\r\n 'model.encoder.layers.5': 0,\r\n 'model.encoder.layers.6': 0,\r\n 'model.encoder.layers.7': 1,\r\n 'model.encoder.layers.8': 1,\r\n 'model.encoder.layers.9': 1,\r\n 'model.shared': 0}\r\n```\r\n\r\nThe code ran successfully after I moved `model.encoder.layer_norm` to `cuda:0` while keeping the other device mapping untouched.\r\n\r\nPlease let me know if I made any mistakes in trying out your solution or if I should be raising this in the Accelerate repo instead. Thanks!",
"I am having the same issues. I installed transformers after the fix and I get ```RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:1 and cuda:0!```\r\n\r\nUnfortunately I only have 3 A100 40gb gpus that I can use. \r\n```\r\nfrom transformers import AutoModelForSeq2SeqLM, AutoTokenizer\r\nimport torch\r\n\r\nmodel_name = \"nllb_image/nllb-moe-54b\"\r\ntokenizer = AutoTokenizer.from_pretrained(model_name)\r\nmodel = AutoModelForSeq2SeqLM.from_pretrained(model_name,\r\n torch_dtype=torch.float16,\r\n device_map = 'auto',\r\n load_in_8bit=True,)\r\ninputs = tokenizer(\"test\", return_tensors=\"pt\").to(model.hf_device_map[\"lm_head\"])\r\ntranslated_tokens = model.generate(\r\n **inputs, forced_bos_token_id=tokenizer.lang_code_to_id[\"fr_Latn\"], max_length=512\r\n)\r\ndecoded_sentence = tokenizer.batch_decode(translated_tokens, skip_special_tokens=True)[0]\r\nreturn decoded_sentence\r\n```\r\nexpected result: translated \"test\" (french)\r\n\r\nactual result: RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:1 and cuda:0!\r\n\r\nAm I doing anything wrong?\r\n\r\n```\r\n{\r\n \"model.shared\":0,\r\n \"lm_head\":0,\r\n \"model.encoder.embed_tokens\":0,\r\n \"model.encoder.embed_positions\":0,\r\n \"model.encoder.layers.0\":0,\r\n \"model.encoder.layers.1\":0,\r\n \"model.encoder.layers.2\":0,\r\n \"model.encoder.layers.3\":0,\r\n \"model.encoder.layers.4\":0,\r\n \"model.encoder.layers.5\":0,\r\n \"model.encoder.layers.6\":0,\r\n \"model.encoder.layers.7\":0,\r\n \"model.encoder.layers.8\":0,\r\n \"model.encoder.layers.9\":0,\r\n \"model.encoder.layers.10\":0,\r\n \"model.encoder.layers.11\":0,\r\n \"model.encoder.layers.12\":0,\r\n \"model.encoder.layers.13\":0,\r\n \"model.encoder.layers.14\":0,\r\n \"model.encoder.layers.15.self_attn\":0,\r\n \"model.encoder.layers.15.attn_dropout\":0,\r\n \"model.encoder.layers.15.self_attn_layer_norm\":0,\r\n \"model.encoder.layers.15.ffn.router\":0,\r\n \"model.encoder.layers.15.ffn.token_dropout\":0,\r\n \"model.encoder.layers.15.ffn.experts.expert_0\":0,\r\n \"model.encoder.layers.15.ffn.experts.expert_1\":0,\r\n \"model.encoder.layers.15.ffn.experts.expert_2\":0,\r\n \"model.encoder.layers.15.ffn.experts.expert_3\":0,\r\n \"model.encoder.layers.15.ffn.experts.expert_4\":0,\r\n \"model.encoder.layers.15.ffn.experts.expert_5\":0,\r\n \"model.encoder.layers.15.ffn.experts.expert_6\":0,\r\n \"model.encoder.layers.15.ffn.experts.expert_7\":0,\r\n \"model.encoder.layers.15.ffn.experts.expert_8\":0,\r\n \"model.encoder.layers.15.ffn.experts.expert_9\":0,\r\n \"model.encoder.layers.15.ffn.experts.expert_10\":0,\r\n \"model.encoder.layers.15.ffn.experts.expert_11\":0,\r\n \"model.encoder.layers.15.ffn.experts.expert_12\":0,\r\n \"model.encoder.layers.15.ffn.experts.expert_13\":0,\r\n \"model.encoder.layers.15.ffn.experts.expert_14\":0,\r\n \"model.encoder.layers.15.ffn.experts.expert_15\":0,\r\n \"model.encoder.layers.15.ffn.experts.expert_16\":0,\r\n \"model.encoder.layers.15.ffn.experts.expert_17\":0,\r\n \"model.encoder.layers.15.ffn.experts.expert_18\":0,\r\n \"model.encoder.layers.15.ffn.experts.expert_19\":0,\r\n \"model.encoder.layers.15.ffn.experts.expert_20\":0,\r\n \"model.encoder.layers.15.ffn.experts.expert_21\":0,\r\n \"model.encoder.layers.15.ffn.experts.expert_22\":0,\r\n \"model.encoder.layers.15.ffn.experts.expert_23\":0,\r\n \"model.encoder.layers.15.ffn.experts.expert_24\":0,\r\n \"model.encoder.layers.15.ffn.experts.expert_25\":0,\r\n 
\"model.encoder.layers.15.ffn.experts.expert_26\":0,\r\n \"model.encoder.layers.15.ffn.experts.expert_27\":0,\r\n \"model.encoder.layers.15.ffn.experts.expert_28\":0,\r\n \"model.encoder.layers.15.ffn.experts.expert_29\":0,\r\n \"model.encoder.layers.15.ffn.experts.expert_30\":0,\r\n \"model.encoder.layers.15.ffn.experts.expert_31\":0,\r\n \"model.encoder.layers.15.ffn.experts.expert_32\":0,\r\n \"model.encoder.layers.15.ffn.experts.expert_33\":0,\r\n \"model.encoder.layers.15.ffn.experts.expert_34\":0,\r\n \"model.encoder.layers.15.ffn.experts.expert_35\":0,\r\n \"model.encoder.layers.15.ffn.experts.expert_36\":0,\r\n \"model.encoder.layers.15.ffn.experts.expert_37\":0,\r\n \"model.encoder.layers.15.ffn.experts.expert_38\":0,\r\n \"model.encoder.layers.15.ffn.experts.expert_39\":0,\r\n \"model.encoder.layers.15.ffn.experts.expert_40\":0,\r\n \"model.encoder.layers.15.ffn.experts.expert_41\":0,\r\n \"model.encoder.layers.15.ffn.experts.expert_42\":0,\r\n \"model.encoder.layers.15.ffn.experts.expert_43\":0,\r\n \"model.encoder.layers.15.ffn.experts.expert_44\":0,\r\n \"model.encoder.layers.15.ffn.experts.expert_45\":0,\r\n \"model.encoder.layers.15.ffn.experts.expert_46\":0,\r\n \"model.encoder.layers.15.ffn.experts.expert_47\":0,\r\n \"model.encoder.layers.15.ffn.experts.expert_48\":0,\r\n \"model.encoder.layers.15.ffn.experts.expert_49\":0,\r\n \"model.encoder.layers.15.ffn.experts.expert_50\":0,\r\n \"model.encoder.layers.15.ffn.experts.expert_51\":0,\r\n \"model.encoder.layers.15.ffn.experts.expert_52\":0,\r\n \"model.encoder.layers.15.ffn.experts.expert_53\":0,\r\n \"model.encoder.layers.15.ffn.experts.expert_54\":0,\r\n \"model.encoder.layers.15.ffn.experts.expert_55\":0,\r\n \"model.encoder.layers.15.ffn.experts.expert_56\":0,\r\n \"model.encoder.layers.15.ffn.experts.expert_57\":0,\r\n \"model.encoder.layers.15.ffn.experts.expert_58\":0,\r\n \"model.encoder.layers.15.ffn.experts.expert_59\":0,\r\n \"model.encoder.layers.15.ffn.experts.expert_60\":0,\r\n \"model.encoder.layers.15.ffn.experts.expert_61\":0,\r\n \"model.encoder.layers.15.ffn.experts.expert_62\":0,\r\n \"model.encoder.layers.15.ffn.experts.expert_63\":0,\r\n \"model.encoder.layers.15.ffn.experts.expert_64\":0,\r\n \"model.encoder.layers.15.ffn.experts.expert_65\":0,\r\n \"model.encoder.layers.15.ffn.experts.expert_66\":0,\r\n \"model.encoder.layers.15.ffn.experts.expert_67\":0,\r\n \"model.encoder.layers.15.ffn.experts.expert_68\":0,\r\n \"model.encoder.layers.15.ffn.experts.expert_69\":0,\r\n \"model.encoder.layers.15.ffn.experts.expert_70\":0,\r\n \"model.encoder.layers.15.ffn.experts.expert_71\":0,\r\n \"model.encoder.layers.15.ffn.experts.expert_72\":0,\r\n \"model.encoder.layers.15.ffn.experts.expert_73\":0,\r\n \"model.encoder.layers.15.ffn.experts.expert_74\":0,\r\n \"model.encoder.layers.15.ffn.experts.expert_75\":0,\r\n \"model.encoder.layers.15.ffn.experts.expert_76\":0,\r\n \"model.encoder.layers.15.ffn.experts.expert_77\":0,\r\n \"model.encoder.layers.15.ffn.experts.expert_78\":0,\r\n \"model.encoder.layers.15.ffn.experts.expert_79\":0,\r\n \"model.encoder.layers.15.ffn.experts.expert_80\":0,\r\n \"model.encoder.layers.15.ffn.experts.expert_81\":0,\r\n \"model.encoder.layers.15.ffn.experts.expert_82\":0,\r\n \"model.encoder.layers.15.ffn.experts.expert_83\":0,\r\n \"model.encoder.layers.15.ffn.experts.expert_84\":0,\r\n \"model.encoder.layers.15.ffn.experts.expert_85\":0,\r\n \"model.encoder.layers.15.ffn.experts.expert_86\":0,\r\n \"model.encoder.layers.15.ffn.experts.expert_87\":0,\r\n 
\"model.encoder.layers.15.ffn.experts.expert_88\":0,\r\n \"model.encoder.layers.15.ffn.experts.expert_89\":0,\r\n \"model.encoder.layers.15.ffn.experts.expert_90\":0,\r\n \"model.encoder.layers.15.ffn.experts.expert_91\":0,\r\n \"model.encoder.layers.15.ffn.experts.expert_92\":0,\r\n \"model.encoder.layers.15.ffn.experts.expert_93\":0,\r\n \"model.encoder.layers.15.ffn.experts.expert_94\":0,\r\n \"model.encoder.layers.15.ffn.experts.expert_95\":0,\r\n \"model.encoder.layers.15.ffn.experts.expert_96\":0,\r\n \"model.encoder.layers.15.ffn.experts.expert_97\":0,\r\n \"model.encoder.layers.15.ffn.experts.expert_98\":0,\r\n \"model.encoder.layers.15.ffn.experts.expert_99\":0,\r\n \"model.encoder.layers.15.ffn.experts.expert_100\":0,\r\n \"model.encoder.layers.15.ffn.experts.expert_102\":1,\r\n \"model.encoder.layers.15.ffn.experts.expert_103\":1,\r\n \"model.encoder.layers.15.ffn.experts.expert_104\":1,\r\n \"model.encoder.layers.15.ffn.experts.expert_105\":1,\r\n \"model.encoder.layers.15.ffn.experts.expert_106\":1,\r\n \"model.encoder.layers.15.ffn.experts.expert_107\":1,\r\n \"model.encoder.layers.15.ffn.experts.expert_108\":1,\r\n \"model.encoder.layers.15.ffn.experts.expert_109\":1,\r\n \"model.encoder.layers.15.ffn.experts.expert_110\":1,\r\n \"model.encoder.layers.15.ffn.experts.expert_111\":1,\r\n \"model.encoder.layers.15.ffn.experts.expert_112\":1,\r\n \"model.encoder.layers.15.ffn.experts.expert_113\":1,\r\n \"model.encoder.layers.15.ffn.experts.expert_114\":1,\r\n \"model.encoder.layers.15.ffn.experts.expert_115\":1,\r\n \"model.encoder.layers.15.ffn.experts.expert_116\":1,\r\n \"model.encoder.layers.15.ffn.experts.expert_117\":1,\r\n \"model.encoder.layers.15.ffn.experts.expert_118\":1,\r\n \"model.encoder.layers.15.ffn.experts.expert_119\":1,\r\n \"model.encoder.layers.15.ffn.experts.expert_120\":1,\r\n \"model.encoder.layers.15.ffn.experts.expert_121\":1,\r\n \"model.encoder.layers.15.ffn.experts.expert_122\":1,\r\n \"model.encoder.layers.15.ffn.experts.expert_123\":1,\r\n \"model.encoder.layers.15.ffn.experts.expert_124\":1,\r\n \"model.encoder.layers.15.ffn.experts.expert_125\":1,\r\n \"model.encoder.layers.15.ffn.experts.expert_126\":1,\r\n \"model.encoder.layers.15.ffn.experts.expert_127\":1,\r\n \"model.encoder.layers.15.ff_layer_norm\":1,\r\n \"model.encoder.layers.15.ff_dropout\":1,\r\n \"model.encoder.layers.16\":1,\r\n \"model.encoder.layers.17\":1,\r\n \"model.encoder.layers.18\":1,\r\n \"model.encoder.layers.19\":1,\r\n \"model.encoder.layers.20\":1,\r\n \"model.encoder.layers.21\":1,\r\n \"model.encoder.layers.22\":1,\r\n \"model.encoder.layers.23\":1,\r\n \"model.encoder.layer_norm\":1,\r\n \"model.decoder.embed_tokens\":1,\r\n \"model.decoder.embed_positions\":1,\r\n \"model.decoder.layers.0\":1,\r\n \"model.decoder.layers.1\":1,\r\n \"model.decoder.layers.2\":1,\r\n \"model.decoder.layers.3\":1,\r\n \"model.decoder.layers.4\":1,\r\n \"model.decoder.layers.5\":1,\r\n \"model.decoder.layers.6\":1,\r\n \"model.decoder.layers.7.self_attn\":1,\r\n \"model.decoder.layers.7.activation_fn\":1,\r\n \"model.decoder.layers.7.attn_dropout\":1,\r\n \"model.decoder.layers.7.self_attn_layer_norm\":1,\r\n \"model.decoder.layers.7.cross_attention\":1,\r\n \"model.decoder.layers.7.cross_attention_layer_norm\":1,\r\n \"model.decoder.layers.7.ffn.router\":1,\r\n \"model.decoder.layers.7.ffn.token_dropout\":1,\r\n \"model.decoder.layers.7.ffn.experts.expert_0\":1,\r\n \"model.decoder.layers.7.ffn.experts.expert_1\":1,\r\n \"model.decoder.layers.7.ffn.experts.expert_2\":1,\r\n 
\"model.decoder.layers.7.ffn.experts.expert_3\":1,\r\n \"model.decoder.layers.7.ffn.experts.expert_4\":1,\r\n \"model.decoder.layers.7.ffn.experts.expert_5\":1,\r\n \"model.decoder.layers.7.ffn.experts.expert_6\":1,\r\n \"model.decoder.layers.7.ffn.experts.expert_7\":1,\r\n \"model.decoder.layers.7.ffn.experts.expert_8\":1,\r\n \"model.decoder.layers.7.ffn.experts.expert_9\":1,\r\n \"model.decoder.layers.7.ffn.experts.expert_10\":1,\r\n \"model.decoder.layers.7.ffn.experts.expert_11\":1,\r\n \"model.decoder.layers.7.ffn.experts.expert_12\":1,\r\n \"model.decoder.layers.7.ffn.experts.expert_13\":1,\r\n \"model.decoder.layers.7.ffn.experts.expert_14\":1,\r\n \"model.decoder.layers.7.ffn.experts.expert_15\":1,\r\n \"model.decoder.layers.7.ffn.experts.expert_16\":1,\r\n \"model.decoder.layers.7.ffn.experts.expert_17\":1,\r\n \"model.decoder.layers.7.ffn.experts.expert_18\":1,\r\n \"model.decoder.layers.7.ffn.experts.expert_19\":1,\r\n \"model.decoder.layers.7.ffn.experts.expert_20\":1,\r\n \"model.decoder.layers.7.ffn.experts.expert_21\":1,\r\n \"model.decoder.layers.7.ffn.experts.expert_22\":1,\r\n \"model.decoder.layers.7.ffn.experts.expert_23\":1,\r\n \"model.decoder.layers.7.ffn.experts.expert_24\":1,\r\n \"model.decoder.layers.7.ffn.experts.expert_25\":1,\r\n \"model.decoder.layers.7.ffn.experts.expert_26\":1,\r\n \"model.decoder.layers.7.ffn.experts.expert_27\":1,\r\n \"model.decoder.layers.7.ffn.experts.expert_28\":1,\r\n \"model.decoder.layers.7.ffn.experts.expert_29\":1,\r\n \"model.decoder.layers.7.ffn.experts.expert_30\":1,\r\n \"model.decoder.layers.7.ffn.experts.expert_31\":1,\r\n \"model.decoder.layers.7.ffn.experts.expert_32\":1,\r\n \"model.decoder.layers.7.ffn.experts.expert_33\":1,\r\n \"model.decoder.layers.7.ffn.experts.expert_34\":1,\r\n \"model.decoder.layers.7.ffn.experts.expert_35\":1,\r\n \"model.decoder.layers.7.ffn.experts.expert_36\":1,\r\n \"model.decoder.layers.7.ffn.experts.expert_37\":1,\r\n \"model.decoder.layers.7.ffn.experts.expert_38\":1,\r\n \"model.decoder.layers.7.ffn.experts.expert_39\":1,\r\n \"model.decoder.layers.7.ffn.experts.expert_40\":1,\r\n \"model.decoder.layers.7.ffn.experts.expert_41\":1,\r\n \"model.decoder.layers.7.ffn.experts.expert_42\":1,\r\n \"model.decoder.layers.7.ffn.experts.expert_43\":1,\r\n \"model.decoder.layers.7.ffn.experts.expert_44\":1,\r\n \"model.decoder.layers.7.ffn.experts.expert_45\":1,\r\n \"model.decoder.layers.7.ffn.experts.expert_46\":1,\r\n \"model.decoder.layers.7.ffn.experts.expert_47\":1,\r\n \"model.decoder.layers.7.ffn.experts.expert_48\":1,\r\n \"model.decoder.layers.7.ffn.experts.expert_49\":1,\r\n \"model.decoder.layers.7.ffn.experts.expert_50\":1,\r\n \"model.decoder.layers.7.ffn.experts.expert_51\":1,\r\n \"model.decoder.layers.7.ffn.experts.expert_52\":1,\r\n \"model.decoder.layers.7.ffn.experts.expert_53\":1,\r\n \"model.decoder.layers.7.ffn.experts.expert_54\":1,\r\n \"model.decoder.layers.7.ffn.experts.expert_55\":1,\r\n \"model.decoder.layers.7.ffn.experts.expert_56\":1,\r\n \"model.decoder.layers.7.ffn.experts.expert_57\":1,\r\n \"model.decoder.layers.7.ffn.experts.expert_58\":1,\r\n \"model.decoder.layers.7.ffn.experts.expert_59\":1,\r\n \"model.decoder.layers.7.ffn.experts.expert_60\":1,\r\n \"model.decoder.layers.7.ffn.experts.expert_61\":1,\r\n \"model.decoder.layers.7.ffn.experts.expert_62\":1,\r\n \"model.decoder.layers.7.ffn.experts.expert_63\":1,\r\n \"model.decoder.layers.7.ffn.experts.expert_64\":1,\r\n \"model.decoder.layers.7.ffn.experts.expert_65\":1,\r\n 
\"model.decoder.layers.7.ffn.experts.expert_66\":1,\r\n \"model.decoder.layers.7.ffn.experts.expert_67\":1,\r\n \"model.decoder.layers.7.ffn.experts.expert_68\":1,\r\n \"model.decoder.layers.7.ffn.experts.expert_69\":1,\r\n \"model.decoder.layers.7.ffn.experts.expert_70\":1,\r\n \"model.decoder.layers.7.ffn.experts.expert_71\":1,\r\n \"model.decoder.layers.7.ffn.experts.expert_72\":1,\r\n \"model.decoder.layers.7.ffn.experts.expert_73\":1,\r\n \"model.decoder.layers.7.ffn.experts.expert_74\":1,\r\n \"model.decoder.layers.7.ffn.experts.expert_75\":1,\r\n \"model.decoder.layers.7.ffn.experts.expert_76\":1,\r\n \"model.decoder.layers.7.ffn.experts.expert_77\":1,\r\n \"model.decoder.layers.7.ffn.experts.expert_78\":1,\r\n \"model.decoder.layers.7.ffn.experts.expert_79\":1,\r\n \"model.decoder.layers.7.ffn.experts.expert_80\":1,\r\n \"model.decoder.layers.7.ffn.experts.expert_81\":1,\r\n \"model.decoder.layers.7.ffn.experts.expert_82\":1,\r\n \"model.decoder.layers.7.ffn.experts.expert_83\":1,\r\n \"model.decoder.layers.7.ffn.experts.expert_84\":1,\r\n \"model.decoder.layers.7.ffn.experts.expert_85\":1,\r\n \"model.decoder.layers.7.ffn.experts.expert_86\":1,\r\n \"model.decoder.layers.7.ffn.experts.expert_87\":1,\r\n \"model.decoder.layers.7.ffn.experts.expert_88\":1,\r\n \"model.decoder.layers.7.ffn.experts.expert_89\":1,\r\n \"model.decoder.layers.7.ffn.experts.expert_90\":1,\r\n \"model.decoder.layers.7.ffn.experts.expert_91\":1,\r\n \"model.decoder.layers.7.ffn.experts.expert_92\":1,\r\n \"model.decoder.layers.7.ffn.experts.expert_93\":1,\r\n \"model.decoder.layers.7.ffn.experts.expert_94\":1,\r\n \"model.decoder.layers.7.ffn.experts.expert_95\":1,\r\n \"model.decoder.layers.7.ffn.experts.expert_96\":1,\r\n \"model.decoder.layers.7.ffn.experts.expert_97\":1,\r\n \"model.decoder.layers.7.ffn.experts.expert_98\":1,\r\n \"model.decoder.layers.7.ffn.experts.expert_99\":1,\r\n \"model.decoder.layers.7.ffn.experts.expert_100\":1,\r\n \"model.decoder.layers.7.ffn.experts.expert_101\":1,\r\n \"model.decoder.layers.7.ffn.experts.expert_102\":1,\r\n \"model.decoder.layers.7.ffn.experts.expert_103\":1,\r\n \"model.decoder.layers.7.ffn.experts.expert_104\":1,\r\n \"model.decoder.layers.7.ffn.experts.expert_105\":1,\r\n \"model.decoder.layers.7.ffn.experts.expert_106\":1,\r\n \"model.decoder.layers.7.ffn.experts.expert_107\":1,\r\n \"model.decoder.layers.7.ffn.experts.expert_108\":1,\r\n \"model.decoder.layers.7.ffn.experts.expert_109\":1,\r\n \"model.decoder.layers.7.ffn.experts.expert_110\":1,\r\n \"model.decoder.layers.7.ffn.experts.expert_111\":1,\r\n \"model.decoder.layers.7.ffn.experts.expert_112\":1,\r\n \"model.decoder.layers.7.ffn.experts.expert_113\":1,\r\n \"model.decoder.layers.7.ffn.experts.expert_114\":1,\r\n \"model.decoder.layers.7.ffn.experts.expert_115\":1,\r\n \"model.decoder.layers.7.ffn.experts.expert_116\":1,\r\n \"model.decoder.layers.7.ffn.experts.expert_118\":2,\r\n \"model.decoder.layers.7.ffn.experts.expert_119\":2,\r\n \"model.decoder.layers.7.ffn.experts.expert_120\":2,\r\n \"model.decoder.layers.7.ffn.experts.expert_121\":2,\r\n \"model.decoder.layers.7.ffn.experts.expert_122\":2,\r\n \"model.decoder.layers.7.ffn.experts.expert_123\":2,\r\n \"model.decoder.layers.7.ffn.experts.expert_124\":2,\r\n \"model.decoder.layers.7.ffn.experts.expert_125\":2,\r\n \"model.decoder.layers.7.ffn.experts.expert_126\":2,\r\n \"model.decoder.layers.7.ffn.experts.expert_127\":2,\r\n \"model.decoder.layers.7.ff_layer_norm\":2,\r\n \"model.decoder.layers.7.ff_dropout\":2,\r\n 
\"model.decoder.layers.8\":2,\r\n \"model.decoder.layers.9\":2,\r\n \"model.decoder.layers.10\":2,\r\n \"model.decoder.layers.11\":2,\r\n \"model.decoder.layers.12\":2,\r\n \"model.decoder.layers.13\":2,\r\n \"model.decoder.layers.14\":2,\r\n \"model.decoder.layers.15\":2,\r\n \"model.decoder.layers.16\":2,\r\n \"model.decoder.layers.17\":2,\r\n \"model.decoder.layers.18\":2,\r\n \"model.decoder.layers.19\":2,\r\n \"model.decoder.layers.20\":2,\r\n \"model.decoder.layers.21\":2,\r\n \"model.decoder.layers.22\":2,\r\n \"model.decoder.layers.23\":2,\r\n \"model.decoder.layer_norm\":2,\r\n \"model.encoder.layers.15.ffn.experts.expert_101\":1,\r\n \"model.decoder.layers.7.ffn.experts.expert_117\":2\r\n}\r\n```",
"The same issue here",
"cc @SunMarc ๐ ",
"Hi, I found the issue. In the meantime, the hack is to have the input on the same device as `model.encoder.layer_norm`. I will fix this in a PR asap. ",
"@SunMarc Could you please check [my problem](https://discuss.huggingface.co/t/multi-gpu-finetuning-of-nllb-produces-runtimeerror-expected-all-tensors-to-be-on-the-same-device-but-found-at-least-two-devices-cuda-1-and-cuda-0/52166)?\r\nI try multi-GPU finetuning of NLLB-200-1.3B.\r\nI tried your recent encoder hook #25735, but it didn't help me and \"Expected all tensors to be on the same device\" error takes place again.",
"Hi @molokanov50 .Please open a new issue as this is not linked to this issue which was about encoder decoder model in general, not specific to nllb model. Also, provide a minimal reproductible script so that I can try to reproduce the error on my side. For now the following script works as expected: \r\n```py\r\nfrom transformers import AutoTokenizer, AutoModelForSeq2SeqLM\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(\"facebook/nllb-200-distilled-1.3B\")\r\nmodel = AutoModelForSeq2SeqLM.from_pretrained(\"facebook/nllb-200-distilled-1.3B\", device_map=\"auto\")\r\ninput = 'We now have 4-month-old mice that are non-diabetic that used to be diabetic,\" he added.'\r\n\r\ninput = tokenizer(input, return_tensors=\"pt\")\r\n\r\ntranslated_tokens = model.generate(\r\n **input, forced_bos_token_id=tokenizer.lang_code_to_id[\"fra_Latn\"]\r\n)\r\nprint(tokenizer.decode(translated_tokens[0], skip_special_tokens=True))\r\n```"
] | 1,684 | 1,693 | 1,685 |
NONE
| null |
### System Info
- `transformers` version: 4.28.1
- Platform: Linux-4.18.0-305.25.1.el8_4.x86_64-x86_64-with-glibc2.17
- Python version: 3.8.16
- Huggingface_hub version: 0.13.3
- Safetensors version: not installed
- PyTorch version (GPU?): 2.0.0a0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: 4 x A100 40GB
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
*Note: there is a workaround/fix with manual device mapping attached below but I'm wondering if there could be an official fix for the bug.*
#### Code sample
infer.py (Mostly from the [HF Hub sample](https://huggingface.co/facebook/nllb-moe-54b) with some modifications to load with multi-GPU and quantization)
```python
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
def main():
model_name = "facebook/nllb-moe-54b"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(
model_name,
torch_dtype=torch.float16,
device_map="auto",
load_in_8bit=True,
)
batched_input = [
'We now have 4-month-old mice that are non-diabetic that used to be diabetic," he added.',
"Dr. Ehud Ur, professor of medicine at Dalhousie University in Halifax, Nova Scotia and chair of the clinical and scientific division of the Canadian Diabetes Association cautioned that the research is still in its early days."
"Like some other experts, he is skeptical about whether diabetes can be cured, noting that these findings have no relevance to people who already have Type 1 diabetes."
"On Monday, Sara Danius, permanent secretary of the Nobel Committee for Literature at the Swedish Academy, publicly announced during a radio program on Sveriges Radio in Sweden the committee, unable to reach Bob Dylan directly about winning the 2016 Nobel Prize in Literature, had abandoned its efforts to reach him.",
'Danius said, "Right now we are doing nothing. I have called and sent emails to his closest collaborator and received very friendly replies. For now, that is certainly enough."',
"Previously, Ring's CEO, Jamie Siminoff, remarked the company started when his doorbell wasn't audible from his shop in his garage.",
]
inputs = tokenizer(batched_input, return_tensors="pt", padding=True)
translated_tokens = model.generate(
**inputs, forced_bos_token_id=tokenizer.lang_code_to_id["fra_Latn"]
)
outputs = tokenizer.batch_decode(translated_tokens, skip_special_tokens=True)
print(outputs)
if __name__ == "__main__":
main()
```
Steps:
1. Run `CUDA_VISIBLE_DEVICES=0,1,2,3 python infer.py`
2. See error
```
โญโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ Traceback (most recent call last) โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโฎ
โ <path>/code/nscc_working/engr/multi_node/nllb_inference/error_infer.py:38 in โ
โ <module> โ
โ โ
โ 35 โ
โ 36 โ
โ 37 if __name__ == "__main__": โ
โ โฑ 38 โ main() โ
โ 39 โ
โ โ
โ <path>/code/nscc_working/engr/multi_node/nllb_inference/error_infer.py:30 in main โ
โ โ
โ 27 โ ] โ
โ 28 โ inputs = tokenizer(batched_input, return_tensors="pt", padding=True) โ
โ 29 โ โ
โ โฑ 30 โ translated_tokens = model.generate( โ
โ 31 โ โ **inputs, forced_bos_token_id=tokenizer.lang_code_to_id["fra_Latn"] โ
โ 32 โ ) โ
โ 33 โ outputs = tokenizer.batch_decode(translated_tokens, skip_special_tokens=True) โ
โ โ
โ <path>/.conda/envs/megatron/lib/python3.8/site-packages/torch/utils/_contextlib.py โ
โ :115 in decorate_context โ
โ โ
โ 112 โ @functools.wraps(func) โ
โ 113 โ def decorate_context(*args, **kwargs): โ
โ 114 โ โ with ctx_factory(): โ
โ โฑ 115 โ โ โ return func(*args, **kwargs) โ
โ 116 โ โ
โ 117 โ return decorate_context โ
โ 118 โ
โ โ
โ <path>/.conda/envs/megatron/lib/python3.8/site-packages/transformers/generation/ut โ
โ ils.py:1286 in generate โ
โ โ
โ 1283 โ โ if self.config.is_encoder_decoder and "encoder_outputs" not in model_kwargs: โ
โ 1284 โ โ โ # if model is encoder decoder encoder_outputs are created โ
โ 1285 โ โ โ # and added to `model_kwargs` โ
โ โฑ 1286 โ โ โ model_kwargs = self._prepare_encoder_decoder_kwargs_for_generation( โ
โ 1287 โ โ โ โ inputs_tensor, model_kwargs, model_input_name โ
โ 1288 โ โ โ ) โ
โ 1289 โ
โ โ
โ <path>/.conda/envs/megatron/lib/python3.8/site-packages/transformers/generation/ut โ
โ ils.py:638 in _prepare_encoder_decoder_kwargs_for_generation โ
โ โ
โ 635 โ โ model_input_name = model_input_name if model_input_name is not None else self.ma โ
โ 636 โ โ encoder_kwargs["return_dict"] = True โ
โ 637 โ โ encoder_kwargs[model_input_name] = inputs_tensor โ
โ โฑ 638 โ โ model_kwargs["encoder_outputs"]: ModelOutput = encoder(**encoder_kwargs) โ
โ 639 โ โ โ
โ 640 โ โ return model_kwargs โ
โ 641 โ
โ โ
โ <path>/.conda/envs/megatron/lib/python3.8/site-packages/torch/nn/modules/module.py โ
โ :1501 in _call_impl โ
โ โ
โ 1498 โ โ if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks โ
โ 1499 โ โ โ โ or _global_backward_pre_hooks or _global_backward_hooks โ
โ 1500 โ โ โ โ or _global_forward_hooks or _global_forward_pre_hooks): โ
โ โฑ 1501 โ โ โ return forward_call(*args, **kwargs) โ
โ 1502 โ โ # Do not call functions when jit is used โ
โ 1503 โ โ full_backward_hooks, non_full_backward_hooks = [], [] โ
โ 1504 โ โ backward_pre_hooks = [] โ
โ โ
โ <path>/.conda/envs/megatron/lib/python3.8/site-packages/transformers/models/nllb_m โ
โ oe/modeling_nllb_moe.py:1165 in forward โ
โ โ
โ 1162 โ โ โ โ โ โ (head_mask[idx] if head_mask is not None else None), โ
โ 1163 โ โ โ โ โ ) โ
โ 1164 โ โ โ โ else: โ
โ โฑ 1165 โ โ โ โ โ layer_outputs = encoder_layer( โ
โ 1166 โ โ โ โ โ โ hidden_states, โ
โ 1167 โ โ โ โ โ โ attention_mask, โ
โ 1168 โ โ โ โ โ โ layer_head_mask=(head_mask[idx] if head_mask is not None else No โ
โ โ
โ <path>/.conda/envs/megatron/lib/python3.8/site-packages/torch/nn/modules/module.py โ
โ :1501 in _call_impl โ
โ โ
โ 1498 โ โ if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks โ
โ 1499 โ โ โ โ or _global_backward_pre_hooks or _global_backward_hooks โ
โ 1500 โ โ โ โ or _global_forward_hooks or _global_forward_pre_hooks): โ
โ โฑ 1501 โ โ โ return forward_call(*args, **kwargs) โ
โ 1502 โ โ # Do not call functions when jit is used โ
โ 1503 โ โ full_backward_hooks, non_full_backward_hooks = [], [] โ
โ 1504 โ โ backward_pre_hooks = [] โ
โ โ
โ <path>/.conda/envs/megatron/lib/python3.8/site-packages/transformers/models/nllb_m โ
โ oe/modeling_nllb_moe.py:701 in forward โ
โ โ
โ 698 โ โ โ
โ 699 โ โ hidden_states = self.ff_layer_norm(hidden_states) โ
โ 700 โ โ if self.is_sparse: โ
โ โฑ 701 โ โ โ hidden_states, router_states = self.ffn(hidden_states, attention_mask) โ
โ 702 โ โ else: โ
โ 703 โ โ โ hidden_states = self.ffn(hidden_states) โ
โ 704 โ โ hidden_states = self.ff_dropout(hidden_states) โ
โ โ
โ <path>/.conda/envs/megatron/lib/python3.8/site-packages/torch/nn/modules/module.py โ
โ :1501 in _call_impl โ
โ โ
โ 1498 โ โ if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks โ
โ 1499 โ โ โ โ or _global_backward_pre_hooks or _global_backward_hooks โ
โ 1500 โ โ โ โ or _global_forward_hooks or _global_forward_pre_hooks): โ
โ โฑ 1501 โ โ โ return forward_call(*args, **kwargs) โ
โ 1502 โ โ # Do not call functions when jit is used โ
โ 1503 โ โ full_backward_hooks, non_full_backward_hooks = [], [] โ
โ 1504 โ โ backward_pre_hooks = [] โ
โ โ
โ <path>/.conda/envs/megatron/lib/python3.8/site-packages/transformers/models/nllb_m โ
โ oe/modeling_nllb_moe.py:474 in forward โ
โ โ
โ 471 โ โ top_1_mask, router_probs = self.router(hidden_states, padding_mask) โ
โ 472 โ โ router_mask = router_probs.bool() โ
โ 473 โ โ hidden_states = hidden_states.reshape((batch_size * sequence_length), hidden_dim โ
โ โฑ 474 โ โ masked_hidden_states = torch.einsum("bm,be->ebm", hidden_states, router_mask) โ
โ 475 โ โ for idx, expert in enumerate(self.experts.values()): โ
โ 476 โ โ โ token_indices = router_mask[:, idx] โ
โ 477 โ โ โ combining_weights = router_probs[token_indices, idx] โ
โ โ
โ <path>/.conda/envs/megatron/lib/python3.8/site-packages/torch/functional.py:378 in โ
โ einsum โ
โ โ
โ 375 โ if len(operands) <= 2 or not opt_einsum.enabled: โ
โ 376 โ โ # the path for contracting 0 or 1 time(s) is already optimized โ
โ 377 โ โ # or the user has disabled using opt_einsum โ
โ โฑ 378 โ โ return _VF.einsum(equation, operands) # type: ignore[attr-defined] โ
โ 379 โ โ
โ 380 โ path = None โ
โ 381 โ if opt_einsum.is_available(): โ
โฐโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโฏ
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:1 and
cuda:0!
```
### Expected behavior
A list of translated text.
The following code contains a workaround: it prevents certain module splits and moves certain modules to the same device as the input so that inference runs without errors.
#### Code
```python
import torch
from accelerate.big_modeling import infer_auto_device_map, init_empty_weights
from transformers import AutoConfig, AutoModelForSeq2SeqLM, AutoTokenizer
def main():
model_name = "facebook/nllb-moe-54b"
config = AutoConfig.from_pretrained(model_name)
with init_empty_weights():
model = AutoModelForSeq2SeqLM.from_config(config)
model.tie_weights()
device_map = infer_auto_device_map(
model,
# Force splits model.encoder into separate layers and devices
max_memory={0: "6GIB", 1: "30GIB", 2: "30GIB", 3: "30GIB"},
no_split_module_classes=model._no_split_modules
+ ["NllbMoeEncoderLayer", "NllbMoeDecoderLayer"],
dtype="int8",
)
# Demonstrate that only "model.encoder.layer_norm" and "model.encoder.embed_tokens"
# need to be on the same device as the input
for module, device in device_map.items():
if module in {"model.encoder.layer_norm", "model.encoder.embed_tokens"}:
if device != 0:
device_map[module] = 0
else:
if device == 0:
device_map[module] = 1
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(
model_name,
torch_dtype=torch.float16,
device_map=device_map, # Use the custom device map
load_in_8bit=True,
)
batched_input = [
'We now have 4-month-old mice that are non-diabetic that used to be diabetic," he added.',
"Dr. Ehud Ur, professor of medicine at Dalhousie University in Halifax, Nova Scotia and chair of the clinical and scientific division of the Canadian Diabetes Association cautioned that the research is still in its early days."
"Like some other experts, he is skeptical about whether diabetes can be cured, noting that these findings have no relevance to people who already have Type 1 diabetes."
"On Monday, Sara Danius, permanent secretary of the Nobel Committee for Literature at the Swedish Academy, publicly announced during a radio program on Sveriges Radio in Sweden the committee, unable to reach Bob Dylan directly about winning the 2016 Nobel Prize in Literature, had abandoned its efforts to reach him.",
'Danius said, "Right now we are doing nothing. I have called and sent emails to his closest collaborator and received very friendly replies. For now, that is certainly enough."',
"Previously, Ring's CEO, Jamie Siminoff, remarked the company started when his doorbell wasn't audible from his shop in his garage.",
]
inputs = tokenizer(batched_input, return_tensors="pt", padding=True)
for i in inputs:
if torch.is_tensor(inputs[i]):
inputs[i] = inputs[i].to("cuda:0")
translated_tokens = model.generate(
**inputs, forced_bos_token_id=tokenizer.lang_code_to_id["fra_Latn"]
)
outputs = tokenizer.batch_decode(translated_tokens, skip_special_tokens=True)
print(outputs)
if __name__ == "__main__":
main()
```
Output:
```
['Nous avons maintenant des souris de 4 mois qui ne sont pas diabรฉtiques mais qui l\'รฉtaient", a-t-il ajoutรฉ.', "Le Dr Ehud Ur, professeur de mรฉdecine ร l'Universitรฉ Dalhousie ร Halifax, en Nouvelle-รcosse, et prรฉsident de la division clinique et scientifique de l'Association canadienne du diabรจte, a averti que la recherche en รฉtait encore ร ses dรฉbuts. Comme d'autres experts, il est sceptique quant ร la possibilitรฉ de guรฉrir le diabรจte, notant que ces rรฉsultats n'ont aucune pertinence pour les personnes atteintes de diabรจte de type 1.", 'Danius a dรฉclarรฉ: "Pour le moment, nous ne faisons rien. J\'ai appelรฉ et envoyรฉ des courriels ร son plus proche collaborateur et j\'ai reรงu des rรฉponses trรจs amicales. Pour l\'instant, c\'est certainement suffisant".', "Auparavant, le PDG de Ring, Jamie Siminoff, a dรฉclarรฉ que la sociรฉtรฉ avait commencรฉ lorsque sa sonnette n'รฉtait pas audible depuis son magasin dans son garage."]
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23385/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
}
|
https://api.github.com/repos/huggingface/transformers/issues/23385/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23384
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23384/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23384/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23384/events
|
https://github.com/huggingface/transformers/pull/23384
| 1,711,295,413 |
PR_kwDOCUB6oc5Qk8GQ
| 23,384 |
Fixed FLAVA tensor masking
|
{
"login": "amariucaitheodor",
"id": 32778667,
"node_id": "MDQ6VXNlcjMyNzc4NjY3",
"avatar_url": "https://avatars.githubusercontent.com/u/32778667?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amariucaitheodor",
"html_url": "https://github.com/amariucaitheodor",
"followers_url": "https://api.github.com/users/amariucaitheodor/followers",
"following_url": "https://api.github.com/users/amariucaitheodor/following{/other_user}",
"gists_url": "https://api.github.com/users/amariucaitheodor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amariucaitheodor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amariucaitheodor/subscriptions",
"organizations_url": "https://api.github.com/users/amariucaitheodor/orgs",
"repos_url": "https://api.github.com/users/amariucaitheodor/repos",
"events_url": "https://api.github.com/users/amariucaitheodor/events{/privacy}",
"received_events_url": "https://api.github.com/users/amariucaitheodor/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23384). All of your documentation changes will be reflected on that endpoint.",
"Hi @amariucaitheodor, thanks for reporting the issue and for opening this PR to resolve it! ๐ \r\n\r\nGoing through the code, I think we can simplify the original logic a bit, which would remove the need for these additional checks: we can simply remove the `sequence_for_text = sequence_for_text[pos_mask]` and `sequence_for_image = sequence_for_image[pos_mask]` blocks. \r\n\r\nAs either: \r\n* `pos_mask` is not `None` - then `multimodal_masked_embeddings` will have been masked and `sequence_for_image = multimodal_masked_embeddings` or\r\n* `pos_mask` is `None` - then `multimodal_masked_embeddings` won't have been masked and `sequence_for_image = multimodal_masked_embeddings`\r\n\r\nI noticed two additional related pieces which would be great to add to this PR too: \r\n* `bool_masked_pos` isn't masked in the `ITM Loss` loss block, and should be after `mim_labels`\r\n* We don't need the `if multimodal_masked_embeddings is not None:` check on L1949 - `multimodal_masked_embeddings` is never `None` in this ITM loss block. ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Hi @amariucaitheodor, are you still working on this PR? We'll want to add these changes, and merging in this branch means you'll get the contribution :) ",
"Hello @amyeroberts, thank you for the additions and reminder! I can push my changes to GitHub around the 22nd of June. ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,684 | 1,690 | 1,690 |
NONE
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #23378
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [X] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [X] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23384/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23384/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23384",
"html_url": "https://github.com/huggingface/transformers/pull/23384",
"diff_url": "https://github.com/huggingface/transformers/pull/23384.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23384.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/23383
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23383/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23383/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23383/events
|
https://github.com/huggingface/transformers/issues/23383
| 1,711,143,821 |
I_kwDOCUB6oc5l_fuN
| 23,383 |
Appending mapped dataset to list changes previous elements of a list
|
{
"login": "surya-narayanan",
"id": 17240858,
"node_id": "MDQ6VXNlcjE3MjQwODU4",
"avatar_url": "https://avatars.githubusercontent.com/u/17240858?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/surya-narayanan",
"html_url": "https://github.com/surya-narayanan",
"followers_url": "https://api.github.com/users/surya-narayanan/followers",
"following_url": "https://api.github.com/users/surya-narayanan/following{/other_user}",
"gists_url": "https://api.github.com/users/surya-narayanan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/surya-narayanan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/surya-narayanan/subscriptions",
"organizations_url": "https://api.github.com/users/surya-narayanan/orgs",
"repos_url": "https://api.github.com/users/surya-narayanan/repos",
"events_url": "https://api.github.com/users/surya-narayanan/events{/privacy}",
"received_events_url": "https://api.github.com/users/surya-narayanan/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"update- the error seems to be in the use of the lambda function, no idea why",
"@surya-narayanan This isn't an issue specific to `datasets` - it's a Python behaviour due to binding of the `x` var when then `lambda` function is first defined:\r\nhttps://docs.python.org/3/faq/programming.html#why-do-lambdas-defined-in-a-loop-with-different-values-all-return-the-same-result\r\n\r\nFor future issues experienced when using `datasets`, could you make sure to open them under the [datasets repo](https://github.com/huggingface/datasets)? ",
"Great, thanks :) "
] | 1,684 | 1,684 | 1,684 |
NONE
| null |
### System Info
Hi,
I've been trying to tokenize a dataset with different tokenizers and store the results, but in doing so I am running into a bug. The general idea is that appending to a list of datasets seems to modify previous elements of that list.
A code notebook is here: https://colab.research.google.com/drive/1ljMwBqzCe1fHffBcPP2py9IJMrocoNIU
### Who can help?
@ArthurZucker @younesbelkada
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
https://colab.research.google.com/drive/1ljMwBqzCe1fHffBcPP2py9IJMrocoNIU
### Expected behavior
Appending to the list of datasets shouldn't modify previous elements of that list.
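
For reference, here is a minimal sketch of the lambda late-binding behaviour pointed out in the comments above (the names are placeholders, not taken from the linked notebook):

```python
# Sketch of Python's lambda late binding (placeholder names, not from the notebook).
fns = []
for name in ["tok_a", "tok_b"]:
    # `name` is looked up when the lambda is *called*, so every lambda sees the last value.
    fns.append(lambda text: f"{name}: {text}")
print([f("hi") for f in fns])  # ['tok_b: hi', 'tok_b: hi']

# Binding the loop variable as a default argument captures its value at definition time.
fns_fixed = [lambda text, name=name: f"{name}: {text}" for name in ["tok_a", "tok_b"]]
print([f("hi") for f in fns_fixed])  # ['tok_a: hi', 'tok_b: hi']
```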
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23383/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23383/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23382
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23382/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23382/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23382/events
|
https://github.com/huggingface/transformers/pull/23382
| 1,711,026,736 |
PR_kwDOCUB6oc5QkC2-
| 23,382 |
Debug example code for MegaForCausalLM
|
{
"login": "Tylersuard",
"id": 41713505,
"node_id": "MDQ6VXNlcjQxNzEzNTA1",
"avatar_url": "https://avatars.githubusercontent.com/u/41713505?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Tylersuard",
"html_url": "https://github.com/Tylersuard",
"followers_url": "https://api.github.com/users/Tylersuard/followers",
"following_url": "https://api.github.com/users/Tylersuard/following{/other_user}",
"gists_url": "https://api.github.com/users/Tylersuard/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Tylersuard/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Tylersuard/subscriptions",
"organizations_url": "https://api.github.com/users/Tylersuard/orgs",
"repos_url": "https://api.github.com/users/Tylersuard/repos",
"events_url": "https://api.github.com/users/Tylersuard/events{/privacy}",
"received_events_url": "https://api.github.com/users/Tylersuard/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@amyeroberts I ran \"make style\" in the main Transformers directory and then pushed the changes, and every test now fails, including the ones that passed previously. What am I doing wrong?",
"@Tylersuard Hmmmm..... OK, I'm not exactly sure what happened - I suspect there might be a mismatch in installed package versions. Here's how I would try to resolve:\r\n\r\nFirst undo the most recent changes added to the unrelated files: \r\n* Undo the last two commits: `git revert --hard HEAD~2`\r\n* Push these changes to the PR: `git push -f`\r\n\r\nThen get your branch in sync with `main`: \r\n* Get the latest version of main: `git checkout main && git fetch upstream main && git rebase upstream/main`\r\n* Install latest formatting settings `pip install -e \".[quality]\"`\r\n* Rebase main onto this branch `git checkout patch-1 && git rebase upstream/main`\r\n* Push these changes to the PR (you'll have to force): `git push --force`\r\n* Make any style changes `make style` \r\n* Commit changes made (should just be to the modeling_mega.py file): `git add src/transformers/models/mega/modeling_mega.py && git commit -m \"Fix up\" && git push`",
"@amyeroberts Very clear instructions, thank you!",
"@amyeroberts Ok, all done! I do not have write access, so I can't merge the PR"
] | 1,684 | 1,684 | 1,684 |
CONTRIBUTOR
| null |
Sets `ignore_mismatched_sizes=True` in the model-loading code for `MegaForCausalLM` so that the example code runs without errors.
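
A rough sketch of the fixed example is below; the checkpoint name and input are illustrative assumptions, not taken from this PR's diff:

```python
from transformers import AutoTokenizer, MegaForCausalLM

# Illustrative checkpoint; `ignore_mismatched_sizes=True` re-initializes any weights whose
# shapes differ from the checkpoint instead of raising a size-mismatch error at load time.
tokenizer = AutoTokenizer.from_pretrained("mnaylor/mega-base-wikitext")
model = MegaForCausalLM.from_pretrained(
    "mnaylor/mega-base-wikitext", is_decoder=True, ignore_mismatched_sizes=True
)

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
print(outputs.logits.shape)
```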
# What does this PR do?
Fixes # 22974
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. https://github.com/huggingface/transformers/issues/22974
- [X] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@amyeroberts
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23382/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23382/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23382",
"html_url": "https://github.com/huggingface/transformers/pull/23382",
"diff_url": "https://github.com/huggingface/transformers/pull/23382.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23382.patch",
"merged_at": 1684749194000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23381
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23381/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23381/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23381/events
|
https://github.com/huggingface/transformers/issues/23381
| 1,710,935,813 |
I_kwDOCUB6oc5l-s8F
| 23,381 |
IndexError: index out of range in self
|
{
"login": "Bateoriginal",
"id": 25548775,
"node_id": "MDQ6VXNlcjI1NTQ4Nzc1",
"avatar_url": "https://avatars.githubusercontent.com/u/25548775?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Bateoriginal",
"html_url": "https://github.com/Bateoriginal",
"followers_url": "https://api.github.com/users/Bateoriginal/followers",
"following_url": "https://api.github.com/users/Bateoriginal/following{/other_user}",
"gists_url": "https://api.github.com/users/Bateoriginal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Bateoriginal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Bateoriginal/subscriptions",
"organizations_url": "https://api.github.com/users/Bateoriginal/orgs",
"repos_url": "https://api.github.com/users/Bateoriginal/repos",
"events_url": "https://api.github.com/users/Bateoriginal/events{/privacy}",
"received_events_url": "https://api.github.com/users/Bateoriginal/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @kashif ",
"@Bateoriginal thank you for the report. Since you are using the model weights for the tourism dataset the cardinality of the embedding layers is fixed to that from this dataset. Thus it seems you are passing it some integer id which is too large with respect to what the model expects. \r\n\r\nMay I ask if you are training with the tourism dataset?",
"\r\nThank you for your response.\r\n\r\nI'm currently working with retail transaction data.\r\n\r\nIt's reassuring to understand that the number of elements (cardinality) in the embedding layers remains constant.\r\n\r\nCould you elaborate on how the cardinality of these embedding layers is determined based on the original data used for training? Is it possible to predict the cardinality just by examining the dimensions of the batch?\r\n\r\nWhen you refer to an 'integer ID', does this pertain to static categorical/real features or something else?\r\n",
"@Bateoriginal yes the embedding layer will output a vector for a specific number of ids typically from 0, ... cardinality-1 and if given an id outside this range it will error out as it is internally a mapping from these ids to a vector.\r\n\r\nthis cardinality as mentioned is set for your specific problem and dataset and corresponds to static covariates, and thus the cardinality is not something that is predictable from the batch as a batch is some random collection of some time series within your batch and also you need to specify it when initializing your model.\r\n\r\nSo the cardinality is chosen at the start and has to remain fixed for the duration of the model's life cycle. This is both good and bad... it's good because for example this way the model can be given information about say the id of each time series in a dataset but it is bad as it constrains your model from only being able to do predictions on time series with a known id...\r\n\r\nIn any case, i encourage you to initialize a model with the configurations of your dataset rather than loading a model trained on the tourism dataset. If you can have a look at the blog post: https://huggingface.co/blog/time-series-transformers and try to replicate it for your dataset\r\n\r\nHopefully, that helps!",
"Thank you for your time! ",
"so what i meant to say is that you can chose to train your model without using the categorical covariates and sometimes such a model performs (paradoxically) better than with categorical covariates.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,684 | 1,687 | 1,687 |
NONE
| null |
My batch shapes looked like the following:
- past_values: torch.Size([17, 35])
- past_time_features: torch.Size([17, 35, 9])
- past_observed_mask: torch.Size([17, 35])
- static_categorical_features: torch.Size([17, 4])
- static_real_features: torch.Size([17, 2])
- future_values: torch.Size([17, 7])
- future_time_features: torch.Size([17, 7, 9])
I run
```
model = TimeSeriesTransformerModel.from_pretrained("huggingface/time-series-transformer-tourism-monthly")
# during training, one provides both past and future values
# as well as possible additional features
outputs = model(
past_values=batchTrain["past_values"],
past_time_features=batchTrain["past_time_features"],
past_observed_mask=batchTrain["past_observed_mask"],
static_categorical_features=batchTrain["static_categorical_features"],
static_real_features=batchTrain["static_real_features"],
future_values=batchTrain["future_values"],
future_time_features=batchTrain["future_time_features"],
)
last_hidden_state = outputs.last_hidden_state
```
Below is the error message.
```
Some weights of the model checkpoint at huggingface/time-series-transformer-tourism-monthly were not used when initializing TimeSeriesTransformerModel: ['parameter_projection.proj.2.weight', 'parameter_projection.proj.2.bias', 'parameter_projection.proj.1.weight', 'parameter_projection.proj.0.bias', 'parameter_projection.proj.0.weight', 'parameter_projection.proj.1.bias']
- This IS expected if you are initializing TimeSeriesTransformerModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing TimeSeriesTransformerModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
Cell In[34], line 53
48 model = TimeSeriesTransformerModel.from_pretrained("huggingface/time-series-transformer-tourism-monthly")
50 # during training, one provides both past and future values
51 # as well as possible additional features
---> 53 outputs = model(
54 past_values=batchTrain["past_values"],
55 past_time_features=batchTrain["past_time_features"],
56 past_observed_mask=batchTrain["past_observed_mask"],
57 static_categorical_features=batchTrain["static_categorical_features"],
58 static_real_features=batchTrain["static_real_features"],
59 future_values=batchTrain["future_values"],
60 future_time_features=batchTrain["future_time_features"],
61 )
63 last_hidden_state = outputs.last_hidden_state
File ~/opt/anaconda3/lib/python3.8/site-packages/torch/nn/modules/module.py:1501, in Module._call_impl(self, *args, **kwargs)
1496 # If we don't have any hooks, we want to skip the rest of the logic in
1497 # this function, and just call forward.
1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1499 or _global_backward_pre_hooks or _global_backward_hooks
1500 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1501 return forward_call(*args, **kwargs)
1502 # Do not call functions when jit is used
1503 full_backward_hooks, non_full_backward_hooks = [], []
File ~/opt/anaconda3/lib/python3.8/site-packages/transformers/models/time_series_transformer/modeling_time_series_transformer.py:1417, in TimeSeriesTransformerModel.forward(self, past_values, past_time_features, past_observed_mask, static_categorical_features, static_real_features, future_values, future_time_features, decoder_attention_mask, head_mask, decoder_head_mask, cross_attn_head_mask, encoder_outputs, past_key_values, output_hidden_states, output_attentions, use_cache, return_dict)
1414 use_cache = use_cache if use_cache is not None else self.config.use_cache
1415 return_dict = return_dict if return_dict is not None else self.config.use_return_dict
-> 1417 transformer_inputs, loc, scale, static_feat = self.create_network_inputs(
1418 past_values=past_values,
1419 past_time_features=past_time_features,
1420 past_observed_mask=past_observed_mask,
1421 static_categorical_features=static_categorical_features,
1422 static_real_features=static_real_features,
1423 future_values=future_values,
1424 future_time_features=future_time_features,
1425 )
1427 if encoder_outputs is None:
1428 enc_input = transformer_inputs[:, : self.config.context_length, ...]
File ~/opt/anaconda3/lib/python3.8/site-packages/transformers/models/time_series_transformer/modeling_time_series_transformer.py:1324, in TimeSeriesTransformerModel.create_network_inputs(self, past_values, past_time_features, static_categorical_features, static_real_features, past_observed_mask, future_values, future_time_features)
1322 static_feat = torch.cat((static_real_features, static_feat), dim=1)
1323 if static_categorical_features is not None:
-> 1324 embedded_cat = self.embedder(static_categorical_features)
1325 static_feat = torch.cat((embedded_cat, static_feat), dim=1)
1326 expanded_static_feat = static_feat.unsqueeze(1).expand(-1, time_feat.shape[1], -1)
File ~/opt/anaconda3/lib/python3.8/site-packages/torch/nn/modules/module.py:1501, in Module._call_impl(self, *args, **kwargs)
1496 # If we don't have any hooks, we want to skip the rest of the logic in
1497 # this function, and just call forward.
1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1499 or _global_backward_pre_hooks or _global_backward_hooks
1500 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1501 return forward_call(*args, **kwargs)
1502 # Do not call functions when jit is used
1503 full_backward_hooks, non_full_backward_hooks = [], []
File ~/opt/anaconda3/lib/python3.8/site-packages/transformers/models/time_series_transformer/modeling_time_series_transformer.py:76, in TimeSeriesFeatureEmbedder.forward(self, features)
72 else:
73 cat_feature_slices = [features]
75 return torch.cat(
---> 76 [
77 embed(cat_feature_slice.squeeze(-1))
78 for embed, cat_feature_slice in zip(self.embedders, cat_feature_slices)
79 ],
80 dim=-1,
81 )
File ~/opt/anaconda3/lib/python3.8/site-packages/transformers/models/time_series_transformer/modeling_time_series_transformer.py:77, in <listcomp>(.0)
72 else:
73 cat_feature_slices = [features]
75 return torch.cat(
76 [
---> 77 embed(cat_feature_slice.squeeze(-1))
78 for embed, cat_feature_slice in zip(self.embedders, cat_feature_slices)
79 ],
80 dim=-1,
81 )
File ~/opt/anaconda3/lib/python3.8/site-packages/torch/nn/modules/module.py:1501, in Module._call_impl(self, *args, **kwargs)
1496 # If we don't have any hooks, we want to skip the rest of the logic in
1497 # this function, and just call forward.
1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1499 or _global_backward_pre_hooks or _global_backward_hooks
1500 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1501 return forward_call(*args, **kwargs)
1502 # Do not call functions when jit is used
1503 full_backward_hooks, non_full_backward_hooks = [], []
File ~/opt/anaconda3/lib/python3.8/site-packages/torch/nn/modules/sparse.py:162, in Embedding.forward(self, input)
161 def forward(self, input: Tensor) -> Tensor:
--> 162 return F.embedding(
163 input, self.weight, self.padding_idx, self.max_norm,
...
2208 # remove once script supports set_grad_enabled
2209 _no_grad_embedding_renorm_(weight, input, max_norm, norm_type)
-> 2210 return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
IndexError: index out of range in self
```
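
Following the suggestion in the comments above, here is a minimal sketch of initializing a fresh model whose embedding cardinalities match your own dataset (all values are placeholders, not derived from the retail data):

```python
from transformers import TimeSeriesTransformerConfig, TimeSeriesTransformerModel

# Placeholder configuration - fill these in from your own dataset instead of reusing
# the tourism-monthly checkpoint, whose embedding cardinalities are fixed at training time.
config = TimeSeriesTransformerConfig(
    prediction_length=7,
    context_length=28,
    num_time_features=9,
    num_static_categorical_features=4,
    num_static_real_features=2,
    # One entry per static categorical feature: the number of distinct ids, so that
    # every id passed at runtime stays inside the embedding range (0 .. cardinality-1).
    cardinality=[10, 20, 30, 40],
    embedding_dimension=[2, 2, 2, 2],
)
model = TimeSeriesTransformerModel(config)
```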
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23381/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23381/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23380
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23380/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23380/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23380/events
|
https://github.com/huggingface/transformers/issues/23380
| 1,710,842,016 |
I_kwDOCUB6oc5l-WCg
| 23,380 |
TextToVideo tool raising name 'init_empty_weights' is not defined error
|
{
"login": "freddyaboulton",
"id": 41651716,
"node_id": "MDQ6VXNlcjQxNjUxNzE2",
"avatar_url": "https://avatars.githubusercontent.com/u/41651716?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/freddyaboulton",
"html_url": "https://github.com/freddyaboulton",
"followers_url": "https://api.github.com/users/freddyaboulton/followers",
"following_url": "https://api.github.com/users/freddyaboulton/following{/other_user}",
"gists_url": "https://api.github.com/users/freddyaboulton/gists{/gist_id}",
"starred_url": "https://api.github.com/users/freddyaboulton/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/freddyaboulton/subscriptions",
"organizations_url": "https://api.github.com/users/freddyaboulton/orgs",
"repos_url": "https://api.github.com/users/freddyaboulton/repos",
"events_url": "https://api.github.com/users/freddyaboulton/events{/privacy}",
"received_events_url": "https://api.github.com/users/freddyaboulton/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Thanks for reporting @freddyaboulton! Do you have `accelerate` installed? If so, which version?\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,684 | 1,687 | 1,687 |
CONTRIBUTOR
| null |
### System Info
- `transformers` version: 4.29.1
- Platform: Linux-5.15.107+-x86_64-with-glibc2.31
- Python version: 3.10.11
- Huggingface_hub version: 0.14.1
- Safetensors version: not installed
- PyTorch version (GPU?): 2.0.0+cu118 (False)
- Tensorflow version (GPU?): 2.12.0 (False)
- Flax version (CPU?/GPU?/TPU?): 0.6.9 (cpu)
- Jax version: 0.4.8
- JaxLib version: 0.4.7
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
@LysandreJik
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
from transformers import Tool, OpenAiAgent, HfAgent
agent = HfAgent("https://api-inference.huggingface.co/models/bigcode/starcoder")
output = agent.run("Please make a video of `prompt`", prompt="a man eating spaghetti")
```
```bash
/usr/local/lib/python3.10/dist-packages/transformers/tools/python_interpreter.py:101 in │
│ evaluate_ast │
│ │
│ 98 │ │ return evaluate_assign(expression, state, tools) │
│ 99 │ elif isinstance(expression, ast.Call): │
│ 100 │ │ # Function call -> we return the value of the function call │
│ ❱ 101 │ │ return evaluate_call(expression, state, tools) │
│ 102 │ elif isinstance(expression, ast.Constant): │
│ 103 │ │ # Constant -> just return the value │
│ 104 │ │ return expression.value │
│ │
│ /usr/local/lib/python3.10/dist-packages/transformers/tools/python_interpreter.py:167 in │
│ evaluate_call │
│ │
│ 164 │ # Todo deal with args │
│ 165 │ args = [evaluate_ast(arg, state, tools) for arg in call.args] │
│ 166 │ kwargs = {keyword.arg: evaluate_ast(keyword.value, state, tools) for keyword in call │
│ ❱ 167 │ return func(*args, **kwargs) │
│ 168 │
│ 169 │
│ 170 def evaluate_subscript(subscript, state, tools): │
│ │
│ /root/.cache/huggingface/modules/transformers_modules/huggingface-tools/text-to-video/15f8f33935 │
│ f9653aa806382d1536f8a48a0c6cc0/text_to_video.py:45 in __call__ │
│ │
│ 42 │ │
│ 43 │ def __call__(self, prompt, seconds=2): │
│ 44 │ │ if not self.is_initialized: │
│ ❱ 45 │ │ │ self.setup() │
│ 46 │ │ │
│ 47 │ │ return self.pipeline(prompt, num_frames=8 * seconds).frames │
│ 48 │
│ │
│ /root/.cache/huggingface/modules/transformers_modules/huggingface-tools/text-to-video/15f8f33935 │
│ f9653aa806382d1536f8a48a0c6cc0/text_to_video.py:36 in setup │
│ │
│ 33 │ │ if self.device is None: │
│ 34 │ │ │ self.device = get_default_device() │
│ 35 │ │ │
│ ❱ 36 │ │ self.pipeline = DiffusionPipeline.from_pretrained( │
│ 37 │ │ │ self.default_checkpoint, variant="fp16" │
│ 38 │ │ ) │
│ 39 │ │ self.pipeline.to(self.device) │
│ │
│ /usr/local/lib/python3.10/dist-packages/diffusers/pipelines/pipeline_utils.py:1039 in │
│ from_pretrained │
│ │
│ 1036 │ │ │ │ loaded_sub_model = passed_class_obj[name] │
│ 1037 │ │ │ else: │
│ 1038 │ │ │ │ # load sub model │
│ ❱ 1039 │ │ │ │ loaded_sub_model = load_sub_model( │
│ 1040 │ │ │ │ │ library_name=library_name, │
│ 1041 │ │ │ │ │ class_name=class_name, │
│ 1042 │ │ │ │ │ importable_classes=importable_classes, │
│ │
│ /usr/local/lib/python3.10/dist-packages/diffusers/pipelines/pipeline_utils.py:445 in │
│ load_sub_model │
│ │
│ 442 │ │
│ 443 │ # check if the module is in a subdirectory │
│ 444 │ if os.path.isdir(os.path.join(cached_folder, name)): │
│ ❱ 445 │ │ loaded_sub_model = load_method(os.path.join(cached_folder, name), **loading_kwar │
│ 446 │ else: │
│ 447 │ │ # else load from the root directory │
│ 448 │ │ loaded_sub_model = load_method(cached_folder, **loading_kwargs) │
│ │
│ /usr/local/lib/python3.10/dist-packages/transformers/modeling_utils.py:2608 in from_pretrained │
│ │
│ 2605 │ │ │ logger.info("Detected DeepSpeed ZeRO-3: activating zero.init() for this mode │
│ 2606 │ │ │ init_contexts = [deepspeed.zero.Init(config_dict_or_path=deepspeed_config()) │
│ 2607 │ │ elif load_in_8bit or low_cpu_mem_usage: │
│ ❱ 2608 │ │ │ init_contexts.append(init_empty_weights()) │
│ 2609 │ │ │
│ 2610 │ │ with ContextManagers(init_contexts): │
│ 2611 │ │ │ model = cls(config, *model_args, **model_kwargs) │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
NameError: name 'init_empty_weights' is not defined
```
Same behavior calling the tool directly,
```python
from transformers.tools import load_tool
tool = load_tool("huggingface-tools/text-to-video")
tool(prompt="a man eating spaghetti")
```
### Expected behavior
The tool does not error
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23380/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23380/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23379
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23379/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23379/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23379/events
|
https://github.com/huggingface/transformers/pull/23379
| 1,710,681,840 |
PR_kwDOCUB6oc5Qi49M
| 23,379 |
[AutoModel] fix `torch_dtype=auto` in `from_pretrained`
|
{
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"The only change is that `torch_dtype=\"auto\"` remains in `kwargs` and it wasn't before.\r\n\r\nI have just flipped around the `_copy` vs `_orig` as it looked simpler to read that way. There is no functional change in that part of the code.\r\n\r\nProbably could just set a flag of `is_torch_dtype_auto = True` instead of copying `kwargs` - I just thought that perhaps down the road other entries might need a special handling. Let me know if you prefer that I recode to use the flag instead. It surely would be cleaner I think.",
"No no, that works as is."
] | 1,684 | 1,684 | 1,684 |
CONTRIBUTOR
| null |
This PR:
1. fixes the case of `torch_dtype=auto` in `AutoModel.from_pretrained`, which got unintentionally stripped in https://github.com/huggingface/transformers/pull/21524 - now `torch_dtype=auto` is always passed on to the `from_pretrained` method of the resolved class.
2. adds a test
Fixes: https://github.com/huggingface/transformers/issues/23357
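As a quick sanity check, a minimal snippet along these lines should now resolve the dtype from the checkpoint again instead of silently dropping the flag (the checkpoint name is only an example):
```python
from transformers import AutoModel

# "auto" asks from_pretrained to infer the dtype from the checkpoint weights
# (or from config.torch_dtype when present) instead of defaulting to float32.
model = AutoModel.from_pretrained("gpt2", torch_dtype="auto")
print(model.dtype)
```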
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23379/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23379/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23379",
"html_url": "https://github.com/huggingface/transformers/pull/23379",
"diff_url": "https://github.com/huggingface/transformers/pull/23379.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23379.patch",
"merged_at": 1684257702000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23378
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23378/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23378/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23378/events
|
https://github.com/huggingface/transformers/issues/23378
| 1,710,652,602 |
I_kwDOCUB6oc5l9ny6
| 23,378 |
FLAVA tensors are masked twice, forward pass fails
|
{
"login": "amariucaitheodor",
"id": 32778667,
"node_id": "MDQ6VXNlcjMyNzc4NjY3",
"avatar_url": "https://avatars.githubusercontent.com/u/32778667?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amariucaitheodor",
"html_url": "https://github.com/amariucaitheodor",
"followers_url": "https://api.github.com/users/amariucaitheodor/followers",
"following_url": "https://api.github.com/users/amariucaitheodor/following{/other_user}",
"gists_url": "https://api.github.com/users/amariucaitheodor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amariucaitheodor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amariucaitheodor/subscriptions",
"organizations_url": "https://api.github.com/users/amariucaitheodor/orgs",
"repos_url": "https://api.github.com/users/amariucaitheodor/repos",
"events_url": "https://api.github.com/users/amariucaitheodor/events{/privacy}",
"received_events_url": "https://api.github.com/users/amariucaitheodor/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"A similar thing happens at line 1969 because of line 1956 (`mim_labels` changes shape):\r\n```python\r\n mim_labels[bool_masked_pos.ne(True)] = self.ce_ignore_index\r\nIndexError: The shape of the mask [14, 196] at index 0 does not match the shape of the indexed tensor [13, 196] at index 0\r\n```\r\n\r\nThe fix could be adding `bool_masked_pos = bool_masked_pos[pos_mask]` between lines 1969 and 1968.",
"Same for line 1988, fix could be `if pos_mask is not None and sequence_for_text.size(0) == pos_mask.size(0):`",
"Tried the fixes and FLAVA runs. I wonder how no one else noticed ๐ค",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,684 | 1,690 | 1,690 |
NONE
| null |
https://github.com/huggingface/transformers/blob/cf11493dce0a1d22446efe0d6c4ade02fd928e50/src/transformers/models/flava/modeling_flava.py#L1965
The line above is the second time this tensor is masked if the previous ITM logic happens (line 1950), resulting in, e.g., `IndexError: The shape of the mask [14] at index 0 does not match the shape of the indexed tensor [13, 196, 768] at index 0`
The fix could be something like `if pos_mask is not None and sequence_for_image.size(0) == pos_mask.size(0)` on line 1964.
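A minimal, self-contained sketch of the guard suggested above (the helper and argument names are illustrative, not the actual `modeling_flava.py` code):
```python
import torch

def mask_image_sequence(sequence_for_image: torch.Tensor, pos_mask: torch.Tensor) -> torch.Tensor:
    # Only apply the positional mask when the batch dimension still matches,
    # i.e. when the tensor has not already been filtered by the ITM branch.
    if pos_mask is not None and sequence_for_image.size(0) == pos_mask.size(0):
        return sequence_for_image[pos_mask]
    return sequence_for_image
```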
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23378/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23378/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23377
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23377/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23377/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23377/events
|
https://github.com/huggingface/transformers/issues/23377
| 1,710,530,040 |
I_kwDOCUB6oc5l9J34
| 23,377 |
Default Models of the pipeline function
|
{
"login": "DiogenesBR",
"id": 16890195,
"node_id": "MDQ6VXNlcjE2ODkwMTk1",
"avatar_url": "https://avatars.githubusercontent.com/u/16890195?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DiogenesBR",
"html_url": "https://github.com/DiogenesBR",
"followers_url": "https://api.github.com/users/DiogenesBR/followers",
"following_url": "https://api.github.com/users/DiogenesBR/following{/other_user}",
"gists_url": "https://api.github.com/users/DiogenesBR/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DiogenesBR/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DiogenesBR/subscriptions",
"organizations_url": "https://api.github.com/users/DiogenesBR/orgs",
"repos_url": "https://api.github.com/users/DiogenesBR/repos",
"events_url": "https://api.github.com/users/DiogenesBR/events{/privacy}",
"received_events_url": "https://api.github.com/users/DiogenesBR/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @DiogenesBR, thanks for raising this issue! \r\n\r\nSimilar to answer in the linked video, at the moment this info is best found by looking at the source code. For example, for the [speech-to-text tool](https://github.com/huggingface/transformers/blob/918a06e25dfd6f79a20b6f07f63598c71e440161/src/transformers/tools/speech_to_text.py#L22), the checkpoint used is [openai/whisper-base](https://huggingface.co/openai/whisper-base). \r\n\r\nWould you be interested in opening a PR to add this information? \r\n\r\ncc @MKhalusova @stevhliu ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,684 | 1,687 | 1,687 |
NONE
| null |
I made an issue on the NLP Course (https://github.com/huggingface/course/issues/561):
> In the video:
> Chapter 1 Live Session with Sylvain
> https://youtu.be/aV4wfnIakSQ?t=928
>
> There a question of what are the default models in the pipeline library.
> His answer is to look at:
> https://github.com/huggingface/transformers/blob/main/src/transformers/pipelines/__init__.py
>
> I think that this information should be in the chapter 1, also on the documentation
Then I was asked to make a Issue here too.
After looking into it, the Transformers Agents also have this problem:

For instance, speech-to-text uses Whisper, but it doesn't say which exact version of Whisper.
There are more than 3000 versions of Whisper in the Models directory:
https://huggingface.co/models?search=whisper
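In the meantime, a quick (if indirect) way to see which checkpoint a pipeline defaults to is to instantiate it and inspect the loaded model; the task below is only an example:
```python
from transformers import pipeline

# When no model is given, the warning printed here also names the default
# checkpoint and revision picked for the task.
pipe = pipeline("sentiment-analysis")
print(pipe.model.name_or_path)  # the checkpoint the pipeline defaulted to
```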
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23377/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23377/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23376
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23376/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23376/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23376/events
|
https://github.com/huggingface/transformers/pull/23376
| 1,710,433,755 |
PR_kwDOCUB6oc5QiC5n
| 23,376 |
[`SAM`] fix sam slow test
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Fix for the notebook: https://github.com/huggingface/notebooks/pull/371",
"_The documentation is not available anymore as the PR was closed or merged._",
"Indeed the tests were passing before because the processor was force unsqueezeing the boxes here: https://github.com/huggingface/transformers/blob/d765717c76026281f2fb27ddc44fa3636306bb48/src/transformers/models/sam/processing_sam.py#L141 \r\n\r\n> you can't have floating integers :) \r\n\r\nHahah yes, thanks for noticing! Copilot does some bad job sometimes ... will update that as well",
"Thanks a lot @amyeroberts !",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23376). All of your documentation changes will be reflected on that endpoint."
] | 1,684 | 1,684 | 1,684 |
CONTRIBUTOR
| null |
# What does this PR do?
This PR fixes the slow tests that were failing due to https://github.com/huggingface/transformers/pull/23295
In fact, in the slow test that we designed, we forgot to use the correct format for the input bounding boxes.
Will open a PR on `notebooks` to update the example notebook accordingly
cc @ydshieh @amyeroberts
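For reference, a hedged sketch of the box format the processor expects after this fix (the nesting is batch of images -> boxes per image -> `[x_min, y_min, x_max, y_max]`; the checkpoint and coordinates below are only illustrative):
```python
import requests
from PIL import Image
from transformers import SamModel, SamProcessor

processor = SamProcessor.from_pretrained("facebook/sam-vit-base")
model = SamModel.from_pretrained("facebook/sam-vit-base")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")

# One image in the batch, one box for that image, coordinates as floats.
input_boxes = [[[75.0, 275.0, 1725.0, 850.0]]]
inputs = processor(image, input_boxes=input_boxes, return_tensors="pt")
outputs = model(**inputs)
```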
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23376/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23376/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23376",
"html_url": "https://github.com/huggingface/transformers/pull/23376",
"diff_url": "https://github.com/huggingface/transformers/pull/23376.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23376.patch",
"merged_at": 1684326464000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23375
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23375/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23375/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23375/events
|
https://github.com/huggingface/transformers/pull/23375
| 1,710,350,027 |
PR_kwDOCUB6oc5Qhwqy
| 23,375 |
Add bark
|
{
"login": "sanchit-gandhi",
"id": 93869735,
"node_id": "U_kgDOBZhWpw",
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanchit-gandhi",
"html_url": "https://github.com/sanchit-gandhi",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Superseded by #24086 "
] | 1,684 | 1,686 | 1,686 |
CONTRIBUTOR
| null |
## What does this PR do?
Fixes #
## TODO
- [x] Add autoregressive text model
- [x] Add autoregressive coarse model
- [x] Add non-autoregressive fine model
- [x] Check text weights
- [ ] Check coarse weights
- [ ] Check fine weights
- [ ] Add Bark model / config -> what design for concatenating the three models?
- [ ] Generation code
- [ ] Update with transformers Encodec checkpoint
- [ ] Docs
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23375/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23375/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23375",
"html_url": "https://github.com/huggingface/transformers/pull/23375",
"diff_url": "https://github.com/huggingface/transformers/pull/23375.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23375.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/23374
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23374/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23374/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23374/events
|
https://github.com/huggingface/transformers/pull/23374
| 1,710,235,602 |
PR_kwDOCUB6oc5QhXv7
| 23,374 |
Skip failing `AlignModelTest::test_multi_gpu_data_parallel_forward`
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23374). All of your documentation changes will be reflected on that endpoint."
] | 1,684 | 1,684 | 1,684 |
COLLABORATOR
| null |
# What does this PR do?
`tests/models/align/test_modeling_align.py::AlignModelTest::test_multi_gpu_data_parallel_forward` started to fail after we switched to `torch+cu118`. If I install `torch+cu117` again, it passes.
This test uses `torch.nn.DataParallel`, which is not recommended (although not deprecated yet). The error is a pure CUDA issue about which I have no knowledge. Combining all the above facts with the usage of this model, let's just skip this particular test for `AlignModelTest`.
(This failing test causes the other 18 tests to fail because CUDA is left in a bad state.)
```bash
E RuntimeError: Caught RuntimeError in replica 0 on device 0.
E Original Traceback (most recent call last):
E File "/usr/local/lib/python3.8/dist-packages/torch/nn/parallel/parallel_apply.py", line 64, in _worker
E output = module(*input, **kwargs)
E File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1501, in _call_impl
E return forward_call(*args, **kwargs)
E File "/transformers/src/transformers/models/align/modeling_align.py", line 1596, in forward
E vision_outputs = self.vision_model(
E File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1501, in _call_impl
E return forward_call(*args, **kwargs)
E File "/transformers/src/transformers/models/align/modeling_align.py", line 1395, in forward
E embedding_output = self.embeddings(pixel_values)
E File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1501, in _call_impl
E return forward_call(*args, **kwargs)
E File "/transformers/src/transformers/models/align/modeling_align.py", line 345, in forward
E features = self.convolution(features)
E File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1501, in _call_impl
E return forward_call(*args, **kwargs)
E File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/conv.py", line 463, in forward
E return self._conv_forward(input, self.weight, self.bias)
E File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/conv.py", line 459, in _conv_forward
E return F.conv2d(input, weight, bias, self.stride,
E RuntimeError: GET was unable to find an engine to execute this computation
```
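For reference, a minimal sketch of the kind of skip this adds (the decorator placement and reason string are illustrative, not the exact diff):
```python
import unittest

class AlignModelTest(unittest.TestCase):
    @unittest.skip(reason="`nn.DataParallel` hits a cuDNN engine error with torch+cu118")
    def test_multi_gpu_data_parallel_forward(self):
        ...
```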
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23374/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23374/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23374",
"html_url": "https://github.com/huggingface/transformers/pull/23374",
"diff_url": "https://github.com/huggingface/transformers/pull/23374.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23374.patch",
"merged_at": 1684162018000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23373
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23373/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23373/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23373/events
|
https://github.com/huggingface/transformers/pull/23373
| 1,710,190,603 |
PR_kwDOCUB6oc5QhOBY
| 23,373 |
Update error message when Accelerate isn't installed
|
{
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2155169140,
"node_id": "MDU6TGFiZWwyMTU1MTY5MTQw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/trainer",
"name": "trainer",
"color": "2ef289",
"default": false,
"description": ""
}
] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23373). All of your documentation changes will be reflected on that endpoint."
] | 1,684 | 1,684 | 1,684 |
CONTRIBUTOR
| null |
# What does this PR do?
This PR provides a bit more verbose error when `accelerate` isn't found on an install of `transformers`, as the `Trainer` (on PyTorch) requires Accelerate to be installed.
The error message was changed from:
```python
ImportError: Using the Trainer with PyTorch requires accelerate: Run pip install --upgrade accelerate
```
To be:
```python
Using the `Trainer` with `PyTorch` requires `accelerate>=0.19.0`: Please run `pip install transformers[torch]` or `pip install accelerate -U`
```
Fixes # (issue)
- https://github.com/huggingface/transformers/issues/23323
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@LysandreJik
(@sgugger when you are back)
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23373/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23373/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23373",
"html_url": "https://github.com/huggingface/transformers/pull/23373",
"diff_url": "https://github.com/huggingface/transformers/pull/23373.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23373.patch",
"merged_at": 1684336563000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23372
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23372/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23372/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23372/events
|
https://github.com/huggingface/transformers/pull/23372
| 1,710,091,743 |
PR_kwDOCUB6oc5Qg4kW
| 23,372 |
Use `mkstemp` to replace deprecated `mktemp`
|
{
"login": "ready-research",
"id": 72916209,
"node_id": "MDQ6VXNlcjcyOTE2MjA5",
"avatar_url": "https://avatars.githubusercontent.com/u/72916209?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ready-research",
"html_url": "https://github.com/ready-research",
"followers_url": "https://api.github.com/users/ready-research/followers",
"following_url": "https://api.github.com/users/ready-research/following{/other_user}",
"gists_url": "https://api.github.com/users/ready-research/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ready-research/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ready-research/subscriptions",
"organizations_url": "https://api.github.com/users/ready-research/orgs",
"repos_url": "https://api.github.com/users/ready-research/repos",
"events_url": "https://api.github.com/users/ready-research/events{/privacy}",
"received_events_url": "https://api.github.com/users/ready-research/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Hi @sgugger /@amyeroberts, Can you please add this patch in [huntr](https://www.huntr.dev/bounties/a3867b4e-6701-4418-8c20-3c6e7084a44a/) report. Thanks.",
"@ready-research Should be done now!",
"Is this change going to be included in a release soon?",
"This is being reported as having the fix for https://nvd.nist.gov/vuln/detail/CVE-2023-2800\r\n\r\nIs there an estimate on the time to release?",
"You can install HF from the commit ID with the fix this way:\r\n\r\n```bash\r\n$ pip install --no-cache-dir git+https://github.com/huggingface/transformers.git@80ca924\r\n```\r\nand you should have:\r\n```\r\nCollecting git+https://github.com/huggingface/transformers.git@80ca924\r\n Cloning https://github.com/huggingface/transformers.git (to revision 80ca924) to /tmp/pip-req-build-f13han_v\r\n Running command git clone --filter=blob:none --quiet https://github.com/huggingface/transformers.git /tmp/pip-req-build-f13han_v\r\n WARNING: Did not find branch or tag '80ca924', assuming revision or ref.\r\n Running command git checkout -q 80ca924\r\n Resolved https://github.com/huggingface/transformers.git to commit 80ca924\r\n Installing build dependencies ... done\r\n Getting requirements to build wheel ... done\r\n Preparing metadata (pyproject.toml) ... done\r\nCollecting filelock (from transformers==4.30.0.dev0)\r\n Downloading filelock-3.12.0-py3-none-any.whl (10 kB)\r\nCollecting huggingface-hub<1.0,>=0.14.1 (from transformers==4.30.0.dev0)\r\n Downloading huggingface_hub-0.15.1-py3-none-any.whl (236 kB)\r\n โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ 236.8/236.8 kB 48.8 MB/s eta 0:00:00\r\nCollecting numpy>=1.17 (from transformers==4.30.0.dev0)\r\n Downloading numpy-1.24.3-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (17.3 MB)\r\n โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ 17.3/17.3 MB 132.3 MB/s eta 0:00:00\r\nCollecting packaging>=20.0 (from transformers==4.30.0.dev0)\r\n Downloading packaging-23.1-py3-none-any.whl (48 kB)\r\n โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ 48.9/48.9 kB 249.5 MB/s eta 0:00:00\r\nCollecting pyyaml>=5.1 (from transformers==4.30.0.dev0)\r\n Downloading PyYAML-6.0-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl (661 kB)\r\n โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ 661.8/661.8 kB 253.8 MB/s eta 0:00:00\r\nCollecting regex!=2019.12.17 (from transformers==4.30.0.dev0)\r\n Downloading regex-2023.6.3-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (769 kB)\r\n โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ 769.9/769.9 kB 311.6 MB/s eta 0:00:00\r\nCollecting requests (from transformers==4.30.0.dev0)\r\n Downloading requests-2.31.0-py3-none-any.whl (62 kB)\r\n โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ 62.6/62.6 kB 269.3 MB/s eta 0:00:00\r\nCollecting tokenizers!=0.11.3,<0.14,>=0.11.1 (from transformers==4.30.0.dev0)\r\n Downloading tokenizers-0.13.3-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (7.8 MB)\r\n โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ 7.8/7.8 MB 160.6 MB/s eta 0:00:00\r\nCollecting tqdm>=4.27 (from transformers==4.30.0.dev0)\r\n Downloading tqdm-4.65.0-py3-none-any.whl (77 kB)\r\n โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ 77.1/77.1 kB 277.1 MB/s eta 0:00:00\r\nCollecting fsspec (from huggingface-hub<1.0,>=0.14.1->transformers==4.30.0.dev0)\r\n Downloading fsspec-2023.5.0-py3-none-any.whl (160 kB)\r\n โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ 160.1/160.1 kB 304.7 MB/s eta 0:00:00\r\nCollecting typing-extensions>=3.7.4.3 (from huggingface-hub<1.0,>=0.14.1->transformers==4.30.0.dev0)\r\n Downloading typing_extensions-4.6.3-py3-none-any.whl (31 kB)\r\nCollecting charset-normalizer<4,>=2 (from requests->transformers==4.30.0.dev0)\r\n Downloading charset_normalizer-3.1.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (199 kB)\r\n โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ 199.2/199.2 kB 312.4 MB/s eta 0:00:00\r\nCollecting idna<4,>=2.5 (from requests->transformers==4.30.0.dev0)\r\n 
Downloading idna-3.4-py3-none-any.whl (61 kB)\r\n โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ 61.5/61.5 kB 269.7 MB/s eta 0:00:00\r\nCollecting urllib3<3,>=1.21.1 (from requests->transformers==4.30.0.dev0)\r\n Downloading urllib3-2.0.2-py3-none-any.whl (123 kB)\r\n โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ 123.2/123.2 kB 182.9 MB/s eta 0:00:00\r\nCollecting certifi>=2017.4.17 (from requests->transformers==4.30.0.dev0)\r\n Downloading certifi-2023.5.7-py3-none-any.whl (156 kB)\r\n โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ 157.0/157.0 kB 308.0 MB/s eta 0:00:00\r\nBuilding wheels for collected packages: transformers\r\n Building wheel for transformers (pyproject.toml) ... done\r\n Created wheel for transformers: filename=transformers-4.30.0.dev0-py3-none-any.whl size=7079671 sha256=6be8d9585811de7b3573d50a1e9577a90a36b77b73af16c3b1a0e5dabd679f7b\r\n Stored in directory: /tmp/pip-ephem-wheel-cache-rdzrdy92/wheels/45/b0/e3/2eeba5f2822725123eba400b020e96ec93e60e14fa21699a10\r\nSuccessfully built transformers\r\nInstalling collected packages: tokenizers, urllib3, typing-extensions, tqdm, regex, pyyaml, packaging, numpy, idna, fsspec, filelock, charset-normalizer, certifi, requests, huggingface-hub, transformers\r\nSuccessfully installed certifi-2023.5.7 charset-normalizer-3.1.0 filelock-3.12.0 fsspec-2023.5.0 huggingface-hub-0.15.1 idna-3.4 numpy-1.24.3 packaging-23.1 pyyaml-6.0 regex-2023.6.3 requests-2.31.0 tokenizers-0.13.3 tqdm-4.65.0 transformers-4.30.0.dev0 typing-extensions-4.6.3 urllib3-2.0.2\r\n```",
"Do we have any ETA when will we release this security fix? ",
"As indicated on the page, v4.30.0 (released last week) contains the fix."
] | 1,684 | 1,686 | 1,684 |
CONTRIBUTOR
| null |
The `tempfile.mktemp` function is [deprecated](https://docs.python.org/3/library/tempfile.html#tempfile.mktemp) due to [security issues](https://cwe.mitre.org/data/definitions/377.html).
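For context, a minimal sketch of the replacement pattern (not the exact diff in this PR): `mkstemp` creates the file atomically and returns an open file descriptor, so the name cannot be raced by another process the way a `mktemp` name can.
```python
import os
import tempfile

# Deprecated, racy pattern: `mktemp` only returns a name, so another process
# can create the file between the call and the first use of the path.
# path = tempfile.mktemp()

# Safer pattern: the file is created atomically and a descriptor is returned.
fd, path = tempfile.mkstemp()
try:
    with os.fdopen(fd, "w") as tmp:
        tmp.write("temporary data")
finally:
    os.remove(path)
```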
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes the tempfile issue disclosed in [huntr](https://www.huntr.dev/bounties/a3867b4e-6701-4418-8c20-3c6e7084a44a/).
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger Can you please review these changes and approve this fix? Thanks.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23372/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23372/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23372",
"html_url": "https://github.com/huggingface/transformers/pull/23372",
"diff_url": "https://github.com/huggingface/transformers/pull/23372.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23372.patch",
"merged_at": 1684231854000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23371
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23371/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23371/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23371/events
|
https://github.com/huggingface/transformers/pull/23371
| 1,709,960,805 |
PR_kwDOCUB6oc5Qgb12
| 23,371 |
Revert "Only add files with modification outside doc blocks"
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"> Thanks for fixing!\r\n> \r\n> Apologies for not catching in the review either. In the nightly CI, have we run all of the doctests - or do we expect there to be any untested pieces of code between the merge of #23327 and this PR?\r\n\r\nOn daily doctest CI, everything is tested :-) - there is no filtration, it just check all files in `utils/documentation_tests.txt`.",
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,684 | 1,684 | 1,684 |
COLLABORATOR
| null |
Reverts huggingface/transformers#23327.
I apologize, but I read Sylvain's message too quickly and got it completely wrong.
> for now the tests are launched on a file if we modify it, but I would only launch it if docstrings are modified (e.g. check the modifications are correct) to go faster.
That merged PR did the converse instead: it adds a test file if only docstrings (instead of only code) are modified.
I will need to create something like `diff_is_code_only`.
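A rough sketch of one way such a check could look (purely illustrative and an assumption on my part, not the actual implementation in the repo; written here as the complementary `diff_is_docstring_only` test):
```python
import ast


def _dump_without_docstrings(source: str) -> str:
    # Parse the source and return its AST dump with every docstring removed.
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, (ast.Module, ast.ClassDef, ast.FunctionDef, ast.AsyncFunctionDef)):
            body = node.body
            if (
                body
                and isinstance(body[0], ast.Expr)
                and isinstance(body[0].value, ast.Constant)
                and isinstance(body[0].value.value, str)
            ):
                node.body = body[1:]
    return ast.dump(tree)


def diff_is_docstring_only(old_source: str, new_source: str) -> bool:
    # True when the two versions have identical ASTs once docstrings are
    # removed, i.e. the modification only touches documentation.
    return _dump_without_docstrings(old_source) == _dump_without_docstrings(new_source)
```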
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23371/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23371/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23371",
"html_url": "https://github.com/huggingface/transformers/pull/23371",
"diff_url": "https://github.com/huggingface/transformers/pull/23371.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23371.patch",
"merged_at": 1684153717000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23370
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23370/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23370/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23370/events
|
https://github.com/huggingface/transformers/pull/23370
| 1,709,936,673 |
PR_kwDOCUB6oc5QgWaD
| 23,370 |
Fix `OwlViTForObjectDetection.image_guided_detection` doc example
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@amyeroberts I know you actually want to give the approval but forgot doing it ๐
.\r\nAs I am a serious man, I would try not to merge without a format approval ๐ "
] | 1,684 | 1,684 | 1,684 |
COLLABORATOR
| null |
# What does this PR do?
Need to update expected values after #23157
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23370/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23370/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23370",
"html_url": "https://github.com/huggingface/transformers/pull/23370",
"diff_url": "https://github.com/huggingface/transformers/pull/23370.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23370.patch",
"merged_at": 1684153029000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23369
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23369/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23369/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23369/events
|
https://github.com/huggingface/transformers/pull/23369
| 1,709,911,587 |
PR_kwDOCUB6oc5QgQ-9
| 23,369 |
Fix `BigBirdForMaskedLM` doctest
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,684 | 1,684 | 1,684 |
COLLABORATOR
| null |
# What does this PR do?
Need to update some expected values in the doc example after #23056 (that PR also updated some values in the test file)
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23369/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23369/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23369",
"html_url": "https://github.com/huggingface/transformers/pull/23369",
"diff_url": "https://github.com/huggingface/transformers/pull/23369.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23369.patch",
"merged_at": 1684152943000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23368
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23368/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23368/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23368/events
|
https://github.com/huggingface/transformers/issues/23368
| 1,709,873,107 |
I_kwDOCUB6oc5l6pfT
| 23,368 |
RWKV split CPU & GPU results in high perplexity
|
{
"login": "3outeille",
"id": 47445085,
"node_id": "MDQ6VXNlcjQ3NDQ1MDg1",
"avatar_url": "https://avatars.githubusercontent.com/u/47445085?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/3outeille",
"html_url": "https://github.com/3outeille",
"followers_url": "https://api.github.com/users/3outeille/followers",
"following_url": "https://api.github.com/users/3outeille/following{/other_user}",
"gists_url": "https://api.github.com/users/3outeille/gists{/gist_id}",
"starred_url": "https://api.github.com/users/3outeille/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/3outeille/subscriptions",
"organizations_url": "https://api.github.com/users/3outeille/orgs",
"repos_url": "https://api.github.com/users/3outeille/repos",
"events_url": "https://api.github.com/users/3outeille/events{/privacy}",
"received_events_url": "https://api.github.com/users/3outeille/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false |
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
] |
[
"@younesbelkada Any update ?",
"Hi @3outeille \r\nSadly I didn't had time to check that out, are you still facing the issue with the latest main branch of transformers & accelerate?",
"Hi @younesbelkada, I update transformers & accelerate to the latest release version as shown here: https://github.com/3outeille/hf_rwkv_bug/blob/master/requirements.txt and the bug is still here",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,684 | 1,690 | 1,690 |
MEMBER
| null |
### System Info
Using the PR https://github.com/huggingface/transformers/pull/22797#event-9203076880, I tried to evaluate perplexity on wikitext2 using the Hugging Face RWKV model but found a weird behavior (gist to reproduce the bug: https://gist.github.com/3outeille/e74ec833ec2800a94325f8dad8e0da3d).
- When the model is fully loaded on CPU or GPU, perplexity is fine
- When some blocks of RWKV are loaded on CPU and the others on GPU, perplexity is high
Any idea?
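For illustration, a hedged sketch of the kind of CPU/GPU split that triggers the bad perplexity (the checkpoint, layer count and exact split here are placeholders, not the ones from the gist):
```python
import torch
from transformers import AutoTokenizer, RwkvForCausalLM

# Put the first half of the blocks on the GPU and the rest on the CPU.
device_map = {"rwkv.embeddings": 0, "rwkv.ln_out": "cpu", "head": "cpu"}
device_map.update({f"rwkv.blocks.{i}": 0 if i < 6 else "cpu" for i in range(12)})

tokenizer = AutoTokenizer.from_pretrained("RWKV/rwkv-4-169m-pile")
model = RwkvForCausalLM.from_pretrained(
    "RWKV/rwkv-4-169m-pile",
    device_map=device_map,
    torch_dtype=torch.float16,
)
```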
### Who can help?
@sgugger, @younesbelkada
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
https://gist.github.com/3outeille/e74ec833ec2800a94325f8dad8e0da3d
### Expected behavior
- Full CPU ✔️ :
- `nlls: tensor([2.0129, 2.3220, 2.3500])`
- `Perplexity: 9.284077644348145`
- Full GPU ✔️ :
- `nlls: tensor([2.0137, 2.3223, 2.3496], device='cuda:0', dtype=torch.float16)`
- `Perplexity: 9.2890625`
- Split 🔴 :
- `nlls: tensor([15.6641, 15.9141, 16.5469], device='cuda:0', dtype=torch.float16)`
- `Perplexity: 9312564.0`
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23368/reactions",
"total_count": 4,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 4
}
|
https://api.github.com/repos/huggingface/transformers/issues/23368/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23367
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23367/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23367/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23367/events
|
https://github.com/huggingface/transformers/pull/23367
| 1,709,794,222 |
PR_kwDOCUB6oc5Qf3Dz
| 23,367 |
[Bugfix] `OPTDecoderLayer` does not return attentions when `gradient_checkpointing` and `training` is enabled.
|
{
"login": "gmlwns2000",
"id": 4879345,
"node_id": "MDQ6VXNlcjQ4NzkzNDU=",
"avatar_url": "https://avatars.githubusercontent.com/u/4879345?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gmlwns2000",
"html_url": "https://github.com/gmlwns2000",
"followers_url": "https://api.github.com/users/gmlwns2000/followers",
"following_url": "https://api.github.com/users/gmlwns2000/following{/other_user}",
"gists_url": "https://api.github.com/users/gmlwns2000/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gmlwns2000/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gmlwns2000/subscriptions",
"organizations_url": "https://api.github.com/users/gmlwns2000/orgs",
"repos_url": "https://api.github.com/users/gmlwns2000/repos",
"events_url": "https://api.github.com/users/gmlwns2000/events{/privacy}",
"received_events_url": "https://api.github.com/users/gmlwns2000/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"cc @younesbelkada @ArthurZucker "
] | 1,684 | 1,684 | 1,684 |
CONTRIBUTOR
| null |
# What does this PR do?
Reorders the arguments of `OPTDecoderLayer.forward` so that they match the order in which they are passed positionally at the call site.
Fixes #23366
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23367/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23367/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23367",
"html_url": "https://github.com/huggingface/transformers/pull/23367",
"diff_url": "https://github.com/huggingface/transformers/pull/23367.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23367.patch",
"merged_at": 1684153913000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23366
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23366/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23366/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23366/events
|
https://github.com/huggingface/transformers/issues/23366
| 1,709,783,270 |
I_kwDOCUB6oc5l6Tjm
| 23,366 |
`OPTDecoderLayer` does not return attentions when `gradient_checkpointing` and `training` is enabled.
|
{
"login": "gmlwns2000",
"id": 4879345,
"node_id": "MDQ6VXNlcjQ4NzkzNDU=",
"avatar_url": "https://avatars.githubusercontent.com/u/4879345?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gmlwns2000",
"html_url": "https://github.com/gmlwns2000",
"followers_url": "https://api.github.com/users/gmlwns2000/followers",
"following_url": "https://api.github.com/users/gmlwns2000/following{/other_user}",
"gists_url": "https://api.github.com/users/gmlwns2000/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gmlwns2000/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gmlwns2000/subscriptions",
"organizations_url": "https://api.github.com/users/gmlwns2000/orgs",
"repos_url": "https://api.github.com/users/gmlwns2000/repos",
"events_url": "https://api.github.com/users/gmlwns2000/events{/privacy}",
"received_events_url": "https://api.github.com/users/gmlwns2000/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[] | 1,684 | 1,684 | 1,684 |
CONTRIBUTOR
| null |
# Bug Description
In `modeling_opt.py#704:710` [code](https://github.com/huggingface/transformers/blob/cf11493dce0a1d22446efe0d6c4ade02fd928e50/src/transformers/models/opt/modeling_opt.py#L704), `OPTDecoder` calls `OPTDecoderLayer.forward` with the following argument order.
```py
if self.gradient_checkpointing and self.training:
def create_custom_forward(module):
def custom_forward(*inputs):
# None for past_key_value
return module(*inputs, output_attentions, None)
return custom_forward
layer_outputs = torch.utils.checkpoint.checkpoint(
create_custom_forward(decoder_layer),
hidden_states,
causal_attention_mask,
head_mask[idx] if head_mask is not None else None,
None,
)
else:
layer_outputs = decoder_layer(
hidden_states,
attention_mask=causal_attention_mask,
layer_head_mask=(head_mask[idx] if head_mask is not None else None),
past_key_value=past_key_value,
output_attentions=output_attentions,
use_cache=use_cache,
)
```
However, in `OPTDecoderLayer.forward` [code](https://github.com/huggingface/transformers/blob/cf11493dce0a1d22446efe0d6c4ade02fd928e50/src/transformers/models/opt/modeling_opt.py#L297), the argument order is different from the call order shown above.
```py
def forward(
self,
hidden_states: torch.Tensor,
attention_mask: Optional[torch.Tensor] = None,
layer_head_mask: Optional[torch.Tensor] = None,
output_attentions: Optional[bool] = False, # **need to be reorder**
use_cache: Optional[bool] = False, # **need to be reorder**
past_key_value: Optional[Tuple[torch.Tensor]] = None, # **need to be reorder**
) -> Tuple[torch.FloatTensor, Optional[Tuple[torch.FloatTensor, torch.FloatTensor]]]:
```
Therefore, `output_attentions` in `OPTDecoderLayer.forward` always ends up `None`, because the 4th positional argument in the function call is always `None` [code](https://github.com/huggingface/transformers/blob/cf11493dce0a1d22446efe0d6c4ade02fd928e50/src/transformers/models/opt/modeling_opt.py#LL701C26-L701C26)
# Solution
Just change the parameter order in the declaration of `OPTDecoderLayer.forward` as follows:
```py
def forward(
self,
hidden_states: torch.Tensor,
attention_mask: Optional[torch.Tensor] = None,
layer_head_mask: Optional[torch.Tensor] = None,
past_key_value: Optional[Tuple[torch.Tensor]] = None,
output_attentions: Optional[bool] = False,
use_cache: Optional[bool] = False,
) -> Tuple[torch.FloatTensor, Optional[Tuple[torch.FloatTensor, torch.FloatTensor]]]:
```
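For illustration, here is a minimal standalone sketch (plain Python, not the actual `transformers` code) showing why the positional mismatch silently drops `output_attentions` and why the reordering fixes it:
```py
# The checkpoint wrapper forwards its extra arguments positionally, so the parameter
# order of the callee decides which value lands where.
def old_forward(hidden_states, attention_mask=None, layer_head_mask=None,
                output_attentions=False, use_cache=False, past_key_value=None):
    return output_attentions

def new_forward(hidden_states, attention_mask=None, layer_head_mask=None,
                past_key_value=None, output_attentions=False, use_cache=False):
    return output_attentions

# Mirrors module(*inputs, output_attentions, None) from create_custom_forward
inputs = ("hidden_states", "causal_attention_mask", None, None)
output_attentions = True

print(old_forward(*inputs, output_attentions, None))  # None  -> attentions silently dropped
print(new_forward(*inputs, output_attentions, None))  # True  -> attentions requested as intended
```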
### System Information
- `transformers` version: 4.29.1
- Platform: Linux-5.15.0-58-generic-x86_64-with-glibc2.35
- Python version: 3.9.16
- Huggingface_hub version: 0.14.1
- Safetensors version: 0.2.7
- PyTorch version (GPU?): 2.0.1+cu117 (True)
- Tensorflow version (GPU?): 2.12.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes and No. Bug happens in both places.
- Using distributed or parallel set-up in script?: None
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```py
import transformers
from transformers.models.opt.modeling_opt import OPTDecoder
import torch
model = transformers.OPTForCausalLM.from_pretrained('facebook/opt-125m')
model.train()
for m in model.modules():
if isinstance(m, OPTDecoder):
m.gradient_checkpointing = True
m.config.use_cache = False
output = model(torch.zeros((1, 4), dtype=torch.int64), output_attentions=True)
assert type(output.attentions) == tuple
assert type(output.attentions[0]) == torch.Tensor, type(output.attentions[0])
```
The above test code should finish without error. However, the result is the following.
```
(torch) ainl@ainl-main-ubuntu:~/library/bug$ python -m opt_bug
Traceback (most recent call last):
File "/home/ainl/anaconda3/envs/torch/lib/python3.9/runpy.py", line 197, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/home/ainl/anaconda3/envs/torch/lib/python3.9/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/home/ainl/library/bug/opt_bug.py", line 13, in <module>
assert type(output.attentions[0]) == torch.Tensor, type(output.attentions[0])
AssertionError: <class 'tuple'>
```
Following is my environment setting.
```
(torch) ainl@ainl-main-ubuntu:~/library/bug$ pip show torch transformers
Name: torch
Version: 2.0.1
Summary: Tensors and Dynamic neural networks in Python with strong GPU acceleration
Home-page: https://pytorch.org/
Author: PyTorch Team
Author-email: [email protected]
License: BSD-3
Location: /home/ainl/anaconda3/envs/torch/lib/python3.9/site-packages
Requires: filelock, jinja2, networkx, nvidia-cublas-cu11, nvidia-cuda-cupti-cu11, nvidia-cuda-nvrtc-cu11, nvidia-cuda-runtime-cu11, nvidia-cudnn-cu11, nvidia-cufft-cu11, nvidia-curand-cu11, nvidia-cusolver-cu11, nvidia-cusparse-cu11, nvidia-nccl-cu11, nvidia-nvtx-cu11, sympy, triton, typing-extensions
Required-by: axial-positional-embedding, basicsr, deepspeed, facexlib, gfpgan, invisible-watermark, local-attention, onnx2torch, open-clip-torch, performer-pytorch, product-key-memory, pytorch-tabnet, realesrgan, sinkhorn-transformer, thop, timm, torch-tensorrt, torchaudio, torchdata, torchtext, torchvision, triton
---
Name: transformers
Version: 4.29.1
Summary: State-of-the-art Machine Learning for JAX, PyTorch and TensorFlow
Home-page: https://github.com/huggingface/transformers
Author: The Hugging Face team (past and future) with the help of all our contributors (https://github.com/huggingface/transformers/graphs/contributors)
Author-email: [email protected]
License: Apache 2.0 License
Location: /home/ainl/anaconda3/envs/torch/lib/python3.9/site-packages
Requires: filelock, huggingface-hub, numpy, packaging, pyyaml, regex, requests, tokenizers, tqdm
Required-by:
```
### Expected behavior
Finish the above test code without any errors.
# Call for Moderator (Text-models)
@ArthurZucker and @younesbelkada
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23366/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23366/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23365
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23365/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23365/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23365/events
|
https://github.com/huggingface/transformers/pull/23365
| 1,709,763,813 |
PR_kwDOCUB6oc5Qfwg1
| 23,365 |
Fix some `is_xxx_available`
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"@ydshieh thanks for fixing this and sorry for having introduced these bugs in first place.",
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,684 | 1,684 | 1,684 |
COLLABORATOR
| null |
# What does this PR do?
FYI, after #23163, `is_bs4_available()` and `is_faiss_available()` return `False` even when the packages are actually available. This causes some CI errors, in particular for `MarkupLM`.
This PR fixes the issue in a quick way. It would be better to discuss whether we want to enhance the function `_is_package_available` to handle such edge cases.
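For context, here is a hedged sketch of the kind of edge case such availability checks can run into; the function below is illustrative only and is not the actual `_is_package_available` implementation:
```python
# A purely metadata-based check can fail for packages whose import name differs from
# their distribution name, e.g. `bs4` (usually installed as `beautifulsoup4`) or
# `faiss` (usually installed as `faiss-cpu`/`faiss-gpu`), even though the import works.
import importlib.metadata
import importlib.util


def naive_is_package_available(name: str) -> bool:
    if importlib.util.find_spec(name) is None:
        return False
    try:
        importlib.metadata.version(name)  # looks up a *distribution* named `name`
        return True
    except importlib.metadata.PackageNotFoundError:
        return False


print(naive_is_package_available("bs4"))  # typically False even when BeautifulSoup is importable
```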
(cc. @apbard FYI)
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23365/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23365/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23365",
"html_url": "https://github.com/huggingface/transformers/pull/23365",
"diff_url": "https://github.com/huggingface/transformers/pull/23365.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23365.patch",
"merged_at": 1684152525000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23364
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23364/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23364/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23364/events
|
https://github.com/huggingface/transformers/pull/23364
| 1,709,657,500 |
PR_kwDOCUB6oc5QfZXG
| 23,364 |
Minor fixes in transformers-tools
|
{
"login": "Wauplin",
"id": 11801849,
"node_id": "MDQ6VXNlcjExODAxODQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/11801849?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Wauplin",
"html_url": "https://github.com/Wauplin",
"followers_url": "https://api.github.com/users/Wauplin/followers",
"following_url": "https://api.github.com/users/Wauplin/following{/other_user}",
"gists_url": "https://api.github.com/users/Wauplin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Wauplin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Wauplin/subscriptions",
"organizations_url": "https://api.github.com/users/Wauplin/orgs",
"repos_url": "https://api.github.com/users/Wauplin/repos",
"events_url": "https://api.github.com/users/Wauplin/events{/privacy}",
"received_events_url": "https://api.github.com/users/Wauplin/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,684 | 1,684 | 1,684 |
CONTRIBUTOR
| null |
Really just a few things as I dig a bit into the implementation of transformers-tools:
- `upload_folder` instead of `os.listdir` + `create_commit` (more robust against recursion)
- some typing
- use `metadata_update` with correct `repo_id` when pushing to Hub
- use `build_hf_headers` instead of `HfFolder` for token retrieval
- use `super().__init__()` and `super().setup()` in `PipelineTool` (otherwise the pipeline is setup again at each run)
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23364/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23364/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23364",
"html_url": "https://github.com/huggingface/transformers/pull/23364",
"diff_url": "https://github.com/huggingface/transformers/pull/23364.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23364.patch",
"merged_at": 1684245344000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23363
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23363/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23363/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23363/events
|
https://github.com/huggingface/transformers/pull/23363
| 1,709,581,568 |
PR_kwDOCUB6oc5QfJDq
| 23,363 |
Fix issue introduced in PR #23163
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"FYI, after #23163, `is_bs4_available()` and `is_faiss_available()` gives `False` even if they are actually available. This causes some CI errors, in particularly, `MarkupLM`.\r\n\r\nI will fix this in a separate PR."
] | 1,684 | 1,684 | 1,684 |
COLLABORATOR
| null |
# What does this PR do?
Fix issue introduced in PR #23163.
The previous `torch_version` (a `Version` object, not a string) was removed, and `get_torch_version()` (which returns a string) was introduced and used instead. In some places the value is compared against `self.torch_onnx_minimum_version` (a `Version`), and we now get on CI `TypeError: '<' not supported between instances of 'str' and 'Version'`.
This PR fixes this problem and avoids the > 1000 test failures.
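A minimal sketch of the error class, assuming `packaging.version` is used for the comparison (illustrative only, not the actual patch):
```python
from packaging import version

minimum_version = version.parse("1.12")  # a Version object on one side
current_version = "2.0.1"                # a plain string on the other

# `current_version < minimum_version` raises:
#   TypeError: '<' not supported between instances of 'str' and 'Version'
# Parsing the string side restores the comparison.
print(version.parse(current_version) >= minimum_version)  # True
```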
(cc. @apbard FYI)
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23363/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23363/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23363",
"html_url": "https://github.com/huggingface/transformers/pull/23363",
"diff_url": "https://github.com/huggingface/transformers/pull/23363.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23363.patch",
"merged_at": 1684143524000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23362
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23362/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23362/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23362/events
|
https://github.com/huggingface/transformers/pull/23362
| 1,709,526,240 |
PR_kwDOCUB6oc5Qe86M
| 23,362 |
[image-to-text pipeline] Add conditional text support + GIT
|
{
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@Narsil thanks for your review, feel free to approve.",
"Apologies, will take this into account."
] | 1,684 | 1,684 | 1,684 |
CONTRIBUTOR
| null |
# What does this PR do?
The `ImageToText` pipeline can generate text given an image, but oftentimes one wants to make the model continue text given a prompt (like "a photo of"). This PR adds support for conditional text generation given an image.
It also adds support for GIT.
This PR fixes a part of #21110 and is based on #22423.
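As a hedged usage sketch, conditional captioning through the pipeline could look roughly like the snippet below; the `prompt` keyword, the checkpoint name and the local image path are assumptions based on this description, not a confirmed API:
```python
from transformers import pipeline

captioner = pipeline("image-to-text", model="microsoft/git-base-coco")

# Unconditional captioning ("cats.png" is a placeholder for any local image or URL)
print(captioner("cats.png"))

# Conditional captioning: the model continues the given prompt
print(captioner("cats.png", prompt="a photo of"))
```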
To do:
- [x] add support for Pix2Struct once design is approved
cc @younesbelkada
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23362/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23362/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23362",
"html_url": "https://github.com/huggingface/transformers/pull/23362",
"diff_url": "https://github.com/huggingface/transformers/pull/23362.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23362.patch",
"merged_at": 1684784750000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23361
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23361/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23361/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23361/events
|
https://github.com/huggingface/transformers/pull/23361
| 1,709,501,925 |
PR_kwDOCUB6oc5Qe3zJ
| 23,361 |
[wip test doc-build]
|
{
"login": "mishig25",
"id": 11827707,
"node_id": "MDQ6VXNlcjExODI3NzA3",
"avatar_url": "https://avatars.githubusercontent.com/u/11827707?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mishig25",
"html_url": "https://github.com/mishig25",
"followers_url": "https://api.github.com/users/mishig25/followers",
"following_url": "https://api.github.com/users/mishig25/following{/other_user}",
"gists_url": "https://api.github.com/users/mishig25/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mishig25/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mishig25/subscriptions",
"organizations_url": "https://api.github.com/users/mishig25/orgs",
"repos_url": "https://api.github.com/users/mishig25/repos",
"events_url": "https://api.github.com/users/mishig25/events{/privacy}",
"received_events_url": "https://api.github.com/users/mishig25/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,684 | 1,684 | 1,684 |
CONTRIBUTOR
| null |
testing https://github.com/huggingface/doc-builder/pull/372
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23361/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23361/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23361",
"html_url": "https://github.com/huggingface/transformers/pull/23361",
"diff_url": "https://github.com/huggingface/transformers/pull/23361.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23361.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/23360
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23360/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23360/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23360/events
|
https://github.com/huggingface/transformers/pull/23360
| 1,709,262,385 |
PR_kwDOCUB6oc5QeErQ
| 23,360 |
Typo suggestion
|
{
"login": "richardachen",
"id": 85973297,
"node_id": "MDQ6VXNlcjg1OTczMjk3",
"avatar_url": "https://avatars.githubusercontent.com/u/85973297?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/richardachen",
"html_url": "https://github.com/richardachen",
"followers_url": "https://api.github.com/users/richardachen/followers",
"following_url": "https://api.github.com/users/richardachen/following{/other_user}",
"gists_url": "https://api.github.com/users/richardachen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/richardachen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/richardachen/subscriptions",
"organizations_url": "https://api.github.com/users/richardachen/orgs",
"repos_url": "https://api.github.com/users/richardachen/repos",
"events_url": "https://api.github.com/users/richardachen/events{/privacy}",
"received_events_url": "https://api.github.com/users/richardachen/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,684 | 1,684 | 1,684 |
CONTRIBUTOR
| null |
Typo corrected in docs: "preprocessign" --> "preprocessing"
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23360/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23360/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23360",
"html_url": "https://github.com/huggingface/transformers/pull/23360",
"diff_url": "https://github.com/huggingface/transformers/pull/23360.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23360.patch",
"merged_at": 1684148656000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23359
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23359/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23359/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23359/events
|
https://github.com/huggingface/transformers/pull/23359
| 1,709,248,190 |
PR_kwDOCUB6oc5QeByl
| 23,359 |
Replace appends with list comprehension.
|
{
"login": "ttsugriy",
"id": 172294,
"node_id": "MDQ6VXNlcjE3MjI5NA==",
"avatar_url": "https://avatars.githubusercontent.com/u/172294?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ttsugriy",
"html_url": "https://github.com/ttsugriy",
"followers_url": "https://api.github.com/users/ttsugriy/followers",
"following_url": "https://api.github.com/users/ttsugriy/following{/other_user}",
"gists_url": "https://api.github.com/users/ttsugriy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ttsugriy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ttsugriy/subscriptions",
"organizations_url": "https://api.github.com/users/ttsugriy/orgs",
"repos_url": "https://api.github.com/users/ttsugriy/repos",
"events_url": "https://api.github.com/users/ttsugriy/events{/privacy}",
"received_events_url": "https://api.github.com/users/ttsugriy/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,684 | 1,684 | 1,684 |
CONTRIBUTOR
| null |
It's more idiomatic and significantly more efficient because 1) it avoids the repeated `append` call that Python has to resolve on each iteration, and 2) the size of the final list can be preallocated, avoiding resizing.
# What does this PR do?
This PR uses list comprehensions instead of list appends.
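Illustrative example of the pattern being replaced (generic code, not taken from the diff):
```python
items = range(10)

# Before: repeated append calls, each resolving the bound method on every iteration
squares = []
for x in items:
    squares.append(x * x)

# After: a list comprehension builds the same list in one pass
squares = [x * x for x in items]
print(squares)
```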
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23359/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23359/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23359",
"html_url": "https://github.com/huggingface/transformers/pull/23359",
"diff_url": "https://github.com/huggingface/transformers/pull/23359.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23359.patch",
"merged_at": 1684264452000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23358
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23358/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23358/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23358/events
|
https://github.com/huggingface/transformers/issues/23358
| 1,709,084,999 |
I_kwDOCUB6oc5l3pFH
| 23,358 |
Phidas
|
{
"login": "phidass",
"id": 121753248,
"node_id": "U_kgDOB0HOoA",
"avatar_url": "https://avatars.githubusercontent.com/u/121753248?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/phidass",
"html_url": "https://github.com/phidass",
"followers_url": "https://api.github.com/users/phidass/followers",
"following_url": "https://api.github.com/users/phidass/following{/other_user}",
"gists_url": "https://api.github.com/users/phidass/gists{/gist_id}",
"starred_url": "https://api.github.com/users/phidass/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/phidass/subscriptions",
"organizations_url": "https://api.github.com/users/phidass/orgs",
"repos_url": "https://api.github.com/users/phidass/repos",
"events_url": "https://api.github.com/users/phidass/events{/privacy}",
"received_events_url": "https://api.github.com/users/phidass/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[] | 1,684 | 1,684 | 1,684 |
NONE
| null |
More artificial intelligence!
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23358/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23358/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23357
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23357/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23357/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23357/events
|
https://github.com/huggingface/transformers/issues/23357
| 1,708,989,942 |
I_kwDOCUB6oc5l3R32
| 23,357 |
Model isn't loaded with the right type with AutoModel with torch_dtype="auto"
|
{
"login": "eladsegal",
"id": 13485709,
"node_id": "MDQ6VXNlcjEzNDg1NzA5",
"avatar_url": "https://avatars.githubusercontent.com/u/13485709?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eladsegal",
"html_url": "https://github.com/eladsegal",
"followers_url": "https://api.github.com/users/eladsegal/followers",
"following_url": "https://api.github.com/users/eladsegal/following{/other_user}",
"gists_url": "https://api.github.com/users/eladsegal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eladsegal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eladsegal/subscriptions",
"organizations_url": "https://api.github.com/users/eladsegal/orgs",
"repos_url": "https://api.github.com/users/eladsegal/repos",
"events_url": "https://api.github.com/users/eladsegal/events{/privacy}",
"received_events_url": "https://api.github.com/users/eladsegal/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false |
{
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
}
] |
[
"cc @younesbelkada ",
"Hi @eladsegal ๐\r\n\r\n- The model was saved in `bfloat16` with `\"T5ForConditionalGeneration\"` architecture so the model was loaded in `bfloat16`\r\n- But in `AutoModel.from_pretrained ` method torch_dtype is set to `auto` and you can read the doc ( image i have uploaded) that dtype picked by `auto` is generally `float32`\r\n- Hope it helps , If i misunderstood your question ,Please ๐ give a feedback\r\n\r\n**You can check this --> [doc](https://huggingface.co/docs/transformers/v4.29.1/en/main_classes/configuration#transformers.PretrainedConfig.torch_dtype)**",
"Thank you for the report, @eladsegal - that's indeed a bug that I introduced while trying to fix another issue. \r\n\r\nPlease try this fix: https://github.com/huggingface/transformers/pull/23379",
"Thank you @stas00 for the quick fix! Works just as expected. ",
"Thank you for confirming that, @eladsegal!"
] | 1,684 | 1,684 | 1,684 |
CONTRIBUTOR
| null |
### System Info
- `transformers` version: 4.30.0.dev0
### Who can help?
@stas00
(as the relevant change was made in https://github.com/huggingface/transformers/pull/21524)
### Reproduction
```python
from transformers import AutoModelForSeq2SeqLM, T5ForConditionalGeneration
auto_model = AutoModelForSeq2SeqLM.from_pretrained("ybelkada/flan-t5-xl-sharded-bf16", torch_dtype="auto")
print(auto_model.dtype) # torch.float32
t5_model = T5ForConditionalGeneration.from_pretrained("ybelkada/flan-t5-xl-sharded-bf16", torch_dtype="auto")
print(t5_model.dtype) # torch.bfloat16
```
### Expected behavior
`AutoModelForSeq2SeqLM` should also load the model in `torch.bfloat16`.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23357/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23357/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23356
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23356/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23356/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23356/events
|
https://github.com/huggingface/transformers/pull/23356
| 1,708,947,631 |
PR_kwDOCUB6oc5QdF4V
| 23,356 |
Replace NumPy Operations with JAX NumPy Equivalents for JIT Compilation Compatibility
|
{
"login": "gojiteji",
"id": 38291975,
"node_id": "MDQ6VXNlcjM4MjkxOTc1",
"avatar_url": "https://avatars.githubusercontent.com/u/38291975?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gojiteji",
"html_url": "https://github.com/gojiteji",
"followers_url": "https://api.github.com/users/gojiteji/followers",
"following_url": "https://api.github.com/users/gojiteji/following{/other_user}",
"gists_url": "https://api.github.com/users/gojiteji/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gojiteji/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gojiteji/subscriptions",
"organizations_url": "https://api.github.com/users/gojiteji/orgs",
"repos_url": "https://api.github.com/users/gojiteji/repos",
"events_url": "https://api.github.com/users/gojiteji/events{/privacy}",
"received_events_url": "https://api.github.com/users/gojiteji/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"doneโ
https://github.com/huggingface/transformers/pull/23356/commits/175ab5dd0b8276f70b69ab21ddb4356aa353d611",
"@gojiteji To resolve the failing quality tests, you'll need to run `make fix-copies` and `make style` and push the changes. It seems the MT5 also doesn't have `jnp` defined in the modeling file. "
] | 1,684 | 1,684 | 1,684 |
CONTRIBUTOR
| null |
# What does this PR do?
This PR modifies the Transformers library to replace NumPy operations with their JAX NumPy equivalents. The main change is the use of JAX's immutable update methods as substitutes for in-place assignments.
Using NumPy methods instead of JAX NumPy results in an error during JIT compilation.
```
transformers/models/mbart/modeling_flax_mbart.py", line 226, in shift_tokens_right
prev_output_tokens = np.array(input_ids).copy()
jax._src.errors.TracerArrayConversionError: The numpy.ndarray conversion method __array__() was called on the JAX Tracer object Traced<ShapedArray(int32[4,260])>with<DynamicJaxprTrace(level=0/1)>
```
Here's a brief summary of the changes:
previous:
```python
prev_output_tokens = np.array(input_ids).copy()
if pad_token_id is None:
raise ValueError("self.model.config.pad_token_id has to be defined.")
# replace possible -100 values in labels by `pad_token_id`
prev_output_tokens = np.where(prev_output_tokens == -100, pad_token_id, input_ids)
index_of_eos = (np.where(prev_output_tokens != pad_token_id, 1, 0).sum(axis=-1) - 1).reshape(-1, 1)
decoder_start_tokens = np.array(
[prev_output_tokens[i, eos_idx] for i, eos_idx in enumerate(index_of_eos)], dtype=np.int32
).squeeze()
prev_output_tokens[:, 1:] = prev_output_tokens[:, :-1].copy()
prev_output_tokens[:, 0] = decoder_start_tokens
```
modified:
```python
prev_output_tokens = jnp.array(input_ids).copy()
if pad_token_id is None:
raise ValueError("self.model.config.pad_token_id has to be defined.")
# replace possible -100 values in labels by `pad_token_id`
prev_output_tokens = jnp.where(prev_output_tokens == -100, pad_token_id, input_ids)
index_of_eos = (jnp.where(prev_output_tokens != pad_token_id, 1, 0).sum(axis=-1) - 1).reshape(-1, 1)
decoder_start_tokens = jnp.array(
[prev_output_tokens[i, eos_idx] for i, eos_idx in enumerate(index_of_eos)], dtype=jnp.int32
).squeeze()
prev_output_tokens = prev_output_tokens.at[:, 1:].set(prev_output_tokens[:, :-1])
prev_output_tokens = prev_output_tokens.at[:, 0].set(decoder_start_tokens)
```
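As a small self-contained illustration of the immutable-update idiom under `jit` (not part of the PR itself):
```python
import jax
import jax.numpy as jnp


@jax.jit
def shift_right(x, start_token=0):
    # x[:, 1:] = x[:, :-1] would fail under tracing; .at[...].set(...) returns a new array
    shifted = x.at[:, 1:].set(x[:, :-1])
    return shifted.at[:, 0].set(start_token)


print(shift_right(jnp.arange(8).reshape(2, 4)))
```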
- @sanchit-gandhi
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23356/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23356/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23356",
"html_url": "https://github.com/huggingface/transformers/pull/23356",
"diff_url": "https://github.com/huggingface/transformers/pull/23356.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23356.patch",
"merged_at": 1684230860000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23355
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23355/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23355/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23355/events
|
https://github.com/huggingface/transformers/pull/23355
| 1,708,904,969 |
PR_kwDOCUB6oc5Qc9bX
| 23,355 |
Added support for AzureOpenAiAgent in tools
|
{
"login": "waundme",
"id": 32538753,
"node_id": "MDQ6VXNlcjMyNTM4NzUz",
"avatar_url": "https://avatars.githubusercontent.com/u/32538753?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/waundme",
"html_url": "https://github.com/waundme",
"followers_url": "https://api.github.com/users/waundme/followers",
"following_url": "https://api.github.com/users/waundme/following{/other_user}",
"gists_url": "https://api.github.com/users/waundme/gists{/gist_id}",
"starred_url": "https://api.github.com/users/waundme/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/waundme/subscriptions",
"organizations_url": "https://api.github.com/users/waundme/orgs",
"repos_url": "https://api.github.com/users/waundme/repos",
"events_url": "https://api.github.com/users/waundme/events{/privacy}",
"received_events_url": "https://api.github.com/users/waundme/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23355). All of your documentation changes will be reflected on that endpoint.",
"Awesome that you are contributing.\r\nIn my opinion, rather than having a separate AzureOpenAiAgent adding the options and API calls in the OpenAiAgent would be preferrable, so it is only configuration of parameters to switch to Azure. Not my call, but my preference. ",
"cc @sgugger ",
"Ah actually I see two arguments are renamed. Maybe have this be a subclass of `OpenAiAgent` to avoid rewriting every method and just rewrites `_completion_generate` and `_chat_generate`?",
"DeploymentId is named arbitrarily and does not let you directly derive the model type from it unless you do additional requests to look it up.\nThe underlying Python OpenAI SDK has a way of differentiating between Azure OpenAI and OpenAI's own deployment. In my opinion it would make sense to align the API style and expose what the API exposes in a similar fashion. ",
"> The underlying Python OpenAI SDK has a way of differentiating between Azure OpenAI and OpenAI's own deployment. In my opinion it would make sense to align the API style and expose what the API exposes in a similar fashion.\r\n\r\nIf it's easily doable, then yes let's aim for that!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,684 | 1,687 | 1,687 |
NONE
| null |
# What does this PR do?
Implements a new class AzureOpenAiAgent derived from Agent in transformer agents.
Fixes #23324
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23355/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23355/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23355",
"html_url": "https://github.com/huggingface/transformers/pull/23355",
"diff_url": "https://github.com/huggingface/transformers/pull/23355.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23355.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/23354
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23354/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23354/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23354/events
|
https://github.com/huggingface/transformers/issues/23354
| 1,708,804,231 |
I_kwDOCUB6oc5l2kiH
| 23,354 |
Make it easy to get separate "prints" for individual runs/users when using Transformers Agent
|
{
"login": "MarcSkovMadsen",
"id": 42288570,
"node_id": "MDQ6VXNlcjQyMjg4NTcw",
"avatar_url": "https://avatars.githubusercontent.com/u/42288570?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MarcSkovMadsen",
"html_url": "https://github.com/MarcSkovMadsen",
"followers_url": "https://api.github.com/users/MarcSkovMadsen/followers",
"following_url": "https://api.github.com/users/MarcSkovMadsen/following{/other_user}",
"gists_url": "https://api.github.com/users/MarcSkovMadsen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MarcSkovMadsen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MarcSkovMadsen/subscriptions",
"organizations_url": "https://api.github.com/users/MarcSkovMadsen/orgs",
"repos_url": "https://api.github.com/users/MarcSkovMadsen/repos",
"events_url": "https://api.github.com/users/MarcSkovMadsen/events{/privacy}",
"received_events_url": "https://api.github.com/users/MarcSkovMadsen/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] |
open
| false | null |
[] |
[
"The solution for me is probably to inspect the `run` function and then compose the pieces in a way that works better for my app.\r\n\r\n",
"cc @sgugger @LysandreJik ",
"Would the PR mentioned above fix your problem?"
] | 1,684 | 1,684 | null |
NONE
| null |
### Feature request
I have started exploring the new Transformers Agent. And I would like to build a UI to help me speed up the process.
I might be running multiple runs in parallel or have multiple users using my application. I would like to be able to stream the information from the run as it arrives. I would like to store the information in a database containing all the runs I've done.
Currently, all the valuable information about the run is printed, i.e. you are using `print` to inform me, like below:
```bash
==Explanation from the agent==
I will use the following tool: `image_generator` to generate an image.
==Code generated by the agent==
image = image_generator(prompt="rivers and lakes")
==Result==
<PIL.PngImagePlugin.PngImageFile image mode=RGB size=512x512 at 0x7F8DDC11C4C0>
```
This is for example done in `agents.py`:

Using `print` makes it hard for me to distinguish between multiple runs/users, especially if they run in parallel.
Please provide a simple-to-use method to stream each run individually. It could be as simple as adding a `print` (or `write`) argument to the `Agent.run`, `HFAgent.run` and `OpenAI.run` methods.
Alternatively, a `run_id` argument could be provided and printed as well. Then I can split the incoming stream by `run_id`. This is less preferred though, as it also adds some complexity.
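For concreteness, here is a hypothetical sketch of the requested API surface; the names and signature below are purely illustrative and do not correspond to any existing `transformers` interface:
```python
from typing import Callable, Optional


def run(task: str, run_id: Optional[str] = None, write: Callable[[str], None] = print) -> None:
    prefix = f"[{run_id}] " if run_id else ""
    write(f"{prefix}==Explanation from the agent==")
    write(f"{prefix}I will use the following tool: `image_generator` to generate an image.")
    write(f"{prefix}==Code generated by the agent==")
    # ... the agent would generate and execute code here, routing all output through `write`


# Each run/user supplies its own writer, so parallel runs can be separated or stored per run_id
run("Draw me a picture of rivers and lakes", run_id="user-42")
```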
### Motivation
This will make it much, much easier to create interesting AI apps.
### Your contribution
I might do it. But I hope someone with knowledge of the code base would do it.
### Additional Context
An async `.run_async` function would also be much appreciated as my UI is built on top of Tornado. This will help me keep the app responsive.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23354/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23354/timeline
| null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/23353
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23353/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23353/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23353/events
|
https://github.com/huggingface/transformers/pull/23353
| 1,708,756,233 |
PR_kwDOCUB6oc5QchLn
| 23,353 |
Add support for SciBART by UCLANLP
|
{
"login": "xiaowu0162",
"id": 43978113,
"node_id": "MDQ6VXNlcjQzOTc4MTEz",
"avatar_url": "https://avatars.githubusercontent.com/u/43978113?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xiaowu0162",
"html_url": "https://github.com/xiaowu0162",
"followers_url": "https://api.github.com/users/xiaowu0162/followers",
"following_url": "https://api.github.com/users/xiaowu0162/following{/other_user}",
"gists_url": "https://api.github.com/users/xiaowu0162/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xiaowu0162/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xiaowu0162/subscriptions",
"organizations_url": "https://api.github.com/users/xiaowu0162/orgs",
"repos_url": "https://api.github.com/users/xiaowu0162/repos",
"events_url": "https://api.github.com/users/xiaowu0162/events{/privacy}",
"received_events_url": "https://api.github.com/users/xiaowu0162/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23353). All of your documentation changes will be reflected on that endpoint.",
"Hi @younesbelkada, this pr is ready for review.",
"Thank you for the comments! I have fixed the mentioned issues and they are ready for review. @amyeroberts @younesbelkada ",
"> Could you explain the main difference that are added in this tokenizer? I am not sure we have to add a new class for it, that's why I am asking\r\n\r\nYes. We are adding a new model pre-trained from scratch on the science corpus. This model has exactly the same architecture as BART but with a different vocabulary. The new tokenizer class is needed because our tokenizer is trained with sentencepiece, while the facebook BART model does not use sentencepiece. Does this solve your concerns? @ArthurZucker ",
"Sure, but we have bunch of already implemented tokenizers that rely on `spm`, with slow to fast converters. Look at the `BarthezTokenizer` for example, code seems kinda duplicate. `XGLM` looks also very similar, same for `XLM_Roberta`. ",
"> Sure, but we have bunch of already implemented tokenizers that rely on `spm`, with slow to fast converters. Look at the `BarthezTokenizer` for example, code seems kinda duplicate. `XGLM` looks also very similar, same for `XLM_Roberta`.\r\n\r\n@ArthurZucker Thank you for the reply. `BarthezTokenizer` is indeed similar (so is `XGLMTokenizer`). However, I believe it has some different assumptions and cannot be directly used by SciBART. For example, the default tokens 0, 1, 2, 3 are different (https://github.com/huggingface/transformers/blob/b7b729b38d12309185bcc9fdf8b55418a1ad2421/src/transformers/models/barthez/tokenization_barthez.py#L160). Letting SciBartTokenizer inherit from it also does not seem to make sense because it breaks the modularity. \r\n\r\nWhat are the actionable items here? Our priority is to enable users to use the SciBART model.",
"Sorry for being noisy ๐
If you are going to make the model available on the hub, and the only differences are the default tokens, I would simply recommend you to have the tokenizer on the hub. We can't accept a new model which only has this one line that differs. An other solution is to open a PR to allow this default `fairseq_token_ids` to be an argument of the init, which would allow you to store it in the tokenizer config to easily use the BarthezTokenizer! \r\nAnother way is to hold your code on the hub! ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,684 | 1,688 | 1,688 |
NONE
| null |
# What does this PR do?
Add the support for the SciBART model (https://arxiv.org/abs/2212.10233). This is a BART model trained from scratch on the S2ORC corpus. Its tokenizer is a sentencepiece tokenizer trained from scratch. This PR supports using the newly trained tokenizer. The model checkpoints are already uploaded to https://huggingface.co/uclanlp/scibart-base and https://huggingface.co/uclanlp/scibart-large.
Implementation-wise, this PR refers to #1839.
## Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@ArthurZucker and @younesbelkada
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23353/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23353/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23353",
"html_url": "https://github.com/huggingface/transformers/pull/23353",
"diff_url": "https://github.com/huggingface/transformers/pull/23353.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23353.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/23352
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23352/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23352/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23352/events
|
https://github.com/huggingface/transformers/issues/23352
| 1,708,714,581 |
I_kwDOCUB6oc5l2OpV
| 23,352 |
Transformers can not load dependency of tensorflow - No module named 'keras.engine'
|
{
"login": "dcdieci",
"id": 634589,
"node_id": "MDQ6VXNlcjYzNDU4OQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/634589?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dcdieci",
"html_url": "https://github.com/dcdieci",
"followers_url": "https://api.github.com/users/dcdieci/followers",
"following_url": "https://api.github.com/users/dcdieci/following{/other_user}",
"gists_url": "https://api.github.com/users/dcdieci/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dcdieci/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dcdieci/subscriptions",
"organizations_url": "https://api.github.com/users/dcdieci/orgs",
"repos_url": "https://api.github.com/users/dcdieci/repos",
"events_url": "https://api.github.com/users/dcdieci/events{/privacy}",
"received_events_url": "https://api.github.com/users/dcdieci/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @dcdieci, this issue is the result of some namespace moves inside TensorFlow which occurred because Keras was partly decoupled from TensorFlow and moved to its own repository. If you look at [our codebase](https://github.com/huggingface/transformers/blob/main/src/transformers/modeling_tf_utils.py#L68-L77), you can see that we import these functions from `keras` for TF versions >= 2.11 and from `tensorflow.python.keras` below this. It seems like in your case you're using a newer version of TensorFlow, but you're missing a modern version of the standalone Keras library (which should be installed as a dependency of TensorFlow). Can you try `pip install --upgrade keras` to see if that resolves the issue?",
"@Rocketknight1 many thanks for getting back to me that fast (considering the amount of issues, unbelievable).\r\nI changed to pytorch and everything is working fine. I am very new to the AI space and I was wondering if it does make sense to use tensorflow then at all, since pytorch is running?\r\n",
"Both PyTorch and TensorFlow do basically the same thing (they're a framework for linear algebra + acceleration on GPU and TPU) - we support both, but you only need one! If PyTorch is easier to get working on your system then it's totally fine to just use it instead.",
"I ran into this too. I can't speak to the specific issue, but it's related to the latest pre-release of tensorflow (https://github.com/tensorflow/tensorflow/releases/tag/v2.13.0-rc0), which is installed when you do a `pip install tensorflow` (odd that a pre-release get's installed, but that's another story). There is some keras related breaking changes in that release. In any case, I was able to get around this by building+installing tensorflow 2.12.0 from source.",
"Thanks for the heads-up - we'll do some testing with TF 2.13 before it releases!",
"There are upcoming breaking changes to keras. Please see\r\nhttps://github.com/keras-team/tf-keras/issues/196\r\nAlso see the release notes here https://github.com/tensorflow/tensorflow/releases/tag/v2.13.0-rc0 particularly the part in 'Breaking Changes' that talks about restricting access so that only public symbols are accessible.\r\nThis will need updates to transformers to resolve I think.",
"I've opened a PR at #23663 that should cover this issue as well as future-proof against other changes. My limited testing with 2.13rc0 on my local machine looked good, but if you get the chance please try it out with `pip install --upgrade git+https://github.com/huggingface/transformers.git@tf_future_proofing`\r\n\r\ncc @dcdieci @elfringham @sanderpick",
"This has now been merged - if anyone else is having compatibility issues with `transformers` and TensorFlow 2.13 and finds this issue, please install transformers from `main` with `pip install --upgrade git+https://github.com/huggingface/transformers.git`. Once we release 2.30 (probably end of May / early June) you can go back to just `pip install --upgrade transformers`.\r\n\r\nIf anyone is still encountering this problem after installing the latest version, please reply or reopen this issue and let us know!",
"Hello! I think I'm still running into this issue. \r\n* python 3.8.16\r\n* tensorflow 2.13.0rc1\r\n* transformers from main\r\n\r\nI'm on an M2 rather than an M1, and maybe I should try downgrading TF to 2.11? _Edit: After looking into this, I don't think I can actually downgrade, so I'm stuck on the current tensorflow. I was able to get code running with pytorch though!_",
"Hi! This regression was caused by a PR we merged yesterday and should be fixed as of about an hour ago. Please install the latest version from `main` and try again. Thanks again to @frostming for spotting that one so quickly!",
"We will also be making a proper release of version 4.30 later this week that should correctly support TF 2.13, so hopefully after that everyone can just `pip install --upgrade transformers` and stop installing from `main`.",
"Thank you so much! ",
"After re-running the installation (from commit 12298cb65c7e9d615b749dde935a0b4966f4ae49) it still fails on my end, but github also seems to be having problems, so maybe that commit is behind.",
"@coolhannes Can you paste the error message you're getting?",
"Yep! Sorry. (Also you may see some references to pytorch, I switched to that in the meantime but this is the error I get from TF).\r\n\r\nRunning\r\n```\r\nfrom transformers import TFAutoModelForSequenceClassification, AutoTokenizer\r\nmodel_name = \"distilbert-base-uncased-finetuned-sst-2-english\"\r\nmodel = TFAutoModelForSequenceClassification.from_pretrained(model_name)\r\ntokenizer = AutoTokenizer.from_pretrained(model_name)\r\n```\r\n\r\nand getting: \r\n\r\n```\r\n---------------------------------------------------------------------------\r\nModuleNotFoundError Traceback (most recent call last)\r\nFile ~/.pyenv/versions/3.8.16/envs/opt-outs/lib/python3.8/site-packages/transformers/utils/import_utils.py:1084, in _LazyModule._get_module(self, module_name)\r\n 1083 try:\r\n-> 1084 return importlib.import_module(\".\" + module_name, self.__name__)\r\n 1085 except Exception as e:\r\n\r\nFile ~/.pyenv/versions/3.8.16/lib/python3.8/importlib/__init__.py:127, in import_module(name, package)\r\n 126 level += 1\r\n--> 127 return _bootstrap._gcd_import(name[level:], package, level)\r\n\r\nFile <frozen importlib._bootstrap>:1014, in _gcd_import(name, package, level)\r\n\r\nFile <frozen importlib._bootstrap>:991, in _find_and_load(name, import_)\r\n\r\nFile <frozen importlib._bootstrap>:975, in _find_and_load_unlocked(name, import_)\r\n\r\nFile <frozen importlib._bootstrap>:671, in _load_unlocked(spec)\r\n\r\nFile <frozen importlib._bootstrap_external>:843, in exec_module(self, module)\r\n\r\nFile <frozen importlib._bootstrap>:219, in _call_with_frames_removed(f, *args, **kwds)\r\n\r\nFile ~/.pyenv/versions/3.8.16/envs/opt-outs/lib/python3.8/site-packages/transformers/models/distilbert/modeling_tf_distilbert.py:37\r\n 29 from ...modeling_tf_outputs import (\r\n 30 TFBaseModelOutput,\r\n 31 TFMaskedLMOutput,\r\n (...)\r\n 35 TFTokenClassifierOutput,\r\n 36 )\r\n---> 37 from ...modeling_tf_utils import (\r\n 38 TFMaskedLanguageModelingLoss,\r\n 39 TFModelInputType,\r\n 40 TFMultipleChoiceLoss,\r\n 41 TFPreTrainedModel,\r\n 42 TFQuestionAnsweringLoss,\r\n 43 TFSequenceClassificationLoss,\r\n 44 TFTokenClassificationLoss,\r\n 45 get_initializer,\r\n 46 keras_serializable,\r\n 47 unpack_inputs,\r\n 48 )\r\n 49 from ...tf_utils import check_embeddings_within_bounds, shape_list, stable_softmax\r\n\r\nFile ~/.pyenv/versions/3.8.16/envs/opt-outs/lib/python3.8/site-packages/transformers/modeling_tf_utils.py:77\r\n 76 from keras.__internal__ import KerasTensor\r\n---> 77 from keras.engine.base_layer_utils import call_context\r\n 78 elif parse(tf.__version__).minor >= 11:\r\n\r\nModuleNotFoundError: No module named 'keras.engine'\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nRuntimeError Traceback (most recent call last)\r\nCell In[4], line 7\r\n 4 from torch.nn.parallel import DataParallel\r\n 6 model_name = \"distilbert-base-uncased-finetuned-sst-2-english\"\r\n----> 7 model = TFAutoModelForSequenceClassification.from_pretrained(model_name)\r\n 8 tokenizer = AutoTokenizer.from_pretrained(model_name)\r\n\r\nFile ~/.pyenv/versions/3.8.16/envs/opt-outs/lib/python3.8/site-packages/transformers/models/auto/auto_factory.py:483, in _BaseAutoModelClass.from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)\r\n 479 return model_class.from_pretrained(\r\n 480 pretrained_model_name_or_path, *model_args, config=config, **hub_kwargs, **kwargs\r\n 481 )\r\n 482 elif type(config) in cls._model_mapping.keys():\r\n--> 483 model_class = _get_model_class(config, 
cls._model_mapping)\r\n 484 return model_class.from_pretrained(\r\n 485 pretrained_model_name_or_path, *model_args, config=config, **hub_kwargs, **kwargs\r\n 486 )\r\n 487 raise ValueError(\r\n 488 f\"Unrecognized configuration class {config.__class__} for this kind of AutoModel: {cls.__name__}.\\n\"\r\n 489 f\"Model type should be one of {', '.join(c.__name__ for c in cls._model_mapping.keys())}.\"\r\n 490 )\r\n\r\nFile ~/.pyenv/versions/3.8.16/envs/opt-outs/lib/python3.8/site-packages/transformers/models/auto/auto_factory.py:375, in _get_model_class(config, model_mapping)\r\n 374 def _get_model_class(config, model_mapping):\r\n--> 375 supported_models = model_mapping[type(config)]\r\n 376 if not isinstance(supported_models, (list, tuple)):\r\n 377 return supported_models\r\n\r\nFile ~/.pyenv/versions/3.8.16/envs/opt-outs/lib/python3.8/site-packages/transformers/models/auto/auto_factory.py:657, in _LazyAutoMapping.__getitem__(self, key)\r\n 655 if model_type in self._model_mapping:\r\n 656 model_name = self._model_mapping[model_type]\r\n--> 657 return self._load_attr_from_module(model_type, model_name)\r\n 659 # Maybe there was several model types associated with this config.\r\n 660 model_types = [k for k, v in self._config_mapping.items() if v == key.__name__]\r\n\r\nFile ~/.pyenv/versions/3.8.16/envs/opt-outs/lib/python3.8/site-packages/transformers/models/auto/auto_factory.py:671, in _LazyAutoMapping._load_attr_from_module(self, model_type, attr)\r\n 669 if module_name not in self._modules:\r\n 670 self._modules[module_name] = importlib.import_module(f\".{module_name}\", \"transformers.models\")\r\n--> 671 return getattribute_from_module(self._modules[module_name], attr)\r\n\r\nFile ~/.pyenv/versions/3.8.16/envs/opt-outs/lib/python3.8/site-packages/transformers/models/auto/auto_factory.py:616, in getattribute_from_module(module, attr)\r\n 614 if isinstance(attr, tuple):\r\n 615 return tuple(getattribute_from_module(module, a) for a in attr)\r\n--> 616 if hasattr(module, attr):\r\n 617 return getattr(module, attr)\r\n 618 # Some of the mappings have entries model_type -> object of another model type. In that case we try to grab the\r\n 619 # object at the top level.\r\n\r\nFile ~/.pyenv/versions/3.8.16/envs/opt-outs/lib/python3.8/site-packages/transformers/utils/import_utils.py:1074, in _LazyModule.__getattr__(self, name)\r\n 1072 value = self._get_module(name)\r\n 1073 elif name in self._class_to_module.keys():\r\n-> 1074 module = self._get_module(self._class_to_module[name])\r\n 1075 value = getattr(module, name)\r\n 1076 else:\r\n\r\nFile ~/.pyenv/versions/3.8.16/envs/opt-outs/lib/python3.8/site-packages/transformers/utils/import_utils.py:1086, in _LazyModule._get_module(self, module_name)\r\n 1084 return importlib.import_module(\".\" + module_name, self.__name__)\r\n 1085 except Exception as e:\r\n-> 1086 raise RuntimeError(\r\n 1087 f\"Failed to import {self.__name__}.{module_name} because of the following error (look up to see its\"\r\n 1088 f\" traceback):\\n{e}\"\r\n 1089 ) from e\r\n\r\nRuntimeError: Failed to import transformers.models.distilbert.modeling_tf_distilbert because of the following error (look up to see its traceback):\r\nNo module named 'keras.engine'\r\n```",
"Hi @coolhannes, that error refers to code from an older commit - it looks like you might not have upgraded to the most recent version on `main`! Try `pip install --upgrade https://github.com/huggingface/transformers.git` and the error should go away.",
"Oh my god, I think I was installing this instead of uninstalling/upgrading -- but this is finally working, thank you @Rocketknight1!",
"Hi everyone, I resolved these issues using the following package versions (including the full list for anyone also using these additional packages, but tensorflow, keras, datasets, transformers are most important):\r\n\r\n```\r\nawscli==1.29.41\r\nboto3==1.28.41\r\nbotocore==1.31.41\r\ndatasets==2.10.1\r\ndocker==5.0.3 \r\nFlask==2.0.3\r\ngunicorn==20.1.0\r\nh5py==3.7.0\r\nkeras==2.12.0\r\nnumpy==1.23.5\r\nprotobuf==4.23.4\r\npandas==1.4.4\r\nprobablepeople==0.5.4\r\npsutil==5.8.0\r\npsycopg2-binary==2.8.6\r\ns3fs==0.4.2\r\nsagemaker==2.183.0\r\nscikit-learn==1.1\r\nscipy==1.8.1\r\nshap==0.39.0\r\ntensorflow==2.12.0\r\ntransformers==4.27.3\r\nblack[jupyter]==21.12b0\r\nrequests==2.28.1\r\nnest-asyncio==1.5.5\r\nipykernel==6.14\r\n```",
"> Hi everyone, I resolved these issues using the following package versions (including the full list for anyone also using these additional packages, but tensorflow, keras, datasets, transformers are most important):\r\n> \r\n> ```\r\n> awscli==1.29.41\r\n> boto3==1.28.41\r\n> botocore==1.31.41\r\n> datasets==2.10.1\r\n> docker==5.0.3 \r\n> Flask==2.0.3\r\n> gunicorn==20.1.0\r\n> h5py==3.7.0\r\n> keras==2.12.0\r\n> numpy==1.23.5\r\n> protobuf==4.23.4\r\n> pandas==1.4.4\r\n> probablepeople==0.5.4\r\n> psutil==5.8.0\r\n> psycopg2-binary==2.8.6\r\n> s3fs==0.4.2\r\n> sagemaker==2.183.0\r\n> scikit-learn==1.1\r\n> scipy==1.8.1\r\n> shap==0.39.0\r\n> tensorflow==2.12.0\r\n> transformers==4.27.3\r\n> black[jupyter]==21.12b0\r\n> requests==2.28.1\r\n> nest-asyncio==1.5.5\r\n> ipykernel==6.14\r\n> ```\r\n\r\nthe key point is tensorflow==2.12.0",
"Hey all - if you update transformers to the latest version, it should work fine with the latest versions of TF and Keras as well - you shouldn't need to pin those!",
"transformers==4.34.1\r\ntensorflow==2.14.0\r\n\r\nHi, I have upgraded my transformers and tensorflow but I'm still facing the error. :/ any kind suggestions on how to proceed?\r\n\r\n```\r\nRuntimeError: Failed to import transformers.models.gpt2.modeling_tf_gpt2 because of the following error (look up to see its traceback):\r\nNo module named 'keras.engine'\r\n```\r\n",
"@yuseow It works fine for me with those versions. You say you upgraded transformers and tensorflow and that makes me wonder if maybe some dependent package was not upgraded at the same time or there is some other problem with the install. Please try a fresh virtualenv, upgrade pip and setuptools, then install tensorflow and transformers and see how that works.\r\n```\r\n$ python -m pip list -v\r\nPackage Version Location Installer\r\n---------------------------- --------- -------------------------------------------------- ---------\r\nabsl-py 2.0.0 /home/andrew/test_venv/lib/python3.9/site-packages pip\r\nastunparse 1.6.3 /home/andrew/test_venv/lib/python3.9/site-packages pip\r\ncachetools 5.3.1 /home/andrew/test_venv/lib/python3.9/site-packages pip\r\ncertifi 2023.7.22 /home/andrew/test_venv/lib/python3.9/site-packages pip\r\ncharset-normalizer 3.3.1 /home/andrew/test_venv/lib/python3.9/site-packages pip\r\nfilelock 3.12.4 /home/andrew/test_venv/lib/python3.9/site-packages pip\r\nflatbuffers 23.5.26 /home/andrew/test_venv/lib/python3.9/site-packages pip\r\nfsspec 2023.10.0 /home/andrew/test_venv/lib/python3.9/site-packages pip\r\ngast 0.5.4 /home/andrew/test_venv/lib/python3.9/site-packages pip\r\ngoogle-auth 2.23.3 /home/andrew/test_venv/lib/python3.9/site-packages pip\r\ngoogle-auth-oauthlib 1.0.0 /home/andrew/test_venv/lib/python3.9/site-packages pip\r\ngoogle-pasta 0.2.0 /home/andrew/test_venv/lib/python3.9/site-packages pip\r\ngrpcio 1.59.0 /home/andrew/test_venv/lib/python3.9/site-packages pip\r\nh5py 3.10.0 /home/andrew/test_venv/lib/python3.9/site-packages pip\r\nhuggingface-hub 0.17.3 /home/andrew/test_venv/lib/python3.9/site-packages pip\r\nidna 3.4 /home/andrew/test_venv/lib/python3.9/site-packages pip\r\nimportlib-metadata 6.8.0 /home/andrew/test_venv/lib/python3.9/site-packages pip\r\nkeras 2.14.0 /home/andrew/test_venv/lib/python3.9/site-packages pip\r\nlibclang 16.0.6 /home/andrew/test_venv/lib/python3.9/site-packages pip\r\nMarkdown 3.5 /home/andrew/test_venv/lib/python3.9/site-packages pip\r\nMarkupSafe 2.1.3 /home/andrew/test_venv/lib/python3.9/site-packages pip\r\nml-dtypes 0.2.0 /home/andrew/test_venv/lib/python3.9/site-packages pip\r\nnumpy 1.26.1 /home/andrew/test_venv/lib/python3.9/site-packages pip\r\noauthlib 3.2.2 /home/andrew/test_venv/lib/python3.9/site-packages pip\r\nopt-einsum 3.3.0 /home/andrew/test_venv/lib/python3.9/site-packages pip\r\npackaging 23.2 /home/andrew/test_venv/lib/python3.9/site-packages pip\r\npip 23.3.1 /home/andrew/test_venv/lib/python3.9/site-packages pip\r\nprotobuf 4.24.4 /home/andrew/test_venv/lib/python3.9/site-packages pip\r\npyasn1 0.5.0 /home/andrew/test_venv/lib/python3.9/site-packages pip\r\npyasn1-modules 0.3.0 /home/andrew/test_venv/lib/python3.9/site-packages pip\r\nPyYAML 6.0.1 /home/andrew/test_venv/lib/python3.9/site-packages pip\r\nregex 2023.10.3 /home/andrew/test_venv/lib/python3.9/site-packages pip\r\nrequests 2.31.0 /home/andrew/test_venv/lib/python3.9/site-packages pip\r\nrequests-oauthlib 1.3.1 /home/andrew/test_venv/lib/python3.9/site-packages pip\r\nrsa 4.9 /home/andrew/test_venv/lib/python3.9/site-packages pip\r\nsafetensors 0.4.0 /home/andrew/test_venv/lib/python3.9/site-packages pip\r\nsetuptools 68.2.2 /home/andrew/test_venv/lib/python3.9/site-packages pip\r\nsix 1.16.0 /home/andrew/test_venv/lib/python3.9/site-packages pip\r\ntensorboard 2.14.1 /home/andrew/test_venv/lib/python3.9/site-packages pip\r\ntensorboard-data-server 0.7.2 /home/andrew/test_venv/lib/python3.9/site-packages pip\r\ntensorflow 2.14.0 
/home/andrew/test_venv/lib/python3.9/site-packages pip\r\ntensorflow-cpu-aws 2.14.0 /home/andrew/test_venv/lib/python3.9/site-packages pip\r\ntensorflow-estimator 2.14.0 /home/andrew/test_venv/lib/python3.9/site-packages pip\r\ntensorflow-io-gcs-filesystem 0.34.0 /home/andrew/test_venv/lib/python3.9/site-packages pip\r\ntermcolor 2.3.0 /home/andrew/test_venv/lib/python3.9/site-packages pip\r\ntokenizers 0.14.1 /home/andrew/test_venv/lib/python3.9/site-packages pip\r\ntqdm 4.66.1 /home/andrew/test_venv/lib/python3.9/site-packages pip\r\ntransformers 4.34.1 /home/andrew/test_venv/lib/python3.9/site-packages pip\r\ntyping_extensions 4.8.0 /home/andrew/test_venv/lib/python3.9/site-packages pip\r\nurllib3 2.0.7 /home/andrew/test_venv/lib/python3.9/site-packages pip\r\nWerkzeug 3.0.0 /home/andrew/test_venv/lib/python3.9/site-packages pip\r\nwheel 0.41.2 /home/andrew/test_venv/lib/python3.9/site-packages pip\r\nwrapt 1.14.1 /home/andrew/test_venv/lib/python3.9/site-packages pip\r\nzipp 3.17.0 /home/andrew/test_venv/lib/python3.9/site-packages pip\r\n```"
] | 1,684 | 1,698 | 1,684 |
NONE
| null |
### System Info
macOS (Apple Silicon M1)
- python 3.8.16 (also tested with newer versions e.g. 3.9)
- tensorflow 2.11.0 eigen_py39h384437f_0
(also tested with tensorflow 2.13 rc0)
tried conda and venv.
- transformers 4.28.1
also tested 4.29.1
```python
#sample code which causes the error below
from transformers import pipeline
summarizer = pipeline("summarization")
```
```
No model was supplied, defaulted to t5-small and revision d769bba (https://huggingface.co/t5-small).
Using a pipeline without specifying a model name and revision in production is not recommended.
Traceback (most recent call last):
File "/opt/homebrew/Caskroom/miniconda/base/envs/lib/python3.9/site-packages/transformers/utils/import_utils.py", line 1146, in _get_module
return importlib.import_module("." + module_name, self.__name__)
File "/opt/homebrew/Caskroom/miniconda/base/envs/lib/python3.9/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1030, in _gcd_import
File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 680, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 850, in exec_module
File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
File "/opt/homebrew/Caskroom/miniconda/base/envs/lib/python3.9/site-packages/transformers/models/t5/modeling_tf_t5.py", line 35, in <module>
from ...modeling_tf_utils import (
File "/opt/homebrew/Caskroom/miniconda/base/envs/lib/python3.9/site-packages/transformers/modeling_tf_utils.py", line 69, in <module>
from keras.engine import data_adapter
ModuleNotFoundError: No module named 'keras.engine'
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "sumary_service.py", line 3, in <module>
summarizer = pipeline("summarization")
File "/opt/homebrew/Caskroom/miniconda/base/envs/lib/python3.9/site-packages/transformers/pipelines/__init__.py", line 779, in pipeline
framework, model = infer_framework_load_model(
File "/opt/homebrew/Caskroom/miniconda/base/envs/lib/python3.9/site-packages/transformers/pipelines/base.py", line 238, in infer_framework_load_model
_class = getattr(transformers_module, f"TF{architecture}", None)
File "/opt/homebrew/Caskroom/miniconda/base/envs/lib/python3.9/site-packages/transformers/utils/import_utils.py", line 1137, in __getattr__
value = getattr(module, name)
File "/opt/homebrew/Caskroom/miniconda/base/envs/lib/python3.9/site-packages/transformers/utils/import_utils.py", line 1136, in __getattr__
module = self._get_module(self._class_to_module[name])
File "/opt/homebrew/Caskroom/miniconda/base/envs/lib/python3.9/site-packages/transformers/utils/import_utils.py", line 1148, in _get_module
raise RuntimeError(
RuntimeError: Failed to import transformers.models.t5.modeling_tf_t5 because of the following error (look up to see its traceback):
No module named 'keras.engine'
```
### Who can help?
@gante and @Rocketknight1
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Reproduction
- follow install example from official docs (https://huggingface.co/docs/transformers/installation)
- run sample code from model page https://huggingface.co/facebook/bart-large-cnn
`**error does not occur when using pytorch**`
### Expected behavior
The transformers library does not raise an exception.
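For reference, a minimal hedged check of the TensorFlow / Keras pairing involved in this import error (assumes both packages are installed; the fix suggested in the comments is to keep their versions in sync or upgrade `transformers`):

```python
import tensorflow as tf
import keras

# The standalone `keras` package should match the installed TensorFlow minor
# version (e.g. TF 2.13 with Keras 2.13); a mismatch is a common cause of
# "No module named 'keras.engine'"-style errors.
print("tensorflow:", tf.__version__)
print("keras:", keras.__version__)
```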
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23352/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23352/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23351
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23351/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23351/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23351/events
|
https://github.com/huggingface/transformers/issues/23351
| 1,708,684,829 |
I_kwDOCUB6oc5l2HYd
| 23,351 |
Document what layerdrop does
|
{
"login": "RobertBaruch",
"id": 1783950,
"node_id": "MDQ6VXNlcjE3ODM5NTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/1783950?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RobertBaruch",
"html_url": "https://github.com/RobertBaruch",
"followers_url": "https://api.github.com/users/RobertBaruch/followers",
"following_url": "https://api.github.com/users/RobertBaruch/following{/other_user}",
"gists_url": "https://api.github.com/users/RobertBaruch/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RobertBaruch/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RobertBaruch/subscriptions",
"organizations_url": "https://api.github.com/users/RobertBaruch/orgs",
"repos_url": "https://api.github.com/users/RobertBaruch/repos",
"events_url": "https://api.github.com/users/RobertBaruch/events{/privacy}",
"received_events_url": "https://api.github.com/users/RobertBaruch/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @sanchit-gandhi ",
"Hey @RobertBaruch - good catch! It's indeed missing from the Wav2Vec2 config docstring. Would you like to open a PR to add this info? Easiest would be to copy the details from one of the existing configs where the info is present, e.g. OPT:\r\nhttps://github.com/huggingface/transformers/blob/130e15429116689c9d747be2cdd8c4be7bb7e2bd/src/transformers/models/opt/configuration_opt.py#L70",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Closed via https://github.com/huggingface/transformers/pull/23691"
] | 1,683 | 1,686 | 1,686 |
CONTRIBUTOR
| null |
### System Info
- `transformers` version: 4.28.1
- Platform: Windows-10-10.0.22621-SP0
- Python version: 3.11.2
- Huggingface_hub version: 0.14.1
- Safetensors version: not installed
- PyTorch version (GPU?): 2.0.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: NO
- Using distributed or parallel set-up in script?: NO
### Who can help?
@sgugger
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
In `transformers/models/wav2vec2/configuration_wav2vec2.py` there is a parameter `layerdrop` in `__init__` which is not documented. This parameter is set (and overrides the default `0.1`) in the examples at `examples/pytorch/speech-recognition/README.md`, so it seems to be important.
### Expected behavior
Document `layerdrop`.
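For context, a minimal hedged sketch of how the parameter is used (semantics follow the [LayerDrop paper](https://arxiv.org/abs/1909.11556) and should be double-checked against the Wav2Vec2 code):

```python
from transformers import Wav2Vec2Config, Wav2Vec2ForCTC

# `layerdrop` is the probability of skipping an entire encoder layer during
# training (0.0 disables it); at inference time all layers are used.
config = Wav2Vec2Config(layerdrop=0.1)
model = Wav2Vec2ForCTC(config)
print(config.layerdrop)
```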
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23351/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23351/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23350
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23350/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23350/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23350/events
|
https://github.com/huggingface/transformers/issues/23350
| 1,708,680,554 |
I_kwDOCUB6oc5l2GVq
| 23,350 |
Encoder-Decoder: OPT as a decoder
|
{
"login": "BramVanroy",
"id": 2779410,
"node_id": "MDQ6VXNlcjI3Nzk0MTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BramVanroy",
"html_url": "https://github.com/BramVanroy",
"followers_url": "https://api.github.com/users/BramVanroy/followers",
"following_url": "https://api.github.com/users/BramVanroy/following{/other_user}",
"gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions",
"organizations_url": "https://api.github.com/users/BramVanroy/orgs",
"repos_url": "https://api.github.com/users/BramVanroy/repos",
"events_url": "https://api.github.com/users/BramVanroy/events{/privacy}",
"received_events_url": "https://api.github.com/users/BramVanroy/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hey @BramVanroy ๐ \r\n\r\nI believe most, if not all, recent decoder-only models are not compatible with `EncoderDecoderModel`, as they are missing a [block like this one in GPT2](https://github.com/huggingface/transformers/blob/main/src/transformers/models/gpt2/modeling_gpt2.py#L404) (plus related changes, like making `encoder_hidden_states` an argument).",
"Thanks for the reply @gante! I indeed had found this difference in the code. Do you know whether there are any plans to make more decoders compatible? ",
"@BramVanroy not on our end, since Decoder-only models have been stealing the spotlight!\r\n\r\nWe'd be happy to merge the appropriate changes, though",
"Okay, that makes sense. Different priorities! Thanks for the reply Joรฃo."
] | 1,683 | 1,684 | 1,684 |
COLLABORATOR
| null |
### System Info
- `transformers` version: 4.29.1
- Platform: Linux-5.14.0-162.6.1.el9_1.0.1.x86_64-x86_64-with-glibc2.34
- Python version: 3.10.10
- Huggingface_hub version: 0.14.1
- Safetensors version: not installed
- PyTorch version (GPU?): 2.0.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
### Who can help?
@ArthurZucker and @younesbelkada, and maybe also @gante for generation in enc/dec scenarios
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
import torch
from transformers import EncoderDecoderModel, OPTConfig, MT5Config, MT5Model, OPTForCausalLM, AutoTokenizer
def init_enc_dec(enc_model_name: str = "google/mt5-small", dec_model_name: str = "facebook/opt-350m"):
    config_encoder = MT5Config.from_pretrained(enc_model_name)
    config_encoder.is_encoder_decoder = False
    config_encoder.add_cross_attention = False
    config_encoder.is_decoder = False
    config_encoder.num_decoder_layers = 0

    config_decoder = OPTConfig.from_pretrained(dec_model_name)
    config_decoder.add_cross_attention = True
    config_decoder.is_decoder = True

    encoder = MT5Model.from_pretrained(enc_model_name, config=config_encoder).get_encoder()
    decoder = OPTForCausalLM.from_pretrained(dec_model_name, config=config_decoder)
    model = EncoderDecoderModel(encoder=encoder, decoder=decoder)
    return model


def main():
    model = init_enc_dec()
    model.eval()
    enc_tokenizer = AutoTokenizer.from_pretrained("google/mt5-small")
    dec_tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m")

    with torch.no_grad():
        inputs = enc_tokenizer("I like bananas", return_tensors="pt")
        outputs = model.generate(**inputs)
        print(dec_tokenizer.batch_decode(**outputs))


if __name__ == '__main__':
    main()
```
This leads to
```
Traceback (most recent call last):
File "/home/local/vanroy/llm-generation/enc_dec.py", line 38, in <module>
main()
File "/home/local/vanroy/llm-generation/enc_dec.py", line 33, in main
outputs = model.generate(**inputs)
File "/home/local/vanroy/llm-generation/.venv/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/home/local/vanroy/llm-generation/.venv/lib/python3.10/site-packages/transformers/generation/utils.py", line 1515, in generate
return self.greedy_search(
File "/home/local/vanroy/llm-generation/.venv/lib/python3.10/site-packages/transformers/generation/utils.py", line 2332, in greedy_search
outputs = self(
File "/home/local/vanroy/llm-generation/.venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/home/local/vanroy/llm-generation/.venv/lib/python3.10/site-packages/transformers/models/encoder_decoder/modeling_encoder_decoder.py", line 617, in forward
decoder_outputs = self.decoder(
File "/home/local/vanroy/llm-generation/.venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
TypeError: OPTForCausalLM.forward() got an unexpected keyword argument 'encoder_hidden_states'
```
### Expected behavior
I am trying to use the encoder-decoder functionality but I am not sure whether I am doing something wrong, or whether OPT is simply not compatible with this architecture.
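For comparison, a minimal hedged sketch that swaps in a decoder which does implement cross-attention (GPT-2); the checkpoints are illustrative, and the freshly initialized cross-attention weights mean the untrained model will generate gibberish until fine-tuned:

```python
import torch
from transformers import AutoTokenizer, EncoderDecoderModel

# GPT-2 accepts `encoder_hidden_states`, so it can act as the decoder here;
# `from_encoder_decoder_pretrained` sets is_decoder / add_cross_attention for us.
model = EncoderDecoderModel.from_encoder_decoder_pretrained("bert-base-uncased", "gpt2")
enc_tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
dec_tokenizer = AutoTokenizer.from_pretrained("gpt2")

model.config.decoder_start_token_id = model.config.decoder.bos_token_id
model.config.pad_token_id = model.config.decoder.eos_token_id

with torch.no_grad():
    inputs = enc_tokenizer("I like bananas", return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=20)
print(dec_tokenizer.batch_decode(outputs, skip_special_tokens=True))
```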
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23350/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23350/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23349
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23349/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23349/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23349/events
|
https://github.com/huggingface/transformers/pull/23349
| 1,708,604,503 |
PR_kwDOCUB6oc5QcEHf
| 23,349 |
Add mPLUG-Owl
|
{
"login": "LukeForeverYoung",
"id": 16715989,
"node_id": "MDQ6VXNlcjE2NzE1OTg5",
"avatar_url": "https://avatars.githubusercontent.com/u/16715989?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LukeForeverYoung",
"html_url": "https://github.com/LukeForeverYoung",
"followers_url": "https://api.github.com/users/LukeForeverYoung/followers",
"following_url": "https://api.github.com/users/LukeForeverYoung/following{/other_user}",
"gists_url": "https://api.github.com/users/LukeForeverYoung/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LukeForeverYoung/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LukeForeverYoung/subscriptions",
"organizations_url": "https://api.github.com/users/LukeForeverYoung/orgs",
"repos_url": "https://api.github.com/users/LukeForeverYoung/repos",
"events_url": "https://api.github.com/users/LukeForeverYoung/events{/privacy}",
"received_events_url": "https://api.github.com/users/LukeForeverYoung/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23349). All of your documentation changes will be reflected on that endpoint.",
"> Thank you very much for this PR and adding this great model! Let us know when this is ready for review, I can see that a lot of CI tests are failing, you should resolve them by removing the `from .xxxx import *` statements in the init file of `mplug_owl` folder\r\n\r\nI added some commits, but I noticed that there are still some failed tests. Among them, tests_torch reports an error \"worker 'gw1' crashed while running 'tests/models/mplug_owl/test_modeling_mplug_owl.py::MplugOwlModelTest::test_forward_signature'\", but I can pass this test in my local environment. I would like to seek some advice. \r\nIn addition, for other tests such as tests_flax, is it necessary for me to pass them?\r\n",
"> I added some commits, but I noticed that there are still some failed tests. Among them, tests_torch reports an error \"worker 'gw1' crashed while running 'tests/models/mplug_owl/test_modeling_mplug_owl.py::MplugOwlModelTest::test_forward_signature'\", but I can pass this test in my local environment. I would like to seek some advice. \r\n\r\nThe job ran out of the available RAM as seen in the picture below.\r\n<img width=\"1052\" alt=\"Screenshot 2023-05-19 064910\" src=\"https://github.com/huggingface/transformers/assets/2521628/04049d9b-84d0-4cfd-bba4-6f82a6081628\">\r\nNote that the CI launched 3 pytest process instead of a single one, so it will use more memory. Also, our CI runner has only 16GB RAM, which might be different from you hardware.\r\n\r\nThe crash here means you might use (some) large values in the test file to create the models that are used for testing.\r\n\r\n> In addition, for other tests such as tests_flax, is it necessary for me to pass them?\r\n\r\nDepending on what kinds of failure. If it is something like import error, yes, we expect the contributor to fix and pass the CI :-). If it is something like Hub Connection error, it's fine, we can leave it.\r\n\r\n",
"> > I added some commits, but I noticed that there are still some failed tests. Among them, tests_torch reports an error \"worker 'gw1' crashed while running 'tests/models/mplug_owl/test_modeling_mplug_owl.py::MplugOwlModelTest::test_forward_signature'\", but I can pass this test in my local environment. I would like to seek some advice.\r\n> \r\n> The job ran out of the available RAM as seen in the picture below. <img alt=\"Screenshot 2023-05-19 064910\" width=\"1052\" src=\"https://user-images.githubusercontent.com/2521628/239440445-04049d9b-84d0-4cfd-bba4-6f82a6081628.png\"> Note that the CI launched 3 pytest process instead of a single one, so it will use more memory. Also, our CI runner has only 16GB RAM, which might be different from you hardware.\r\n> \r\n> The crash here means you might use (some) large values in the test file to create the models that are used for testing.\r\n> \r\n> > In addition, for other tests such as tests_flax, is it necessary for me to pass them?\r\n> \r\n> Depending on what kinds of failure. If it is something like import error, yes, we expect the contributor to fix and pass the CI :-). If it is something like Hub Connection error, it's fine, we can leave it.\r\n\r\nThank you for your help, now all the checks have passed.",
"Awesome @LukeForeverYoung !\r\nIs the PR ready for a first review?",
"> Awesome @LukeForeverYoung !\n> Is the PR ready for a first review?\n\nYes",
"> Hi @LukeForeverYoung Let us know if you need any help finishing up the PR and if you have more questions!\r\n\r\nSorry for not being able to reply recently. Our team has been swamped with work, which causing that the process of integrating mPLUG-Owl into transformers may be paused for a long time. Thank you very much for your review. It would be great if anyone is willing to take over.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,683 | 1,692 | 1,692 |
NONE
| null |
# What does this PR do?
This PR adds the mPLUG-Owl model from [X-PLUG/mPLUG-Owl](https://github.com/X-PLUG/mPLUG-Owl), a multi-modal large language model that outperforms LLaVA and MiniGPT-4.
Here is some code that shows how to play with it:
```Python
from transformers import MplugOwlForConditionalGeneration, MplugOwlProcessor
from PIL import Image
import requests
import torch
model = MplugOwlForConditionalGeneration.from_pretrained("MAGAer13/mplug-owl-llama-7b")
processor = MplugOwlProcessor.from_pretrained("MAGAer13/mplug-owl-llama-7b")
prompts = [
'''The following is a conversation between a curious human and AI assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.
Human: <image>
Human: how many cats are there?
AI: ''']
image_list = ['http://images.cocodataset.org/val2017/000000039769.jpg']
images = [Image.open(requests.get(_, stream=True).raw) for _ in image_list]
inputs = processor(prompts, images, return_tensors='pt')
inputs = inputs.to('cuda')
model = model.to('cuda').half()
res = model.generate(**inputs, max_length=512, num_beams=1)
print(processor.decode(True,token_ids=res.tolist()[0]))
```
<!-- Remove if not applicable -->
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23349/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23349/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23349",
"html_url": "https://github.com/huggingface/transformers/pull/23349",
"diff_url": "https://github.com/huggingface/transformers/pull/23349.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23349.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/23348
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23348/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23348/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23348/events
|
https://github.com/huggingface/transformers/pull/23348
| 1,708,579,337 |
PR_kwDOCUB6oc5Qb_WW
| 23,348 |
Add support for GIT model in VQA pipelines
|
{
"login": "marechaux",
"id": 7255060,
"node_id": "MDQ6VXNlcjcyNTUwNjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/7255060?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/marechaux",
"html_url": "https://github.com/marechaux",
"followers_url": "https://api.github.com/users/marechaux/followers",
"following_url": "https://api.github.com/users/marechaux/following{/other_user}",
"gists_url": "https://api.github.com/users/marechaux/gists{/gist_id}",
"starred_url": "https://api.github.com/users/marechaux/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/marechaux/subscriptions",
"organizations_url": "https://api.github.com/users/marechaux/orgs",
"repos_url": "https://api.github.com/users/marechaux/repos",
"events_url": "https://api.github.com/users/marechaux/events{/privacy}",
"received_events_url": "https://api.github.com/users/marechaux/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23348). All of your documentation changes will be reflected on that endpoint.",
"I have two remarks here : \r\n* The CI is red due to this change : https://github.com/huggingface/transformers/pull/23348/files#diff-b452cc4f093e4b991e92054cf7d504edab44be07c4957969df85a5562238313cR48 the model now used in test is `hf-internal-testing/tiny-random-ViltForQuestionAnswering` instead of `hf-internal-testing/tiny-vilt-random-vqa` (and it seems like the new model image processor can't process images, I am not sure about how to fix it)\r\n* The test for GIT in VQA pipeline doesn't run because `hf-internal-testing/tiny-random-GitForVisualQuestionAnswering` doesn't exist, I need help about this point as well ",
"Thanks for the review, I updated my changes following your comments. However, I have several doubt on my approach. \r\n\r\n### Beam search for scores \r\nI use beam scores to provide a score to follow the \"signature\" of the pipeline described [here](https://github.com/huggingface/transformers/blob/c3c9b03d55f2d8094a2ac058db566d469baa8bbd/src/transformers/pipelines/visual_question_answering.py#L106-L110). Is it a correct ? \r\n\r\nThe beam search is so slow that it makes the pipeline test [timeout](https://app.circleci.com/pipelines/github/huggingface/transformers/65019/workflows/be9d4fde-5cbb-4db0-ba85-0110d8988953/jobs/806494), it runs locally but in more than 120s, which make me think I'm not in the right way here\r\n\r\n### Tokenizer padding\r\nAlso when I use the pipeline with `microsoft/git-base-textvqa` in batch mode, I have this warning : \r\n```\r\nA decoder-only architecture is being used, but right-padding was detected! For correct generation results, please set `padding_side='left'` when initializing the tokenizer.\r\n```\r\nThis warning is legitimate, but I don't know how to fix it as `padding_side` can only be set at tokenizer init. \r\n\r\n### Broken `Vilt` tests\r\nDue to [this change](https://github.com/huggingface/transformers/pull/23348/files#diff-b452cc4f093e4b991e92054cf7d504edab44be07c4957969df85a5562238313cR48) , the model used in unit test for Vilt model is now `hf-internal-testing/tiny-random-ViltForQuestionAnswering` instead of `hf-internal-testing/tiny-vilt-random-vqa`.\r\n\r\nIt seems like the new model image processor can't process images, I am not sure about how to fix it\r\n\r\nShould I fix the model directly in the hub ? ",
"Gently ping here @NielsRogge :-)",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,683 | 1,688 | 1,688 |
NONE
| null |
# What does this PR do?
This PR implements support for generative models in the VQA pipeline (more precisely, the GIT model).
Fixes part of #21110
This is my first contribution here, and I am uncertain whether my approach is correct. Please advise me if any modifications are necessary.
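If it helps reviewers, here is a hedged sketch of the intended usage once this is merged (the checkpoint name is illustrative and the exact output format depends on this PR):

```python
from transformers import pipeline

vqa = pipeline("visual-question-answering", model="microsoft/git-base-textvqa")
result = vqa(
    image="http://images.cocodataset.org/val2017/000000039769.jpg",
    question="How many cats are there?",
)
print(result)
```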
## Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. => #21110
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@NielsRogge
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23348/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23348/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23348",
"html_url": "https://github.com/huggingface/transformers/pull/23348",
"diff_url": "https://github.com/huggingface/transformers/pull/23348.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23348.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/23347
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23347/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23347/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23347/events
|
https://github.com/huggingface/transformers/issues/23347
| 1,708,476,235 |
I_kwDOCUB6oc5l1UdL
| 23,347 |
Still cannot import cached_path
|
{
"login": "Fshrink",
"id": 132883741,
"node_id": "U_kgDOB-ulHQ",
"avatar_url": "https://avatars.githubusercontent.com/u/132883741?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Fshrink",
"html_url": "https://github.com/Fshrink",
"followers_url": "https://api.github.com/users/Fshrink/followers",
"following_url": "https://api.github.com/users/Fshrink/following{/other_user}",
"gists_url": "https://api.github.com/users/Fshrink/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Fshrink/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Fshrink/subscriptions",
"organizations_url": "https://api.github.com/users/Fshrink/orgs",
"repos_url": "https://api.github.com/users/Fshrink/repos",
"events_url": "https://api.github.com/users/Fshrink/events{/privacy}",
"received_events_url": "https://api.github.com/users/Fshrink/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"I don't think that that exists. Did you mean `default_cache_path` or maybe `TRANSFORMERS_CACHE`? Those are in file_utils (and actually utils/hub)\r\n\r\n```python\r\nfrom transformers.file_utils import default_cache_path, TRANSFORMERS_CACHE\r\n```\r\n\r\nhttps://github.com/huggingface/transformers/blob/cf11493dce0a1d22446efe0d6c4ade02fd928e50/src/transformers/utils/hub.py#L80-L102",
"Bram,\r\nThanks so much for answering!\r\nThis is what I get from running the model:\r\n\r\n\r\n\r\nWhich refers to this part of the >>>miniconda3/envs/ST2/lib/python3.11/site-packages/simpletransformers/conv_ai/conv_ai_utils.py:\r\n\r\n\r\n\r\nAnd that's where all stops since this function is used across the script and creates objects that are used by the model.py.\r\n\r\nOf course this is a simpletransformers code , but it requests an import from transformers that apparently does not exist...\r\n\r\nThanks!\r\n\r\n",
"Unfortunately that is not a problem of `transformers` but of `simpletransformers`. In their requirements, they specify `\"transformers>=4.6.0\"` so it is possible that they do not test against/support the most recent versions.\r\n\r\nhttps://github.com/ThilinaRajapakse/simpletransformers/blob/365b27feb27e8337a7f4f0244eff8683c5763ef8/setup.py#L29\r\n\r\nI suggest that you try 4.6.0 and if that does not work, you should ask them on their repository because there is not much that can be done on this end.\r\n",
"Will do. Appreciate it.",
"Thanks @BramVanroy! And best of luck @Fshrink with getting your `simpletransformers` script working @Fshrink!"
] | 1,683 | 1,684 | 1,684 |
NONE
| null |
### System Info
- `transformers` version: 4.29.1
- Platform: Linux-5.19.0-41-generic-x86_64-with-glibc2.36
- Python version: 3.11.3
- Huggingface_hub version: 0.14.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.1 (False)
- Tensorflow version (GPU?): 2.12.0 (False)
- Flax version (CPU?/GPU?/TPU?): 0.6.10 (cpu)
- Jax version: 0.4.10
- JaxLib version: 0.4.10
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: ?
### Who can help?
@sanchit-gandhi I guess...
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
from transformers import cached_path
be it as a single instruction in ipython or when using the ConvAIModel from simpletransformers
Tried all versions from before 4.22.0
### Expected behavior
Import the method
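For anyone hitting this later, a hedged sketch of a present-day replacement for the removed helper (the repo and filename are only examples; see also the `default_cache_path` / `TRANSFORMERS_CACHE` constants mentioned in the comments):

```python
from huggingface_hub import hf_hub_download

# Downloads the file (or reuses the local cache) and returns its path on disk,
# which is roughly what `cached_path` used to be used for.
local_path = hf_hub_download(repo_id="gpt2", filename="config.json")
print(local_path)
```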
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23347/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 1,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23347/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23346
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23346/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23346/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23346/events
|
https://github.com/huggingface/transformers/issues/23346
| 1,708,447,515 |
I_kwDOCUB6oc5l1Ncb
| 23,346 |
Asynchronous CUDA Execution Issue with Hugging Face Transformers
|
{
"login": "taehyunzzz",
"id": 33646149,
"node_id": "MDQ6VXNlcjMzNjQ2MTQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/33646149?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/taehyunzzz",
"html_url": "https://github.com/taehyunzzz",
"followers_url": "https://api.github.com/users/taehyunzzz/followers",
"following_url": "https://api.github.com/users/taehyunzzz/following{/other_user}",
"gists_url": "https://api.github.com/users/taehyunzzz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/taehyunzzz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/taehyunzzz/subscriptions",
"organizations_url": "https://api.github.com/users/taehyunzzz/orgs",
"repos_url": "https://api.github.com/users/taehyunzzz/repos",
"events_url": "https://api.github.com/users/taehyunzzz/events{/privacy}",
"received_events_url": "https://api.github.com/users/taehyunzzz/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hey! Thanks for opening an issue. For other people who might wonder, this is because of the inner workings of accelerate, which are activated by the call to `device_map=\"auto\"`, meaning that the model will be dispatched between both CPU and GPU. "
] | 1,683 | 1,685 | 1,683 |
NONE
| null |
### System Info
I am reaching out to you regarding an issue I've been experiencing with the Hugging Face Transformers library in a PyTorch environment. I'm encountering unexpected CUDA synchronizations while executing my code, which seems to be impairing the performance of my model. I am hopeful that you can provide some guidance on this matter.
As a bit of background, I am using the DeepSpeed integration to run inference with the pre-trained Switch Transformers (8 experts) model from the Hugging Face library for a project. This involves processing large amounts of data and hence necessitates efficient GPU utilization for timely results. To maximize the GPU's potential, I have been aiming to leverage CUDA's asynchronous execution feature.
In CUDA, as you know, the CPU queues kernels for execution on the GPU. Ideally, while the CPU is busy queueing up kernels, the GPU should be asynchronously running the kernels that have already been queued. This is the behavior I have previously observed when using the Megatron-LM library.
However, when using the Hugging Face Transformers library, I am finding that there seems to be a CUDA synchronize call after every kernel, which effectively serializes the CPU and GPU operations. This has led to a significant decrease in processing speed and efficiency, as the GPU is left idle while the CPU prepares the next kernel.
I am unsure whether this is due to an issue with my code or if it's an inherent characteristic of the Hugging Face Transformers library. I was wondering if you might have any insights into this issue or any suggestions for further troubleshooting steps I could take. Does the implementation of Switch Transformer Model have implicit synchronization that I might not be aware of?
### Who can help?
@ArthurZucker @younesbelkada @stas00
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
1. Wrap the pretrained Switch model with `deepspeed.initialize`
2. Take the first Switch layer
3. Initialize a random tensor to run forward on
4. Profile using the PyTorch profiler
5. The profiled behavior looks as if the CPU waits for each CUDA kernel to finish

Below is the code I ran :
```
from transformers import AutoTokenizer, SwitchTransformersForConditionalGeneration
from transformers.deepspeed import HfDeepSpeedConfig

# Missing imports added so the snippet runs as-is
import json

import deepspeed
import torch
from torch.profiler import profile, ProfilerActivity

# ds_config_file = "ds_zero_stage_0_config.json"
# ds_config_file = "ds_zero_stage_infinity-cpu.json"
ds_config_file = "ds_zero_stage_2_config.json"
with open(ds_config_file) as fin:
    ds_config = json.load(fin)

tokenizer = AutoTokenizer.from_pretrained("google/switch-base-8")
model = SwitchTransformersForConditionalGeneration.from_pretrained("google/switch-base-8", device_map="auto")

dschf = HfDeepSpeedConfig(ds_config)

model, optimizer, _, _ = deepspeed.initialize(
    config=ds_config,
    model=model,
    model_parameters=[{
        "params": [p for n, p in list(model.named_parameters())],
        "name": "base",
        "weight_decay": 0.01,
    }],
)
model.eval()

# Get a switch layer (the DeepSpeed engine wraps the HF model as `model.module`)
switch_layer = model.module.encoder.block[1].layer[1].mlp

batch_size = 1000
seq_len = 1000
d_model = model.module.config.d_model
d_ff = model.module.config.d_ff

din = torch.rand(
    size=(batch_size, seq_len, d_model),
    dtype=torch.float32,
    device="cuda:0",
)

# hidden_states, (router_logits, expert_index)
with profile(activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA],
             record_shapes=True, profile_memory=True) as prof:
    for i in range(10):
        dout = switch_layer(din)

prof.export_chrome_trace("trace_check.json")
```
### Expected behavior
I would like to know whether there are implicit synchronizations in the Switch model implementation, and why the profiled result shows that the CPU synchronizes with the CUDA runtime.
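One hedged way to confirm where the synchronizations come from, independent of the profiler (needs a reasonably recent PyTorch and is not specific to the Switch Transformers code):

```
import torch

# Warn (or raise, with "error") whenever an op forces a CPU<->GPU synchronization,
# e.g. `.item()`, `.cpu()`, `nonzero()` or implicit device-to-host copies.
torch.cuda.set_sync_debug_mode("warn")

x = torch.rand(1000, 1000, device="cuda")
y = x.sum()
print(y.item())  # `.item()` synchronizes, so a warning should be emitted here
```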
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23346/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23346/timeline
|
not_planned
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23345
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23345/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23345/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23345/events
|
https://github.com/huggingface/transformers/issues/23345
| 1,708,417,698 |
I_kwDOCUB6oc5l1GKi
| 23,345 |
Prompt tuning for Dolly-v2-7b model for Question and Answer not supported?
|
{
"login": "pratikchhapolika",
"id": 11159549,
"node_id": "MDQ6VXNlcjExMTU5NTQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/11159549?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pratikchhapolika",
"html_url": "https://github.com/pratikchhapolika",
"followers_url": "https://api.github.com/users/pratikchhapolika/followers",
"following_url": "https://api.github.com/users/pratikchhapolika/following{/other_user}",
"gists_url": "https://api.github.com/users/pratikchhapolika/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pratikchhapolika/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pratikchhapolika/subscriptions",
"organizations_url": "https://api.github.com/users/pratikchhapolika/orgs",
"repos_url": "https://api.github.com/users/pratikchhapolika/repos",
"events_url": "https://api.github.com/users/pratikchhapolika/events{/privacy}",
"received_events_url": "https://api.github.com/users/pratikchhapolika/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @pratikchhapolika, thanks for raising an issue! \r\n\r\nThis is a question best placed in our [forums](https://discuss.huggingface.co/). We try to reserve the github issues for feature requests and bug reports.",
"> Hi @pratikchhapolika, thanks for raising an issue!\r\n> \r\n> This is a question best placed in our [forums](https://discuss.huggingface.co/). We try to reserve the github issues for feature requests and bug reports.\r\n\r\nHi @amyeroberts , Since I did not get any response in forums so thought to ask here. ",
"@pratikchhapolika I understand, however the github issues are still reserved for feature requests and bugs as it's not sustainable for everyone to ask here if there isn't a response on the forum.\r\n\r\nAnother place to ask for help on questions such as these are on the [discord forum](https://t.co/1n75wi976V?amp=1). Specifically, there's an `ask-for-help` channel which is very active. ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,683 | 1,687 | 1,687 |
NONE
| null |
I am following this page for `Prompt tuning for Dolly-v2-7b model for Question and Answer`: https://huggingface.co/docs/peft/task_guides/clm-prompt-tuning
Instead of doing the training the old PyTorch way, I am doing the training with the `Trainer` API. Also, in this link
https://huggingface.co/stevhliu/bloomz-560m_PROMPT_TUNING_CAUSAL_LM/tree/main , I see 2 files: `adapter_config.json` and `adapter_model.bin`.
But when I save the model using the `Trainer` API I do not see any config file, and the model size is bigger than what is shown in the above link.
Is this the correct way to **train**, **save** and **load** a model for prompt tuning?
**The inference takes a lot of time to generate and gives some gibberish output.**
### Who can help?
@stevhliu @sgugger @lvwerra
Here is my code:
The use-case is:
I have a `Context` which has a lot of paragraphs and then a `Question`; the model has to `answer` the `Question` based on the `Context` in a professional manner. It should also classify the `Question` as **relevant** if the answer is present in the `Context` and **irrelevant** if the answer is not in the `Context`.
The code that I have written is:
```
peft_config = PromptTuningConfig(
task_type=TaskType.CAUSAL_LM,
prompt_tuning_init=PromptTuningInit.TEXT,
num_virtual_tokens=30,
prompt_tuning_init_text="Answer the question as truthfully as possible using and only using the provided context and if the answer is not contained within the context/text, say Irrelevant",
tokenizer_name_or_path="dolly-v2-7b"
)
```
```
tokenizer = AutoTokenizer.from_pretrained("dolly-v2-7b")
model = AutoModelForCausalLM.from_pretrained("dolly-v2-7b",load_in_8bit=True,device_map='auto') #,load_in_8bit=True
```
`model = get_peft_model(model, peft_config)`
```
train_data = [
{
"Context": "How to Link Credit Card to ICICI Bank Account Step 1: Login to ICICIBank.com using your existing internet banking credentials. Step 2: Go to the 'Service Request' section. Step 3: Visit the 'Customer Service' option. Step 4: Select the Link Accounts/ Policy option to link your credit card to the existing user ID.",
"Question": "How to add card?",
"Answer": "Relevant. To add your card you can follow these steps: Step 1: Login to ICICIBank.com using your existing internet banking credentials. Step 2: Go to the 'Service Request' section. Step 3: Visit the 'Customer Service' option. Step 4: Select the Link Accounts/ Policy option to link your credit card to the existing user ID."
},
{
"Context": "The python programming language is used in many different fields including web development, data analysis, artificial intelligence and scientific computing. It is a high-level language that is easy to learn and has a large community of users who can provide support and advice. ",
"Question": "What is Python used for?",
"Answer": "Relevant. Python is used in many different fields including web development, data analysis, artificial intelligence and scientific computing."
}
]
```
# Define a function to map examples to inputs and targets
```
def preprocess_function(examples):
    tokenized_examples = tokenizer(
        examples["Question"][0],
        examples["Context"][0],
        truncation=True,
        max_length=1024,
        padding="max_length"
    )
    tokenized_examples['labels'] = tokenizer(
        examples["Answer"],
        truncation=True,
        max_length=1024,
        padding="max_length",
        return_tensors="pt")['input_ids'][0]
    return tokenized_examples
```
`tokenized_train_data = [preprocess_function(example) for example in train_data]`
```
class DemoDataset(Dataset):
    def __init__(self, data):
        self.data = data

    def __len__(self):
        return len(self.data)

    def __getitem__(self, idx):
        sample = self.data[idx]
        item = {k: torch.tensor(v) for k, v in sample.items()}
        return item
```
`dataset = DemoDataset(tokenized_train_data)`
```
training_args = TrainingArguments(
output_dir="results",
learning_rate=1e-5,
per_device_train_batch_size=1,
num_train_epochs=10,
weight_decay=0.01,
logging_steps=1,
save_steps=1,
logging_dir="logs"
)
```
```
trainer = Trainer(
model=model,
args=training_args,
train_dataset=dataset,
# data_collator=data_collator,
tokenizer=tokenizer
)
trainer.train()
```
**Is this the correct way to save?**
`trainer.save_model("dolly3b_demo_model")`
**Inference**
**Is this the correct way to do inference?**
```
from peft import PeftModel, PeftConfig
tokenizer = AutoTokenizer.from_pretrained("dolly-v2-3b")
model = AutoModelForCausalLM.from_pretrained("dolly3b_demo_model")
model = get_peft_model(model, peft_config)
```
```
# Define example
context = "How to Link Credit Card to ICICI Bank Account Step 1: Login to ICICIBank.com using your existing internet banking credentials. Step 2: Go to the 'Service Request' section. Step 3: Visit the 'Customer Service' option. Step 4: Select the Link Accounts/ Policy option to link your credit card to the existing user ID."
question = "How to add card?"
# Encode inputs with prompt and tokenize
inputs = [f"{context} {question}"]
inputs_encoded = tokenizer(inputs, padding=True, truncation=True, max_length=1024, return_tensors="pt")
```
```
outputs = model.generate(input_ids=inputs_encoded["input_ids"], attention_mask=inputs_encoded["attention_mask"], max_new_tokens=200,)
print(tokenizer.batch_decode(outputs.detach().cpu().numpy(), skip_special_tokens=True))
```
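For reference, the persistence pattern from the PEFT docs looks roughly like the following. This is a sketch only, where `model` is assumed to be the `get_peft_model`-wrapped model from above and `dolly_prompt_tuning_adapter` is a hypothetical output directory:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftConfig, PeftModel

adapter_dir = "dolly_prompt_tuning_adapter"  # hypothetical path

# Saving the PEFT-wrapped model writes only the adapter weights, i.e.
# adapter_config.json and a small adapter_model.bin.
model.save_pretrained(adapter_dir)

# Loading: restore the base model first, then attach the saved adapter.
peft_config = PeftConfig.from_pretrained(adapter_dir)
base_model = AutoModelForCausalLM.from_pretrained(peft_config.base_model_name_or_path)
tokenizer = AutoTokenizer.from_pretrained(peft_config.base_model_name_or_path)
model = PeftModel.from_pretrained(base_model, adapter_dir)
```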
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23345/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23345/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23344
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23344/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23344/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23344/events
|
https://github.com/huggingface/transformers/issues/23344
| 1,708,375,864 |
I_kwDOCUB6oc5l0784
| 23,344 |
Whisper processor no longer saves mel_filters with `.save_pretrained()`
|
{
"login": "xenova",
"id": 26504141,
"node_id": "MDQ6VXNlcjI2NTA0MTQx",
"avatar_url": "https://avatars.githubusercontent.com/u/26504141?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xenova",
"html_url": "https://github.com/xenova",
"followers_url": "https://api.github.com/users/xenova/followers",
"following_url": "https://api.github.com/users/xenova/following{/other_user}",
"gists_url": "https://api.github.com/users/xenova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xenova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xenova/subscriptions",
"organizations_url": "https://api.github.com/users/xenova/orgs",
"repos_url": "https://api.github.com/users/xenova/repos",
"events_url": "https://api.github.com/users/xenova/events{/privacy}",
"received_events_url": "https://api.github.com/users/xenova/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"That change was made before the update you mentioned, in https://github.com/huggingface/transformers/pull/21267\r\n\r\nIt is not necessary to save the mel filters in the JSON file since they are completely defined by the other properties from that JSON file. Plus it makes the JSON file huge and unreadable.\r\n\r\nAs far as I'm aware, the PyTorch implementation of Whisper will load JSON config files with or without the mel filters just fine. If this breaks in Transformers.js, then the issue would seem to be there.\r\n",
"Okay thanks ๐ I agree, it does make it quite unreadable; I just thought I would mention it since it causes a mismatch with some of the official whisper models on the hub."
] | 1,683 | 1,683 | 1,683 |
CONTRIBUTOR
| null |
### System Info
- `transformers` version: 4.29.1
- Platform: Linux-5.15.107+-x86_64-with-glibc2.31
- Python version: 3.10.11
- Huggingface_hub version: 0.14.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.0+cu118 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): 0.6.9 (cpu)
- Jax version: 0.4.8
- JaxLib version: 0.4.7
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
@hollance @sanchit-gandhi
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Running this code:
```python
from transformers import AutoProcessor
processor = AutoProcessor.from_pretrained('openai/whisper-tiny')
processor.save_pretrained('test')
```
will output the following `preprocessor_config.json`:
```json
{
"chunk_length": 30,
"feature_extractor_type": "WhisperFeatureExtractor",
"feature_size": 80,
"hop_length": 160,
"n_fft": 400,
"n_samples": 480000,
"nb_max_frames": 3000,
"padding_side": "right",
"padding_value": 0.0,
"processor_class": "WhisperProcessor",
"return_attention_mask": false,
"sampling_rate": 16000
}
```
which does not include `mel_filters`. This is different from the official models saved on the Hub, which do include it: https://huggingface.co/openai/whisper-tiny/blob/main/preprocessor_config.json
This is due to the following recent update: https://github.com/huggingface/transformers/commit/7f9195090160d508c7afb2e444e34f181872dd10
Linked issue: https://github.com/xenova/transformers.js/issues/107
### Expected behavior
Saving the processor should also save the `mel_filters`.
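For context, the filter bank is rebuilt from `feature_size`, `n_fft` and `sampling_rate` when the feature extractor is instantiated, so it is still available on the loaded object even though it is no longer serialized. A minimal sketch using the `test` directory saved above:

```python
from transformers import WhisperFeatureExtractor

# The mel filter bank is recomputed from the config values at load time,
# so it exists on the object even when preprocessor_config.json omits it.
feature_extractor = WhisperFeatureExtractor.from_pretrained("test")
print(feature_extractor.mel_filters.shape)
```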
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23344/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23344/timeline
|
not_planned
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23343
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23343/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23343/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23343/events
|
https://github.com/huggingface/transformers/pull/23343
| 1,708,289,668 |
PR_kwDOCUB6oc5QbEtp
| 23,343 |
Removing one of the twice defined position_embeddings in LongFormer
|
{
"login": "GregorySenay",
"id": 6250371,
"node_id": "MDQ6VXNlcjYyNTAzNzE=",
"avatar_url": "https://avatars.githubusercontent.com/u/6250371?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/GregorySenay",
"html_url": "https://github.com/GregorySenay",
"followers_url": "https://api.github.com/users/GregorySenay/followers",
"following_url": "https://api.github.com/users/GregorySenay/following{/other_user}",
"gists_url": "https://api.github.com/users/GregorySenay/gists{/gist_id}",
"starred_url": "https://api.github.com/users/GregorySenay/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/GregorySenay/subscriptions",
"organizations_url": "https://api.github.com/users/GregorySenay/orgs",
"repos_url": "https://api.github.com/users/GregorySenay/repos",
"events_url": "https://api.github.com/users/GregorySenay/events{/privacy}",
"received_events_url": "https://api.github.com/users/GregorySenay/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,683 | 1,684 | 1,684 |
CONTRIBUTOR
| null |
# What does this PR do?
The `self.position_embeddings` attribute in `LongformerEmbeddings` is defined twice. This PR removes the first definition (the one without `padding_idx`), at lines 451/452.
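For context, a toy reproduction of the duplicated pattern (a hypothetical class, not the actual Longformer code) shows that only the second assignment ever takes effect, which is why dropping the first one is safe:

```python
import torch.nn as nn

class ToyEmbeddings(nn.Module):
    """Mimics the duplicate: the first Embedding is created and then
    immediately shadowed by the second one that sets padding_idx."""
    def __init__(self, max_positions=4098, hidden_size=768, pad_token_id=1):
        super().__init__()
        self.position_embeddings = nn.Embedding(max_positions, hidden_size)
        self.padding_idx = pad_token_id
        self.position_embeddings = nn.Embedding(
            max_positions, hidden_size, padding_idx=self.padding_idx
        )

print(ToyEmbeddings().position_embeddings.padding_idx)  # prints 1: only the second assignment survives
```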
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline]
## Who can review?
- text models: @ArthurZucker and @younesbelkada
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23343/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23343/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23343",
"html_url": "https://github.com/huggingface/transformers/pull/23343",
"diff_url": "https://github.com/huggingface/transformers/pull/23343.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23343.patch",
"merged_at": 1684143355000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23342
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23342/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23342/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23342/events
|
https://github.com/huggingface/transformers/pull/23342
| 1,708,141,947 |
PR_kwDOCUB6oc5Qaj_v
| 23,342 |
[WIP] Add tf swiftformer
|
{
"login": "joaocmd",
"id": 5345834,
"node_id": "MDQ6VXNlcjUzNDU4MzQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/5345834?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joaocmd",
"html_url": "https://github.com/joaocmd",
"followers_url": "https://api.github.com/users/joaocmd/followers",
"following_url": "https://api.github.com/users/joaocmd/following{/other_user}",
"gists_url": "https://api.github.com/users/joaocmd/gists{/gist_id}",
"starred_url": "https://api.github.com/users/joaocmd/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joaocmd/subscriptions",
"organizations_url": "https://api.github.com/users/joaocmd/orgs",
"repos_url": "https://api.github.com/users/joaocmd/repos",
"events_url": "https://api.github.com/users/joaocmd/events{/privacy}",
"received_events_url": "https://api.github.com/users/joaocmd/received_events",
"type": "User",
"site_admin": false
}
|
[] |
open
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23342). All of your documentation changes will be reflected on that endpoint.",
"Hi @joaocmd, \r\nRapid work on opening the TF port! Let me or @Rocketknight1 know when the PR is ready for review or you experience any issues when porting. ",
"Hi @Rocketknight1, could I get some pointers as to why I get errors like in most of the tests:\r\n\r\n```\r\nE ValueError: Exception encountered when calling layer 'tf_swift_former_model_18' (type TFSwiftFormerModel).\r\nE \r\nE The following keyword arguments are not supported by this model: ['input_ids'].\r\nE \r\nE Call arguments received by layer 'tf_swift_former_model_18' (type TFSwiftFormerModel):\r\nE โข pixel_values={'pixel_values': 'tf.Tensor(shape=(13, 224, 224, 3), dtype=float32)'}\r\nE โข output_hidden_states=None\r\nE โข return_dict=None\r\nE โข training=False\r\n\r\nsrc/transformers/modeling_tf_utils.py:500: ValueError\r\n```\r\n\r\nThe PyTorch model has this following docstring but I don't see where the input_ids part is being taken care of.\r\n```py\r\n\"\"\"\r\n Here we also overwrite some of the tests of test_modeling_common.py, as SwiftFormer does not use input_ids, inputs_embeds,\r\n attention_mask and seq_length.\r\n\"\"\"\r\n```\r\n\r\nThanks!",
"It seems like it is entering in the `else` statement at line 581 of `src/transformers/modeling_tf_utils.py`:\r\n\r\n```python\r\nif \"args\" in output:\r\n if output[\"args\"] is not None and is_tf_symbolic_tensor(output[\"args\"]):\r\n tensor_name = output[\"args\"].name.split(\":\")[0]\r\n output[tensor_name] = output[\"args\"]\r\n else:\r\n # `args` in this case is always the first parameter, then `input_ids`\r\n output[\"input_ids\"] = output[\"args\"]\r\n\r\n del output[\"args\"]\r\n```\r\n\r\nThus it is injecting the `input_ids` argument into the dictionary.\r\n\r\n@amyeroberts @Rocketknight1 How should I get around this? It must be some misconfiguration in my tests or models.\r\n\r\n",
"@joaocmd Just looking at the error and the CI runs, I think the issue might be a missing `@unpack_inputs` decorator on the call method for the MainLayer class",
"> @joaocmd Just looking at the error and the CI runs, I think the issue might be a missing `@unpack_inputs` decorator on the call method for the MainLayer class\r\n\r\nThank you @amyeroberts! It seems like that wasn't causing any issue (yet), but thanks to your comment I found out that I had a duplicate `@unpack_inputs` in one of the models.",
"Hi @amyeroberts and @Rocketknight1, can I get some help with the tests that are still failing? I'm getting `ValueError: cannot reshape array of size 10368 into shape (3,3,3,24)` for these two tests:\r\n* `tests/models/swiftformer/test_modeling_tf_swiftformer.py::TFSwiftFormerModelTest::test_compile_tf_model`\r\n* `tests/models/swiftformer/test_modeling_tf_swiftformer.py::TFSwiftFormerModelTest::test_save_load`\r\n\r\nBut I don't understand exactly what is being reshaped into the wrong shape. Could I get some insight as to what these tests are doing and why it might be failing? Thanks!",
"Hi @joaocmd, there's been some large updates to our TF models regarding how they're built - @Rocketknight1 can give you more details :) \r\n\r\nAre these errors happening if you rebase on `main`? ",
"> Hi @joaocmd, there's been some large updates to our TF models regarding how they're built - @Rocketknight1 can give you more details :)\r\n> \r\n> Are these errors happening if you rebase on `main`?\r\n\r\nHi @amyeroberts, just rebased the branch. I think it's failing on the same tests but the error on these two tests changed to:\r\n```\r\nNotImplementedError: Could not infer input image shape from config, please override input_signature to specify input shapes.\r\n```\r\n\r\nLooking at the stack trace it seems like the image size should have been specified:\r\n```python\r\nif hasattr(vision_config, \"image_size\"):\r\n pixel_values_shape[2] = pixel_values_shape[3] = vision_config.image_size\r\nelif hasattr(vision_config, \"input_size\"):\r\n pixel_values_shape[2] = pixel_values_shape[3] = vision_config.input_size\r\nelse:\r\n raise NotImplementedError( # <------ this error here\r\n \"Could not infer input image shape from config, please override input_signature to specify input shapes.\"\r\n )\r\n```\r\n\r\nShouldn't this also affect the original model?\r\n",
"@joaocmd Regarding the error, no, it shouldn't affect the original model. `image_size` is a parameter we add in the configs, even if it's not always used by the model as it's often important for parameterizing other things or understanding. We allow [this here](https://github.com/huggingface/transformers/blob/468aed39afffafe417819a309a4e6d45d2a9e8f4/utils/check_config_attributes.py#L187). It should have been added, and we can add in this PR, but the PT model can do without. \r\n\r\nYou'll notice that the error is being raise in `modeling_tf_utils.py`. This is because when constructing a TF model, we have to pass in dummy inputs to build it. In PyTorch this isn't necessary, because we explicitly set the input and output dimensions when creating each layer, so the weight matrices can be created immediately. `image_size` is needed to know the shape of the inputs to pass in. \r\n\r\nAs a side note, did you force push after rebasing? From the PR history, it looks like you might not have. As rebasing is a form of \"rewriting history\" it's necessary to force push.\r\n\r\n\r\n\r\n\r\n\r\n\r\n",
"Thanks @amyeroberts, understood. As for the rebase, I had not done one in quite some time and it seems like I did mess it up. I think that is now fixed.\r\n\r\nSince I started this PR I have had a fundamental question about huggingface's approach to tensorflow models. The default in TensorFlow is NHWC while in PyTorch it is NCHW, how should I approach this difference in my PR? Based on `modeling_tf_vit.py` I suppose the correct approach is to assume that images are given in PyTorch format and transpose them in the first block, is that so? How does that affect the following blocks?\r\nAlso, if we were implementing a model for semantic segmentation, which would return an image with the same size as the original one, would that be returned in the PyTorch format or the default TensorFlow format?\r\n\r\nThank you!",
"@joaocmd The pattern we use for the TF vision models is to transpose the NCHW format in the first MainLayer class e.g. [here](https://github.com/huggingface/transformers/blob/868363abb9e72a638b4710d1f5ef1199839b3eec/src/transformers/models/resnet/modeling_tf_resnet.py#L337C18-L337C18) and then transpose back, if pixel values are returned e.g. [here](https://github.com/huggingface/transformers/blob/868363abb9e72a638b4710d1f5ef1199839b3eec/src/transformers/models/resnet/modeling_tf_resnet.py#L348). For some of the older models e.g. ViT this pattern may not have been applied, as these were the first models to be added. \r\n\r\nThis pattern means the model is written in the TF compatible NHWC format throughout, but all of our vision models accept and return images in NCHW. ",
"Thank you @amyeroberts, that makes sense. I've already updated it to match the pattern.\r\n\r\nI'm still having some trouble with the `test_compile_tf_model`. Initially it was failing because it was passing a shape `(None, 56, 56, 48)` to a `reshape` (https://github.com/huggingface/transformers/pull/23342/commits/204e216e6047d83775cfb5f0d928b378b73d2e84#diff-7f093399e807b53ca4b63460f610dcc550c2937cb18cd513d71dc49ce6e1b699R385).\r\nI changed the line to use `[-1, width * height, channels]` as shape, which seems like it fixed that case. However, now it is failing because a shape `(None, None, None, 48)` is being passed to that reshape call. Is this expected of this test? According to the stack trace it seems like it's being triggered by a `tf.keras.Model.save()` (https://github.com/joaocmd/transformers/blob/add_tf_swiftformer/tests/test_modeling_tf_common.py#L711).\r\n\r\nI've also noticed that there was an overhaul to the serving and dummy_inputs interface (https://github.com/huggingface/transformers/commit/814de8fac7456bd2ce50d1847505da829761bfdc). But maybe @Rocketknight1 can better explain the consequences of this change to mine (and other) PRs.",
"@joaocmd Yes, there was a big refactor of the `serving_output` logic. For most models, there's no need to have `serving_output`, `dummy_inputs` or `serving` implemented. You should be able to remove these and have the `test_prepare_serving_output` test pass. \r\n\r\nLooking at the CI run, I don't see `test_compile_tf_model` failing. Were you able to resolve? Or perhaps are you refering to `test_save_load`?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Hi @amyeroberts! Sorry for the late response as I've been quite busy... It was failing more tests on my local machine than on the CI run, but after merging the main branch locally they now seem to match.\r\nI am currently struggling with `test_save_load`:\r\n```\r\nValueError: cannot reshape array of size 10368 into shape (3,3,3,24)\r\n```\r\n\r\nI can't find the reason for this error. I set up a breakpoint and found that the `symbolic_weight_name` at that point is `kernel:0`, so I assume it belongs to some convolutional layer, but I didn't get any further than that. Do you have any suggestions? Thank you!\r\n\r\nEdit:\r\n\r\nI believe the weight belongs to the patch embeddings layer, which I initialized with a `tf.keras.Sequential` call:\r\n```python\r\nself.patch_embedding = tf.keras.Sequential(\r\n [\r\n tf.keras.layers.ZeroPadding2D(padding=(1, 1)),\r\n tf.keras.layers.Conv2D(out_chs // 2, kernel_size=3, strides=2),\r\n tf.keras.layers.BatchNormalization(\r\n epsilon=config.batch_norm_eps, momentum=0.9\r\n ), # FIXME: is this the equivalent momentum?\r\n tf.keras.layers.Activation(\"relu\"),\r\n tf.keras.layers.ZeroPadding2D(padding=(1, 1)),\r\n tf.keras.layers.Conv2D(out_chs, kernel_size=3, strides=2),\r\n tf.keras.layers.BatchNormalization(\r\n epsilon=config.batch_norm_eps, momentum=0.9\r\n ), # FIXME: is this the equivalent momentum?\r\n tf.keras.layers.Activation(\"relu\"),\r\n ],\r\n name=\"patch_embeddings\",\r\n)\r\n```\r\n\r\nI think the problem is that both `Conv2D` are being given the same name, what is the correct approach for this? Should I rewrite the pytorch version to not use `nn.Sequential`?",
"@joaocmd I would suggest rewriting the torch version to not use sequential, but only for the purposes of debugging i.e. we wouldn't commit these changes to main. This way you'll be able to go line by line comparing the TF and PT outputs and seeing where any shape differences are coming from. ",
"Hi @amyeroberts I might be misunderstanding something but I think `test_save_load` does not test any PyTorch to TensorFlow equivalence. I think the problem is that when the two convolutional layers inside the `Sequential` module are saved they are stored under the same name, so a shape mismatch happens. Do I understand this correctly?",
"@joaocmd Ah, apologies, I misread your comment. Yes, I believe you're right about the the naming issue. What I suggest is follow the pattern in other ports where `nn.Sequential` has been used. For example in deit, for the [sequential block in PT](https://github.com/huggingface/transformers/blob/3f9cb335047315edfd4b6ad666ef148e98cc4850/src/transformers/models/deit/modeling_deit.py#L587), a [new layer is implemented for TF](https://github.com/huggingface/transformers/blob/3f9cb335047315edfd4b6ad666ef148e98cc4850/src/transformers/models/deit/modeling_tf_deit.py#L698).",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"@joaocmd From next week I'll be off for a few weeks. If you have any implementation questions, please ask @Rocketknight1 :) ",
"Hi @Rocketknight1, could you give me some pointers on the three remaining tests? I haven't looked closely into `test_modeling_tf_swiftformer.py::TFSwiftFormerModelIntegrationTest::test_inference_image_classification_head` yet because I think it makes sense to leave that one for last, but correct me if I'm wrong.\r\n\r\nHowever, I am trying to understand what is wrong with `TFSwiftFormerModelTest::test_save_load - AssertionError: 0.42552373 not less than or equal to 1e-05` but I have come to no conclusion yet.\r\n\r\nThere is also this current error that might be due to some misnamed layers, but I am not sure: `tests/models/swiftformer/test_modeling_tf_swiftformer.py::TFSwiftFormerModelTest::test_pt_tf_model_equivalence - AttributeError: patch_embed.patch_embeddings.0.weight not found in PyTorch model`.\r\n\r\nThank you!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"@joaocmd Are you still working on this? If so, @Rocketknight1 could you help?",
"Hi @amyeroberts , I haven't made any changes since my last comment as I was stuck and had some other responsibilities. I would like to finish the issue especially because I believe it's very close to finishing.",
"Hi, I'm sorry, I'm not sure how I missed your last comment - this is entirely my fault! Let me investigate the errors you were getting and I'll see if we can get this PR over the line.",
"Hi @joaocmd, I just took a look now. The cause of the errors is that weight names are not being matched up correctly between the saved checkpoint and the model. The reason for that is that, annoyingly, you need to pass `name` args to all the layers you create.\r\n\r\nFor example, in this block:\r\n\r\n```python\r\nfor i in range(len(layer_depths)):\r\n stage = TFSwiftFormerStage(config, index=i)\r\n self.network.append(stage)\r\n if i >= len(layer_depths) - 1:\r\n break\r\n if downsamples[i] or embed_dims[i] != embed_dims[i + 1]:\r\n # downsampling between two stages\r\n self.network.append(TFSwiftFormerEmbeddings(config, index=i))\r\n```\r\nBoth `TFSwiftFormerStage` and `TFSwiftFormerEmbeddings` will need a `name` argument here. When setting the `name` args, we generally try to choose them so they match up with the PyTorch state dict, so that weights can be cross-loaded between frameworks. This means that in this case, the names would need to be `network_._0`, `network_._1` and so on. The index should be the position of the layer in the `self.network` list, so that `self.network[i].name == f\"network_._{i}\"`.\r\n\r\nHopefully after all the layers get name arguments correctly, weight loading and cross-loading should just start working! There'll probably be other issues after that, but we should be able to work through them, so feel free to ping me here if you can't figure them out - the core of the port looks quite good!",
"Hi @Rocketknight1, thanks a lot for your input! I printed out the transformed names of the pytorch and tensorflow weights and have a few questions.\r\n\r\nFirst, I didn't find anything regarding the `{name}_._{index}` annotation. When looking at the `state_dict` I find names like `encoder.network.0.blocks.0`, but where did the underscores go?\r\n\r\nAs for the batch norm layers like in the `TFSwiftFormerPatchEmbeddingsSequential`:\r\n```python\r\n self.zero_padding = tf.keras.layers.ZeroPadding2D(padding=(1, 1))\r\n self.conv1 = tf.keras.layers.Conv2D(out_chs // 2, kernel_size=3, strides=2, name=\"0\")\r\n self.batch_norm1 = tf.keras.layers.BatchNormalization(\r\n epsilon=config.batch_norm_eps, momentum=0.9, name=\"1\"\r\n ) # FIXME: is this the equivalent momentum?\r\n self.conv2 = tf.keras.layers.Conv2D(out_chs, kernel_size=3, strides=2, name=\"3\")\r\n self.batch_norm2 = tf.keras.layers.BatchNormalization(\r\n epsilon=config.batch_norm_eps, momentum=0.9, name=\"4\"\r\n ) # FIXME: is the correct momentum\r\n ```\r\n \r\n Looking at the pytorch state dict seems like we get:\r\n ```json\r\n \"patch_embed.patch_embedding.0.weight\": \"patch_embed.patch_embedding.0.weight\",\r\n \"patch_embed.patch_embedding.0.bias\": \"patch_embed.patch_embedding.0.bias\",\r\n \"patch_embed.patch_embedding.1.weight\": \"patch_embed.patch_embedding.1.weight\",\r\n \"patch_embed.patch_embedding.1.bias\": \"patch_embed.patch_embedding.1.bias\",\r\n \"patch_embed.patch_embedding.1.moving_mean\": \"patch_embed.patch_embedding.1.running_mean\",\r\n \"patch_embed.patch_embedding.1.moving_variance\": \"patch_embed.patch_embedding.1.running_var\",\r\n ```\r\n \r\nWhereas in the tensorflow version I only found:\r\n```json\r\n \"patch_embed.patch_embedding.0.weight\",\r\n \"patch_embed.patch_embedding.0.bias\",\r\n \"patch_embed.patch_embedding.1.weight\",\r\n \"patch_embed.patch_embedding.1.bias\",\r\n```\r\n\r\nAre those being associated somewhere else? Especially referring to the moving means and variances.\r\n\r\nThank you!",
"I'm now getting `tests/models/swiftformer/test_modeling_tf_swiftformer.py::TFSwiftFormerModelTest::test_pt_tf_model_equivalence - AssertionError: Tuples differ: (13, 7, 7, 220) != (13, 220, 7, 7)`.\r\n\r\n@amyeroberts you mentioned this last time I asked you regarding model inputs/outputs:\r\n\r\n> This pattern means the model is written in the TF compatible NHWC format throughout, but all of our vision models accept and return images in NCHW.\r\n\r\nMy question is where should I do the transposition? Should I do a for loop after running the encoder and transpose each one of the hidden states? Thank you!",
"@joaocmd Transposing the input (NCHW) to the TF compatible mode (NHWC) should be done in the `ModelNameMainLayer` class e.g. like here for [resnet](https://github.com/huggingface/transformers/blob/e9dbd3926317a4effb1d033d8454ff18280d0b7d/src/transformers/models/resnet/modeling_tf_resnet.py#L337) and then permuted back to NCHW [format before returning](https://github.com/huggingface/transformers/blob/e9dbd3926317a4effb1d033d8454ff18280d0b7d/src/transformers/models/resnet/modeling_tf_resnet.py#L348)"
] | 1,683 | 1,707 | null |
NONE
| null |
# What does this PR do?
Adds the TensorFlow version of the "SwiftFormer".
Fixes #22771
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. https://github.com/huggingface/transformers/issues/22771
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@amyeroberts @D-Roberts
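For reviewers skimming the thread: the NCHW/NHWC handling discussed in the comments follows the usual pattern for TF ports. A rough, self-contained sketch (the Conv2D layer is just a stand-in, not the actual SwiftFormer code):

```python
import tensorflow as tf

# Dummy NCHW input standing in for pixel_values (batch, channels, height, width).
pixel_values = tf.random.uniform((1, 3, 224, 224))

# TF layers expect NHWC, so transpose on the way in...
nhwc = tf.transpose(pixel_values, perm=(0, 2, 3, 1))          # (1, 224, 224, 3)
features = tf.keras.layers.Conv2D(48, kernel_size=3, strides=2)(nhwc)

# ...and back to NCHW before returning, so PT and TF outputs line up.
outputs = tf.transpose(features, perm=(0, 3, 1, 2))
print(outputs.shape)  # (1, 48, 111, 111)
```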
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23342/reactions",
"total_count": 2,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23342/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23342",
"html_url": "https://github.com/huggingface/transformers/pull/23342",
"diff_url": "https://github.com/huggingface/transformers/pull/23342.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23342.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/23341
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23341/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23341/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23341/events
|
https://github.com/huggingface/transformers/issues/23341
| 1,707,960,019 |
I_kwDOCUB6oc5lzWbT
| 23,341 |
getting AssertionError when using Trainer with `fsdp` and `torch_compile`
|
{
"login": "ouhenio",
"id": 13739349,
"node_id": "MDQ6VXNlcjEzNzM5MzQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/13739349?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ouhenio",
"html_url": "https://github.com/ouhenio",
"followers_url": "https://api.github.com/users/ouhenio/followers",
"following_url": "https://api.github.com/users/ouhenio/following{/other_user}",
"gists_url": "https://api.github.com/users/ouhenio/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ouhenio/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ouhenio/subscriptions",
"organizations_url": "https://api.github.com/users/ouhenio/orgs",
"repos_url": "https://api.github.com/users/ouhenio/repos",
"events_url": "https://api.github.com/users/ouhenio/events{/privacy}",
"received_events_url": "https://api.github.com/users/ouhenio/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"meet same issue....",
"cc @pacman100 ",
"The FSDP wrapper inside `trainer.py` needs to be initialized with `use_orig_params=True` for FSDP + compile to work well together. As of now, that is not the case and there is no flag in the Trainer to make it do so. I can probably find some time later in the week to make a PR, but in any case, that's the issue.",
"Thanks @ani300. I will attempt doing a PR. :blush: ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,683 | 1,686 | 1,686 |
NONE
| null |
### System Info
While trying to train a `GPT2` model using the `Trainer` with `torch_compile` and `fsdp` flags I get the following error:
```bash
AssertionError: Dynamo only supports FSDP with use_orig_params=True
```
I'm using `python==3.10.9` and initially used `transformers==4.27.X`; after stumbling upon #22279 I updated to `transformers==4.28.1`, but the problem persisted.
### Who can help?
I'm guessing this is @sgugger territory and maybe @ani300 could help too.
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
My training file looks like:
```python
def train(cfg: DictConfig):
    # Setup data and dataloader
    data_module = prepare_data_module(**cfg.data)
    train_dataloader = data_module.train_dataloader()
    val_dataloader = data_module.val_dataloader()
    # Extract tokenizer from datamodule
    tokenizer = data_module.tokenizer
    # Setup model and optimizer
    model = GPT(**cfg.model)
    # Setup data collator
    data_collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)
    # Setup and run training
    train_args = TrainingArguments(**cfg.train_args)
    trainer = Trainer(
        model=model,
        tokenizer=tokenizer,
        args=train_args,
        data_collator=data_collator,
        train_dataset=train_dataloader,
        eval_dataset=val_dataloader,
    )
    trainer.train()
```
Here's my full `TrainingArguments` configuration in `.yaml` format:
```yaml
per_device_train_batch_size: 8
per_device_eval_batch_size: 8
evaluation_strategy: "steps"
eval_steps: 2000
logging_steps: 5000
gradient_accumulation_steps: 8
num_train_epochs: 300
weight_decay: 0.1
warmup_steps: 1_000
lr_scheduler_type: "cosine"
learning_rate: 5e-4
save_steps: 25000
bf16: True
torch_compile: True
tf32: True
fsdp: "full_shard auto_wrap"
fsdp_transformer_layer_cls_to_wrap: 'GPT2Block'
```
And I'm running the training using `torchrun --nproc_per_node=8 train.py` with 8 NVIDIA A40.
### Expected behavior
It should run the training process without problems.
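As a side note, the `use_orig_params` requirement from the assertion can be reproduced outside the Trainer. A minimal single-GPU sketch (illustrative only; a real multi-GPU run would go through `torchrun`):

```python
import os
import torch
import torch.distributed as dist
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

# Single-process setup purely for illustration.
os.environ.setdefault("MASTER_ADDR", "localhost")
os.environ.setdefault("MASTER_PORT", "29500")
dist.init_process_group("nccl", rank=0, world_size=1)

model = torch.nn.Linear(512, 512).cuda()

# Dynamo only traces FSDP modules built with use_orig_params=True,
# which is the flag the Trainer's FSDP wrapper would need to pass.
fsdp_model = FSDP(model, use_orig_params=True)
compiled_model = torch.compile(fsdp_model)
out = compiled_model(torch.randn(8, 512, device="cuda"))
```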
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23341/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23341/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23340
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23340/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23340/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23340/events
|
https://github.com/huggingface/transformers/issues/23340
| 1,707,907,734 |
I_kwDOCUB6oc5lzJqW
| 23,340 |
Failed to import transformers.pipelines because of the following error (look up to see its traceback): cannot import name 'PartialState' from 'accelerate'
|
{
"login": "Abhranta",
"id": 67365559,
"node_id": "MDQ6VXNlcjY3MzY1NTU5",
"avatar_url": "https://avatars.githubusercontent.com/u/67365559?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Abhranta",
"html_url": "https://github.com/Abhranta",
"followers_url": "https://api.github.com/users/Abhranta/followers",
"following_url": "https://api.github.com/users/Abhranta/following{/other_user}",
"gists_url": "https://api.github.com/users/Abhranta/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Abhranta/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Abhranta/subscriptions",
"organizations_url": "https://api.github.com/users/Abhranta/orgs",
"repos_url": "https://api.github.com/users/Abhranta/repos",
"events_url": "https://api.github.com/users/Abhranta/events{/privacy}",
"received_events_url": "https://api.github.com/users/Abhranta/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @Abhranta, \r\n\r\nSo that we can best try and help you, could you provide information about the running environment: run `transformers-cli env` in the terminal and copy-paste the output? ",
"Hello I am also having this issue hopefully we are having the same root issue. I am new to python and ML. Here is the output from my `transformers-cli env`:\r\n\r\n```\r\nPS C:\\projects\\poc-chatbot> transformers-cli env\r\nTraceback (most recent call last):\r\n File \"C:\\Users\\Mitch\\AppData\\Local\\Programs\\Python\\Python310\\lib\\runpy.py\", line 196, in _run_module_as_main\r\n return _run_code(code, main_globals, None,\r\n File \"C:\\Users\\Mitch\\AppData\\Local\\Programs\\Python\\Python310\\lib\\runpy.py\", line 86, in _run_code\r\n exec(code, run_globals)\r\n File \"C:\\Users\\Mitch\\AppData\\Local\\Programs\\Python\\Python310\\Scripts\\transformers-cli.exe\\__main__.py\", line 4, in <module>\r\n File \"C:\\Users\\Mitch\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\transformers\\commands\\transformers_cli.py\", line 25, in <module>\r\n from .run import RunCommand\r\n File \"C:\\Users\\Mitch\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\transformers\\commands\\run.py\", line 17, in <module>\r\n from ..pipelines import Pipeline, PipelineDataFormat, get_supported_tasks, pipeline\r\n File \"C:\\Users\\Mitch\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\transformers\\pipelines\\__init__.py\", line 44, in <module>\r\n from .audio_classification import AudioClassificationPipeline\r\n File \"C:\\Users\\Mitch\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\transformers\\pipelines\\audio_classification.py\", line 21, in <module>\r\n from .base import PIPELINE_INIT_ARGS, Pipeline\r\n File \"C:\\Users\\Mitch\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\transformers\\pipelines\\base.py\", line 36, in <module>\r\n from ..modelcard import ModelCard\r\n File \"C:\\Users\\Mitch\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\transformers\\modelcard.py\", line 48, in <module>\r\n from .training_args import ParallelMode\r\n File \"C:\\Users\\Mitch\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\transformers\\training_args.py\", line 67, in <module>\r\n from accelerate import PartialState\r\nImportError: cannot import name 'PartialState' from 'accelerate' (C:\\Users\\Mitch\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\accelerate\\__init__.py)\r\n```",
"Hi @MitchellMonaghan, @Abhranta,\r\n\r\nCould you try upgrading the installed version of accelerate in your env: `pip install -U accelerate`? ",
"Thanks this resolved this error thanks. It upgraded the accelerate package from.\r\n\r\n```\r\nInstalling collected packages: accelerate\r\n Attempting uninstall: accelerate\r\n Found existing installation: accelerate 0.15.0.dev0\r\n Uninstalling accelerate-0.15.0.dev0:\r\n Successfully uninstalled accelerate-0.15.0.dev0\r\nSuccessfully installed accelerate-0.19.0\r\n``` ",
"This Error suddenly pops up in kaggle. Any IDEA!!!!!!!!\r\nI already tried installing accelerate, transformers and datasets as the first line to execute in each notebooks.\r\n\r\nImportError Traceback (most recent call last)\r\nFile /opt/conda/lib/python3.10/site-packages/transformers/utils/import_utils.py:1172, in _LazyModule._get_module(self, module_name)\r\n 1171 try:\r\n-> 1172 return importlib.import_module(\".\" + module_name, self.__name__)\r\n 1173 except Exception as e:\r\n\r\nFile /opt/conda/lib/python3.10/importlib/__init__.py:126, in import_module(name, package)\r\n 125 level += 1\r\n--> 126 return _bootstrap._gcd_import(name[level:], package, level)\r\n\r\nFile <frozen importlib._bootstrap>:1050, in _gcd_import(name, package, level)\r\n\r\nFile <frozen importlib._bootstrap>:1027, in _find_and_load(name, import_)\r\n\r\nFile <frozen importlib._bootstrap>:1006, in _find_and_load_unlocked(name, import_)\r\n\r\nFile <frozen importlib._bootstrap>:688, in _load_unlocked(spec)\r\n\r\nFile <frozen importlib._bootstrap_external>:883, in exec_module(self, module)\r\n\r\nFile <frozen importlib._bootstrap>:241, in _call_with_frames_removed(f, *args, **kwds)\r\n\r\nFile /opt/conda/lib/python3.10/site-packages/transformers/pipelines/__init__.py:44\r\n 35 from ..utils import (\r\n 36 HUGGINGFACE_CO_RESOLVE_ENDPOINT,\r\n 37 is_kenlm_available,\r\n (...)\r\n 42 logging,\r\n 43 )\r\n---> 44 from .audio_classification import AudioClassificationPipeline\r\n 45 from .automatic_speech_recognition import AutomaticSpeechRecognitionPipeline\r\n\r\nFile /opt/conda/lib/python3.10/site-packages/transformers/pipelines/audio_classification.py:21\r\n 20 from ..utils import add_end_docstrings, is_torch_available, logging\r\n---> 21 from .base import PIPELINE_INIT_ARGS, Pipeline\r\n 24 if is_torch_available():\r\n\r\nFile /opt/conda/lib/python3.10/site-packages/transformers/pipelines/base.py:36\r\n 35 from ..image_processing_utils import BaseImageProcessor\r\n---> 36 from ..modelcard import ModelCard\r\n 37 from ..models.auto.configuration_auto import AutoConfig\r\n\r\nFile /opt/conda/lib/python3.10/site-packages/transformers/modelcard.py:48\r\n 32 from .models.auto.modeling_auto import (\r\n 33 MODEL_FOR_AUDIO_CLASSIFICATION_MAPPING_NAMES,\r\n 34 MODEL_FOR_CAUSAL_LM_MAPPING_NAMES,\r\n (...)\r\n 46 MODEL_FOR_ZERO_SHOT_IMAGE_CLASSIFICATION_MAPPING_NAMES,\r\n 47 )\r\n---> 48 from .training_args import ParallelMode\r\n 49 from .utils import (\r\n 50 MODEL_CARD_NAME,\r\n 51 cached_file,\r\n (...)\r\n 57 logging,\r\n 58 )\r\n\r\nFile /opt/conda/lib/python3.10/site-packages/transformers/training_args.py:67\r\n 66 if is_accelerate_available():\r\n---> 67 from accelerate import PartialState\r\n 68 from accelerate.utils import DistributedType\r\n\r\nImportError: cannot import name 'PartialState' from 'accelerate' (/opt/conda/lib/python3.10/site-packages/accelerate/__init__.py)\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nRuntimeError Traceback (most recent call last)\r\nFile <timed exec>:2\r\n\r\nFile <frozen importlib._bootstrap>:1075, in _handle_fromlist(module, fromlist, import_, recursive)\r\n\r\nFile /opt/conda/lib/python3.10/site-packages/transformers/utils/import_utils.py:1162, in _LazyModule.__getattr__(self, name)\r\n 1160 value = self._get_module(name)\r\n 1161 elif name in self._class_to_module.keys():\r\n-> 1162 module = self._get_module(self._class_to_module[name])\r\n 1163 value = getattr(module, name)\r\n 1164 else:\r\n\r\nFile 
/opt/conda/lib/python3.10/site-packages/transformers/utils/import_utils.py:1174, in _LazyModule._get_module(self, module_name)\r\n 1172 return importlib.import_module(\".\" + module_name, self.__name__)\r\n 1173 except Exception as e:\r\n-> 1174 raise RuntimeError(\r\n 1175 f\"Failed to import {self.__name__}.{module_name} because of the following error (look up to see its\"\r\n 1176 f\" traceback):\\n{e}\"\r\n 1177 ) from e\r\n\r\nRuntimeError: Failed to import transformers.pipelines because of the following error (look up to see its traceback):\r\ncannot import name 'PartialState' from 'accelerate' (/opt/conda/lib/python3.10/site-packages/accelerate/__init__.py)",
"Hi @RayGone, \r\n\r\nCould you share the versions of accelerate, transformers and datasets installed and the steps taken / code being run? Have you tried restarting the running notebook session then running the installs? \r\n",
"A week ago now I ran a code and it works fine, now I come to execute it when I import this trainer and TrainingArguments :\r\n `from transformers import Trainer, TrainingArguments`\r\nI get the following error : \r\n\r\n\r\nany help please\r\nby the way i am using kaggle notebook.",
"> Hi @RayGone,\r\n> \r\n> Could you share the versions of accelerate, transformers and datasets installed and the steps taken / code being run? Have you tried restarting the running notebook session then running the installs?\r\n\r\nThis is the [kaggle notebook](https://www.kaggle.com/code/reganmaharjan/bert-2-albert-transfer-learning-nepsa) that i am running.\r\n\r\nThis is the output of installing transformers. (Previously I didn't have to install transformers and datasets; they were already installed)\r\nRequirement already satisfied: transformers[accelerate] in /opt/conda/lib/python3.10/site-packages (4.29.2)\r\n\r\nP.S. Have made notebook public now.\r\n@amyeroberts ",
"> Hi \r\n transformers version : 4.29.2\r\naccelerate version : 0.12.0\r\n\r\nI don't know actually what is the issue",
"You need a more recent version of Accelerate @AzzedineAftiss: `pip install --upgrade accelerate`.",
"> You need a more recent version of Accelerate @AzzedineAftiss: `pip install --upgrade accelerate`.\n\n@sgugger @amyeroberts \nThanks guys for the help \nThat fixes the issue.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Thank you ๐ค so much for the amazing work, making it so easy to try and learn about the best models in the world. ๐ \r\n\r\n---\r\n\r\nRef: https://huggingface.co/blog/falcon\r\n\r\nGot the same error as this issue while running the Falcon tutorial by HuggingFace on Kaggle. Came across this thread via a Google search (not from an LLM yet ๐) and had to make the following changes to get the Falcon tutorial to work on Kaggle notebooks:\r\n\r\n```bash\r\npip install -q --upgrade accelerate einops xformers\r\n```\r\n\r\n- The `accelerate` needs to be upgraded as mentioned in this thread.\r\n- Additional packages in `einops` and `xformers` needs to be installed as well.\r\n\r\nMy Notebook on Kaggle: https://www.kaggle.com/bkowshik/llms-models-falcon\r\n\r\n_NOTE: Had to rerun a couple of times given memory issues on Kaggle, so one needs to keep_ ๐ค \r\n\r\n\r\n```\r\nWrite a poem about India\r\n\r\nA land of mystic and ancient lore,\r\nwhere sacred rivers flow and mountains soar.\r\nIn India, the sun is in a brilliant glow,\r\ncascading the hues that paint the sky like a magical show.\r\n\r\nFrom Kanyakumari to Kashmir,\r\nthe beauty of India never fails to garner.\r\nIts rich cultural heritage with its myriad hues,\r\nand a kaleidoscope of colors, India is blessed.\r\n\r\nTigers roam in the dense forests,\r\ncascading sound of the Ganges, and its gentle whispers.\r\nThe intricate handloom woven sarees in red,\r\na symphony of colors in India's head.\r\n\r\nThe holy pilgrimage of the sacred mountains,\r\nthe golden glow of Diwali, a festival of lights.\r\nIndia is the land of the brave and true,\r\na melting pot of religions, cultures and hues!\r\n```\r\n\r\n---\r\n\r\n@sgugger @amyeroberts Should we can close this issue then?\r\n\r\n<details><summary>Complete logs with warning messages printed as part of the output for reference.</summary>\r\n<p>\r\n\r\n```\r\nDownloading (โฆ)okenizer_config.json: 100%\r\n220/220 [00:00<00:00, 14.6kB/s]\r\nDownloading (โฆ)/main/tokenizer.json:\r\n2.73M/? [00:00<00:00, 6.12MB/s]\r\nDownloading (โฆ)cial_tokens_map.json: 100%\r\n281/281 [00:00<00:00, 22.8kB/s]\r\n/opt/conda/lib/python3.10/site-packages/tensorflow_io/python/ops/__init__.py:98: UserWarning: unable to load libtensorflow_io_plugins.so: unable to open file: libtensorflow_io_plugins.so, from paths: ['/opt/conda/lib/python3.10/site-packages/tensorflow_io/python/ops/libtensorflow_io_plugins.so']\r\ncaused by: ['/opt/conda/lib/python3.10/site-packages/tensorflow_io/python/ops/libtensorflow_io_plugins.so: undefined symbol: _ZN3tsl6StatusC1EN10tensorflow5error4CodeESt17basic_string_viewIcSt11char_traitsIcEENS_14SourceLocationE']\r\n warnings.warn(f\"unable to load libtensorflow_io_plugins.so: {e}\")\r\n/opt/conda/lib/python3.10/site-packages/tensorflow_io/python/ops/__init__.py:104: UserWarning: file system plugins are not loaded: unable to open file: libtensorflow_io.so, from paths: ['/opt/conda/lib/python3.10/site-packages/tensorflow_io/python/ops/libtensorflow_io.so']\r\ncaused by: ['/opt/conda/lib/python3.10/site-packages/tensorflow_io/python/ops/libtensorflow_io.so: undefined symbol: _ZTVN10tensorflow13GcsFileSystemE']\r\n warnings.warn(f\"file system plugins are not loaded: {e}\")\r\nDownloading (โฆ)lve/main/config.json: 100%\r\n667/667 [00:00<00:00, 33.3kB/s]\r\nDownloading (โฆ)/configuration_RW.py:\r\n2.61k/? [00:00<00:00, 165kB/s]\r\nA new version of the following files was downloaded from https://huggingface.co/tiiuae/falcon-7b-instruct:\r\n- configuration_RW.py\r\n. 
Make sure to double-check they do not contain any added malicious code. To avoid downloading new versions of the code file, you can pin a revision.\r\nDownloading (โฆ)main/modelling_RW.py:\r\n47.5k/? [00:00<00:00, 2.70MB/s]\r\nA new version of the following files was downloaded from https://huggingface.co/tiiuae/falcon-7b-instruct:\r\n- modelling_RW.py\r\n. Make sure to double-check they do not contain any added malicious code. To avoid downloading new versions of the code file, you can pin a revision.\r\nDownloading (โฆ)model.bin.index.json:\r\n16.9k/? [00:00<00:00, 850kB/s]\r\nDownloading shards: 100%\r\n2/2 [01:13<00:00, 34.78s/it]\r\nDownloading (โฆ)l-00001-of-00002.bin: 100%\r\n9.95G/9.95G [00:49<00:00, 278MB/s]\r\nDownloading (โฆ)l-00002-of-00002.bin: 100%\r\n4.48G/4.48G [00:24<00:00, 169MB/s]\r\nLoading checkpoint shards: 100%\r\n2/2 [01:15<00:00, 35.25s/it]\r\nDownloading (โฆ)neration_config.json: 100%\r\n111/111 [00:00<00:00, 5.67kB/s]\r\nThe model 'RWForCausalLM' is not supported for text-generation. Supported models are ['BartForCausalLM', 'BertLMHeadModel', 'BertGenerationDecoder', 'BigBirdForCausalLM', 'BigBirdPegasusForCausalLM', 'BioGptForCausalLM', 'BlenderbotForCausalLM', 'BlenderbotSmallForCausalLM', 'BloomForCausalLM', 'CamembertForCausalLM', 'CodeGenForCausalLM', 'CpmAntForCausalLM', 'CTRLLMHeadModel', 'Data2VecTextForCausalLM', 'ElectraForCausalLM', 'ErnieForCausalLM', 'GitForCausalLM', 'GPT2LMHeadModel', 'GPT2LMHeadModel', 'GPTBigCodeForCausalLM', 'GPTNeoForCausalLM', 'GPTNeoXForCausalLM', 'GPTNeoXJapaneseForCausalLM', 'GPTJForCausalLM', 'LlamaForCausalLM', 'MarianForCausalLM', 'MBartForCausalLM', 'MegaForCausalLM', 'MegatronBertForCausalLM', 'MvpForCausalLM', 'OpenLlamaForCausalLM', 'OpenAIGPTLMHeadModel', 'OPTForCausalLM', 'PegasusForCausalLM', 'PLBartForCausalLM', 'ProphetNetForCausalLM', 'QDQBertLMHeadModel', 'ReformerModelWithLMHead', 'RemBertForCausalLM', 'RobertaForCausalLM', 'RobertaPreLayerNormForCausalLM', 'RoCBertForCausalLM', 'RoFormerForCausalLM', 'RwkvForCausalLM', 'Speech2Text2ForCausalLM', 'TransfoXLLMHeadModel', 'TrOCRForCausalLM', 'XGLMForCausalLM', 'XLMWithLMHeadModel', 'XLMProphetNetForCausalLM', 'XLMRobertaForCausalLM', 'XLMRobertaXLForCausalLM', 'XLNetLMHeadModel', 'XmodForCausalLM'].\r\n```\r\n</p>\r\n</details> \r\n",
"hi @amyeroberts, can you help me with this?\r\ni am trying to import pipline from transformers in Kaggle Notebook. but i get the following error:\r\n\r\n\r\n\r\n",
"Thanks @bkowshik it worked!",
"@sgugger @amyeroberts as annoying as it is, but pipeline in kaggle is not working as seen in [screenshot](https://github.com/huggingface/transformers/issues/23340#issuecomment-1609261696) above. \r\n\r\nIt didn't work even when i did this:\r\n`!pip install transformers tokenizers datasets huggingface_hub --upgrade -q`\r\n`!pip install accelerator --upgrade -q`",
"@RayGone Have posted details of the fix here: https://github.com/huggingface/transformers/issues/23340#issuecomment-1606719159",
"> @RayGone Have posted details of the fix here: https://github.com/huggingface/transformers/issues/23340#issuecomment-1606719159\n\nThanks, will try that.\nDidn't try that because i wasn't using xformers directly. But i guess its used by some other dependecy. ",
"I know this is not the exact place for this issue. but somebody help me or direct me to correct place.\r\n\r\nI'm getting this error:\r\n`RuntimeError: Unrecognized array dtype object. \r\nNested types and image/audio types are not supported yet.`\r\n\r\nThis happens when i call `model.prepare_tf_dataset`.\r\nThe whole code is basically what is given in the text-classification section of NLP course.\r\n@bkowshik @sgugger @amyeroberts ",
"cc @Rocketknight1 ",
"If you're using Kaggle, make sure that the environment variable is not pinned, this fixed it for me:\r\n\r\n\r\n",
"> Hi @MitchellMonaghan, @Abhranta,\r\n> \r\n> Could you try upgrading the installed version of accelerate in your env: `pip install -U accelerate`?\r\n\r\nWorked for me.",
"@Abhranta \r\nCan you please take a look at my case ?\r\n\r\n\r\n```console\r\nโ transformers-cli env\r\n2023-08-07 20:40:36.749893: E tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc:9346] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered\r\n2023-08-07 20:40:36.749932: E tensorflow/compiler/xla/stream_executor/cuda/cuda_fft.cc:609] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered\r\n2023-08-07 20:40:36.749940: E tensorflow/compiler/xla/stream_executor/cuda/cuda_blas.cc:1518] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered\r\n2023-08-07 20:40:36.754844: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.\r\nTo enable the following instructions: SSE3 SSE4.1 SSE4.2 AVX AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.\r\nWARNING:tensorflow:From ~/.local/lib/python3.10/site-packages/tensorflow/python/ops/distributions/distribution.py:259: ReparameterizationType.__init__ (from tensorflow.python.ops.distributions.distribution) is deprecated and will be removed after 2019-01-01.\r\nInstructions for updating:\r\nThe TensorFlow Distributions library has moved to TensorFlow Probability (https://github.com/tensorflow/probability). You should update all references to use `tfp.distributions` instead of `tf.distributions`.\r\nWARNING:tensorflow:From ~/.local/lib/python3.10/site-packages/tensorflow/python/ops/distributions/bernoulli.py:165: RegisterKL.__init__ (from tensorflow.python.ops.distributions.kullback_leibler) is deprecated and will be removed after 2019-01-01.\r\nInstructions for updating:\r\nThe TensorFlow Distributions library has moved to TensorFlow Probability (https://github.com/tensorflow/probability). 
You should update all references to use `tfp.distributions` instead of `tf.distributions`.\r\nTraceback (most recent call last):\r\n File \"~/.local/bin/transformers-cli\", line 33, in <module>\r\n sys.exit(load_entry_point('transformers==4.31.0', 'console_scripts', 'transformers-cli')())\r\n File \"~/.local/bin/transformers-cli\", line 25, in importlib_load_entry_point\r\n return next(matches).load()\r\n File \"/usr/lib/python3.10/importlib/metadata/__init__.py\", line 171, in load\r\n module = import_module(match.group('module'))\r\n File \"/usr/lib/python3.10/importlib/__init__.py\", line 126, in import_module\r\n return _bootstrap._gcd_import(name[level:], package, level)\r\n File \"<frozen importlib._bootstrap>\", line 1050, in _gcd_import\r\n File \"<frozen importlib._bootstrap>\", line 1027, in _find_and_load\r\n File \"<frozen importlib._bootstrap>\", line 1006, in _find_and_load_unlocked\r\n File \"<frozen importlib._bootstrap>\", line 688, in _load_unlocked\r\n File \"<frozen importlib._bootstrap_external>\", line 883, in exec_module\r\n File \"<frozen importlib._bootstrap>\", line 241, in _call_with_frames_removed\r\n File \"~/.local/lib/python3.10/site-packages/transformers/commands/transformers_cli.py\", line 25, in <module>\r\n from .run import RunCommand\r\n File \"~/.local/lib/python3.10/site-packages/transformers/commands/run.py\", line 17, in <module>\r\n from ..pipelines import Pipeline, PipelineDataFormat, get_supported_tasks, pipeline\r\n File \"~/.local/lib/python3.10/site-packages/transformers/pipelines/__init__.py\", line 69, in <module>\r\n from .table_question_answering import TableQuestionAnsweringArgumentHandler, TableQuestionAnsweringPipeline\r\n File \"~/.local/lib/python3.10/site-packages/transformers/pipelines/table_question_answering.py\", line 26, in <module>\r\n import tensorflow_probability as tfp\r\n File \"~/.local/lib/python3.10/site-packages/tensorflow_probability/__init__.py\", line 20, in <module>\r\n from tensorflow_probability import substrates\r\n File \"~/.local/lib/python3.10/site-packages/tensorflow_probability/substrates/__init__.py\", line 17, in <module>\r\n from tensorflow_probability.python.internal import all_util\r\n File \"~/.local/lib/python3.10/site-packages/tensorflow_probability/python/__init__.py\", line 138, in <module>\r\n dir(globals()[pkg_name]) # Forces loading the package from its lazy loader.\r\n File \"~/.local/lib/python3.10/site-packages/tensorflow_probability/python/internal/lazy_loader.py\", line 57, in __dir__\r\n module = self._load()\r\n File \"~/.local/lib/python3.10/site-packages/tensorflow_probability/python/internal/lazy_loader.py\", line 40, in _load\r\n module = importlib.import_module(self.__name__)\r\n File \"/usr/lib/python3.10/importlib/__init__.py\", line 126, in import_module\r\n return _bootstrap._gcd_import(name[level:], package, level)\r\n File \"~/.local/lib/python3.10/site-packages/tensorflow_probability/python/experimental/__init__.py\", line 31, in <module>\r\n from tensorflow_probability.python.experimental import bayesopt\r\n File \"~/.local/lib/python3.10/site-packages/tensorflow_probability/python/experimental/bayesopt/__init__.py\", line 17, in <module>\r\n from tensorflow_probability.python.experimental.bayesopt import acquisition\r\n File \"~/.local/lib/python3.10/site-packages/tensorflow_probability/python/experimental/bayesopt/acquisition/__init__.py\", line 17, in <module>\r\n from tensorflow_probability.python.experimental.bayesopt.acquisition.acquisition_function import 
AcquisitionFunction\r\n File \"~/.local/lib/python3.10/site-packages/tensorflow_probability/python/experimental/bayesopt/acquisition/acquisition_function.py\", line 22, in <module>\r\n from tensorflow_probability.python.internal import prefer_static as ps\r\n File \"~/.local/lib/python3.10/site-packages/tensorflow_probability/python/internal/prefer_static.py\", line 361, in <module>\r\n ones_like = _copy_docstring(tf.ones_like, _ones_like)\r\n File \"~/.local/lib/python3.10/site-packages/tensorflow_probability/python/internal/prefer_static.py\", line 84, in _copy_docstring\r\n raise ValueError(\r\nValueError: Arg specs do not match: original=FullArgSpec(args=['input', 'dtype', 'name', 'layout'], varargs=None, varkw=None, defaults=(None, None, None), kwonlyargs=[], kwonlydefaults=None, annotations={}), new=FullArgSpec(args=['input', 'dtype', 'name'], varargs=None, varkw=None, defaults=(None, None), kwonlyargs=[], kwonlydefaults=None, annotations={}), fn=<function ones_like_v2 at 0x7fba7beb8a60>\r\n```",
"> @Abhranta Can you please take a look at my case ?\r\n> \r\n> ```\r\n> โ transformers-cli env\r\n> 2023-08-07 20:40:36.749893: E tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc:9346] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered\r\n> 2023-08-07 20:40:36.749932: E tensorflow/compiler/xla/stream_executor/cuda/cuda_fft.cc:609] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered\r\n> 2023-08-07 20:40:36.749940: E tensorflow/compiler/xla/stream_executor/cuda/cuda_blas.cc:1518] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered\r\n> 2023-08-07 20:40:36.754844: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.\r\n> To enable the following instructions: SSE3 SSE4.1 SSE4.2 AVX AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.\r\n> WARNING:tensorflow:From ~/.local/lib/python3.10/site-packages/tensorflow/python/ops/distributions/distribution.py:259: ReparameterizationType.__init__ (from tensorflow.python.ops.distributions.distribution) is deprecated and will be removed after 2019-01-01.\r\n> Instructions for updating:\r\n> The TensorFlow Distributions library has moved to TensorFlow Probability (https://github.com/tensorflow/probability). You should update all references to use `tfp.distributions` instead of `tf.distributions`.\r\n> WARNING:tensorflow:From ~/.local/lib/python3.10/site-packages/tensorflow/python/ops/distributions/bernoulli.py:165: RegisterKL.__init__ (from tensorflow.python.ops.distributions.kullback_leibler) is deprecated and will be removed after 2019-01-01.\r\n> Instructions for updating:\r\n> The TensorFlow Distributions library has moved to TensorFlow Probability (https://github.com/tensorflow/probability). 
You should update all references to use `tfp.distributions` instead of `tf.distributions`.\r\n> Traceback (most recent call last):\r\n> File \"~/.local/bin/transformers-cli\", line 33, in <module>\r\n> sys.exit(load_entry_point('transformers==4.31.0', 'console_scripts', 'transformers-cli')())\r\n> File \"~/.local/bin/transformers-cli\", line 25, in importlib_load_entry_point\r\n> return next(matches).load()\r\n> File \"/usr/lib/python3.10/importlib/metadata/__init__.py\", line 171, in load\r\n> module = import_module(match.group('module'))\r\n> File \"/usr/lib/python3.10/importlib/__init__.py\", line 126, in import_module\r\n> return _bootstrap._gcd_import(name[level:], package, level)\r\n> File \"<frozen importlib._bootstrap>\", line 1050, in _gcd_import\r\n> File \"<frozen importlib._bootstrap>\", line 1027, in _find_and_load\r\n> File \"<frozen importlib._bootstrap>\", line 1006, in _find_and_load_unlocked\r\n> File \"<frozen importlib._bootstrap>\", line 688, in _load_unlocked\r\n> File \"<frozen importlib._bootstrap_external>\", line 883, in exec_module\r\n> File \"<frozen importlib._bootstrap>\", line 241, in _call_with_frames_removed\r\n> File \"~/.local/lib/python3.10/site-packages/transformers/commands/transformers_cli.py\", line 25, in <module>\r\n> from .run import RunCommand\r\n> File \"~/.local/lib/python3.10/site-packages/transformers/commands/run.py\", line 17, in <module>\r\n> from ..pipelines import Pipeline, PipelineDataFormat, get_supported_tasks, pipeline\r\n> File \"~/.local/lib/python3.10/site-packages/transformers/pipelines/__init__.py\", line 69, in <module>\r\n> from .table_question_answering import TableQuestionAnsweringArgumentHandler, TableQuestionAnsweringPipeline\r\n> File \"~/.local/lib/python3.10/site-packages/transformers/pipelines/table_question_answering.py\", line 26, in <module>\r\n> import tensorflow_probability as tfp\r\n> File \"~/.local/lib/python3.10/site-packages/tensorflow_probability/__init__.py\", line 20, in <module>\r\n> from tensorflow_probability import substrates\r\n> File \"~/.local/lib/python3.10/site-packages/tensorflow_probability/substrates/__init__.py\", line 17, in <module>\r\n> from tensorflow_probability.python.internal import all_util\r\n> File \"~/.local/lib/python3.10/site-packages/tensorflow_probability/python/__init__.py\", line 138, in <module>\r\n> dir(globals()[pkg_name]) # Forces loading the package from its lazy loader.\r\n> File \"~/.local/lib/python3.10/site-packages/tensorflow_probability/python/internal/lazy_loader.py\", line 57, in __dir__\r\n> module = self._load()\r\n> File \"~/.local/lib/python3.10/site-packages/tensorflow_probability/python/internal/lazy_loader.py\", line 40, in _load\r\n> module = importlib.import_module(self.__name__)\r\n> File \"/usr/lib/python3.10/importlib/__init__.py\", line 126, in import_module\r\n> return _bootstrap._gcd_import(name[level:], package, level)\r\n> File \"~/.local/lib/python3.10/site-packages/tensorflow_probability/python/experimental/__init__.py\", line 31, in <module>\r\n> from tensorflow_probability.python.experimental import bayesopt\r\n> File \"~/.local/lib/python3.10/site-packages/tensorflow_probability/python/experimental/bayesopt/__init__.py\", line 17, in <module>\r\n> from tensorflow_probability.python.experimental.bayesopt import acquisition\r\n> File \"~/.local/lib/python3.10/site-packages/tensorflow_probability/python/experimental/bayesopt/acquisition/__init__.py\", line 17, in <module>\r\n> from 
tensorflow_probability.python.experimental.bayesopt.acquisition.acquisition_function import AcquisitionFunction\r\n> File \"~/.local/lib/python3.10/site-packages/tensorflow_probability/python/experimental/bayesopt/acquisition/acquisition_function.py\", line 22, in <module>\r\n> from tensorflow_probability.python.internal import prefer_static as ps\r\n> File \"~/.local/lib/python3.10/site-packages/tensorflow_probability/python/internal/prefer_static.py\", line 361, in <module>\r\n> ones_like = _copy_docstring(tf.ones_like, _ones_like)\r\n> File \"~/.local/lib/python3.10/site-packages/tensorflow_probability/python/internal/prefer_static.py\", line 84, in _copy_docstring\r\n> raise ValueError(\r\n> ValueError: Arg specs do not match: original=FullArgSpec(args=['input', 'dtype', 'name', 'layout'], varargs=None, varkw=None, defaults=(None, None, None), kwonlyargs=[], kwonlydefaults=None, annotations={}), new=FullArgSpec(args=['input', 'dtype', 'name'], varargs=None, varkw=None, defaults=(None, None), kwonlyargs=[], kwonlydefaults=None, annotations={}), fn=<function ones_like_v2 at 0x7fba7beb8a60>\r\n> ```\r\n\r\nSorry, problem solved by manually build [tensorflow-probability](https://github.com/tensorflow/probability).\r\n\r\n```console\r\nโ ~ transformers-cli env\r\n2023-08-07 21:04:43.751890: E tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc:9346] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered\r\n2023-08-07 21:04:43.751926: E tensorflow/compiler/xla/stream_executor/cuda/cuda_fft.cc:609] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered\r\n2023-08-07 21:04:43.751932: E tensorflow/compiler/xla/stream_executor/cuda/cuda_blas.cc:1518] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered\r\nWARNING:tensorflow:From ~/.local/lib/python3.10/site-packages/tensorflow/python/ops/distributions/distribution.py:259: ReparameterizationType.__init__ (from tensorflow.python.ops.distributions.distribution) is deprecated and will be removed after 2019-01-01.\r\nInstructions for updating:\r\nThe TensorFlow Distributions library has moved to TensorFlow Probability (https://github.com/tensorflow/probability). You should update all references to use `tfp.distributions` instead of `tf.distributions`.\r\nWARNING:tensorflow:From ~/.local/lib/python3.10/site-packages/tensorflow/python/ops/distributions/bernoulli.py:165: RegisterKL.__init__ (from tensorflow.python.ops.distributions.kullback_leibler) is deprecated and will be removed after 2019-01-01.\r\nInstructions for updating:\r\nThe TensorFlow Distributions library has moved to TensorFlow Probability (https://github.com/tensorflow/probability). 
You should update all references to use `tfp.distributions` instead of `tf.distributions`.\r\nWARNING:tensorflow:From ~/.local/lib/python3.10/site-packages/transformers/commands/env.py:100: is_gpu_available (from tensorflow.python.framework.test_util) is deprecated and will be removed in a future version.\r\nInstructions for updating:\r\nUse `tf.config.list_physical_devices('GPU')` instead.\r\n\r\nCopy-and-paste the text below in your GitHub issue and FILL OUT the two last points.\r\n\r\n- `transformers` version: 4.31.0\r\n- Platform: Linux-6.2.0-26-generic-x86_64-with-glibc2.35\r\n- Python version: 3.10.12\r\n- Huggingface_hub version: 0.16.4\r\n- Safetensors version: 0.3.1\r\n- Accelerate version: 0.21.0\r\n- Accelerate config: \tnot found\r\n- PyTorch version (GPU?): 2.1.0a0+gitaaa989c (True)\r\n- Tensorflow version (GPU?): 2.15.0 (True)\r\n- Flax version (CPU?/GPU?/TPU?): 0.7.1 (gpu)\r\n- Jax version: 0.4.14\r\n- JaxLib version: 0.4.14\r\n- Using GPU in script?: <fill in>\r\n- Using distributed or parallel set-up in script?: <fill in>\r\n\r\nโ ~ \r\n```",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Hi, I am getting this issue while importing Trainer\r\n\r\n`RuntimeError: Failed to import transformers.trainer because of the following error (look up to see its traceback):\r\nCan't instantiate abstract class MPS_Accelerator with abstract methods supported_dtypes\r\n`\r\nRunning transformers-cli env gives the following output:\r\n\r\n`Traceback (most recent call last):\r\n File \"/Users/shashankgupta/.conda/envs/deepspeed/bin/transformers-cli\", line 5, in <module>\r\n from transformers.commands.transformers_cli import main\r\n File \"/Users/shashankgupta/.conda/envs/deepspeed/lib/python3.8/site-packages/transformers/commands/transformers_cli.py\", line 25, in <module>\r\n from .run import RunCommand\r\n File \"/Users/shashankgupta/.conda/envs/deepspeed/lib/python3.8/site-packages/transformers/commands/run.py\", line 17, in <module>\r\n from ..pipelines import Pipeline, PipelineDataFormat, get_supported_tasks, pipeline\r\n File \"/Users/shashankgupta/.conda/envs/deepspeed/lib/python3.8/site-packages/transformers/pipelines/__init__.py\", line 46, in <module>\r\n from .audio_classification import AudioClassificationPipeline\r\n File \"/Users/shashankgupta/.conda/envs/deepspeed/lib/python3.8/site-packages/transformers/pipelines/audio_classification.py\", line 21, in <module>\r\n from .base import PIPELINE_INIT_ARGS, Pipeline\r\n File \"/Users/shashankgupta/.conda/envs/deepspeed/lib/python3.8/site-packages/transformers/pipelines/base.py\", line 34, in <module>\r\n from ..modelcard import ModelCard\r\n File \"/Users/shashankgupta/.conda/envs/deepspeed/lib/python3.8/site-packages/transformers/modelcard.py\", line 48, in <module>\r\n from .training_args import ParallelMode\r\n File \"/Users/shashankgupta/.conda/envs/deepspeed/lib/python3.8/site-packages/transformers/training_args.py\", line 69, in <module>\r\n from accelerate.state import AcceleratorState, PartialState\r\n File \"/Users/shashankgupta/.conda/envs/deepspeed/lib/python3.8/site-packages/accelerate/__init__.py\", line 3, in <module>\r\n from .accelerator import Accelerator\r\n File \"/Users/shashankgupta/.conda/envs/deepspeed/lib/python3.8/site-packages/accelerate/accelerator.py\", line 35, in <module>\r\n from .checkpointing import load_accelerator_state, load_custom_state, save_accelerator_state, save_custom_state\r\n File \"/Users/shashankgupta/.conda/envs/deepspeed/lib/python3.8/site-packages/accelerate/checkpointing.py\", line 24, in <module>\r\n from .utils import (\r\n File \"/Users/shashankgupta/.conda/envs/deepspeed/lib/python3.8/site-packages/accelerate/utils/__init__.py\", line 136, in <module>\r\n from .launch import (\r\n File \"/Users/shashankgupta/.conda/envs/deepspeed/lib/python3.8/site-packages/accelerate/utils/launch.py\", line 33, in <module>\r\n from ..utils.other import is_port_in_use, merge_dicts\r\n File \"/Users/shashankgupta/.conda/envs/deepspeed/lib/python3.8/site-packages/accelerate/utils/other.py\", line 32, in <module>\r\n from deepspeed import DeepSpeedEngine\r\n File \"/Users/shashankgupta/.conda/envs/deepspeed/lib/python3.8/site-packages/deepspeed/__init__.py\", line 21, in <module>\r\n from . import ops\r\n File \"/Users/shashankgupta/.conda/envs/deepspeed/lib/python3.8/site-packages/deepspeed/ops/__init__.py\", line 6, in <module>\r\n from . 
import adam\r\n File \"/Users/shashankgupta/.conda/envs/deepspeed/lib/python3.8/site-packages/deepspeed/ops/adam/__init__.py\", line 6, in <module>\r\n from .cpu_adam import DeepSpeedCPUAdam\r\n File \"/Users/shashankgupta/.conda/envs/deepspeed/lib/python3.8/site-packages/deepspeed/ops/adam/cpu_adam.py\", line 8, in <module>\r\n from deepspeed.utils import logger\r\n File \"/Users/shashankgupta/.conda/envs/deepspeed/lib/python3.8/site-packages/deepspeed/utils/__init__.py\", line 10, in <module>\r\n from .groups import *\r\n File \"/Users/shashankgupta/.conda/envs/deepspeed/lib/python3.8/site-packages/deepspeed/utils/groups.py\", line 28, in <module>\r\n from deepspeed import comm as dist\r\n File \"/Users/shashankgupta/.conda/envs/deepspeed/lib/python3.8/site-packages/deepspeed/comm/__init__.py\", line 7, in <module>\r\n from .comm import *\r\n File \"/Users/shashankgupta/.conda/envs/deepspeed/lib/python3.8/site-packages/deepspeed/comm/comm.py\", line 34, in <module>\r\n from deepspeed.utils import timer, get_caller_func\r\n File \"/Users/shashankgupta/.conda/envs/deepspeed/lib/python3.8/site-packages/deepspeed/utils/timer.py\", line 31, in <module>\r\n class CudaEventTimer(object):\r\n File \"/Users/shashankgupta/.conda/envs/deepspeed/lib/python3.8/site-packages/deepspeed/utils/timer.py\", line 33, in CudaEventTimer\r\n def __init__(self, start_event: get_accelerator().Event, end_event: get_accelerator().Event):\r\n File \"/Users/shashankgupta/.conda/envs/deepspeed/lib/python3.8/site-packages/deepspeed/accelerator/real_accelerator.py\", line 155, in get_accelerator\r\n ds_accelerator = MPS_Accelerator()\r\nTypeError: Can't instantiate abstract class MPS_Accelerator with abstract methods supported_dtypes\r\n`\r\n\r\nVersions I am using: \r\n\r\n```\r\naccelerate 0.22.0\r\neinops 0.6.1\r\ntorch 2.0.1\r\ntransformers 4.33.0\r\n\r\n```",
"Hi @shashank140195, could you open a new issue? Please make sure the errors are properly formatted in multi-line markdown code formatting i.e. between a pair of three backticks ` ``` Traceback goes here ``` `",
"Hi, I am getting this error while executing the file. I have tried all the possible solution but not to get the solution as of now. If anyone knows please help me out of this.\r\n\r\nTraceback (most recent call last):\r\n File \"C:\\Users\\kapil\\AppData\\Local\\Temp\\_MEI637282\\transformers\\utils\\import_utils.py\", line 1184, in _get_module\r\n return importlib.import_module(\".\" + module_name, self.__name__)\r\n File \"importlib\\__init__.py\", line 126, in import_module\r\n File \"<frozen importlib._bootstrap>\", line 1050, in _gcd_import\r\n File \"<frozen importlib._bootstrap>\", line 1027, in _find_and_load\r\n File \"<frozen importlib._bootstrap>\", line 1006, in _find_and_load_unlocked\r\n File \"<frozen importlib._bootstrap>\", line 688, in _load_unlocked\r\n File \"<frozen importlib._bootstrap_external>\", line 883, in exec_module\r\n File \"<frozen importlib._bootstrap>\", line 241, in _call_with_frames_removed\r\n File \"C:\\Users\\kapil\\AppData\\Local\\Temp\\_MEI637282\\transformers\\pipelines\\__init__.py\", line 62, in <module>\r\n from .document_question_answering import DocumentQuestionAnsweringPipeline\r\n File \"<frozen importlib._bootstrap>\", line 1027, in _find_and_load\r\n File \"<frozen importlib._bootstrap>\", line 1006, in _find_and_load_unlocked\r\n File \"<frozen importlib._bootstrap>\", line 688, in _load_unlocked\r\n File \"<frozen importlib._bootstrap_external>\", line 883, in exec_module\r\n File \"<frozen importlib._bootstrap>\", line 241, in _call_with_frames_removed\r\n File \"C:\\Users\\kapil\\AppData\\Local\\Temp\\_MEI637282\\transformers\\pipelines\\document_question_answering.py\", line 29, in <module>\r\n from .question_answering import select_starts_ends\r\n File \"<frozen importlib._bootstrap>\", line 1027, in _find_and_load\r\n File \"<frozen importlib._bootstrap>\", line 1006, in _find_and_load_unlocked\r\n File \"<frozen importlib._bootstrap>\", line 688, in _load_unlocked\r\n File \"<frozen importlib._bootstrap_external>\", line 883, in exec_module\r\n File \"<frozen importlib._bootstrap>\", line 241, in _call_with_frames_removed\r\n File \"C:\\Users\\kapil\\AppData\\Local\\Temp\\_MEI637282\\transformers\\pipelines\\question_answering.py\", line 9, in <module>\r\n from ..data import SquadExample, SquadFeatures, squad_convert_examples_to_features\r\n File \"<frozen importlib._bootstrap>\", line 1027, in _find_and_load\r\n File \"<frozen importlib._bootstrap>\", line 1006, in _find_and_load_unlocked\r\n File \"<frozen importlib._bootstrap>\", line 688, in _load_unlocked\r\n File \"<frozen importlib._bootstrap_external>\", line 883, in exec_module\r\n File \"<frozen importlib._bootstrap>\", line 241, in _call_with_frames_removed\r\n File \"C:\\Users\\kapil\\AppData\\Local\\Temp\\_MEI637282\\transformers\\data\\__init__.py\", line 15, in <module>\r\n from .data_collator import (\r\n File \"<frozen importlib._bootstrap>\", line 1027, in _find_and_load\r\n File \"<frozen importlib._bootstrap>\", line 1006, in _find_and_load_unlocked\r\n File \"<frozen importlib._bootstrap>\", line 688, in _load_unlocked\r\n File \"<frozen importlib._bootstrap_external>\", line 883, in exec_module\r\n File \"<frozen importlib._bootstrap>\", line 241, in _call_with_frames_removed\r\n File \"C:\\Users\\kapil\\AppData\\Local\\Temp\\_MEI637282\\transformers\\data\\data_collator.py\", line 24, in <module>\r\n from ..models.bert import BertTokenizer, BertTokenizerFast\r\nImportError: cannot import name 'BertTokenizerFast' from 'transformers.models.bert' 
(C:\\Users\\kapil\\AppData\\Local\\Temp\\_MEI637282\\transformers\\models\\bert\\__init__.py)\r\n\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\n Traceback (most recent call last):\r\n File \"v2n.py\", line 54, in <module>\r\n File \"<frozen importlib._bootstrap>\", line 1027, in _find_and_load\r\n File \"<frozen importlib._bootstrap>\", line 1006, in _find_and_load_unlocked\r\n File \"<frozen importlib._bootstrap>\", line 688, in _load_unlocked\r\n File \"<frozen importlib._bootstrap_external>\", line 883, in exec_module\r\n File \"<frozen importlib._bootstrap>\", line 241, in _call_with_frames_removed\r\n File \"C:\\Users\\kapil\\AppData\\Local\\Temp\\_MEI637282\\whisperx\\__init__.py\", line 1, in <module>\r\n from .transcribe import load_model\r\n File \"<frozen importlib._bootstrap>\", line 1027, in _find_and_load\r\n File \"<frozen importlib._bootstrap>\", line 1006, in _find_and_load_unlocked\r\n File \"<frozen importlib._bootstrap>\", line 688, in _load_unlocked\r\n File \"<frozen importlib._bootstrap_external>\", line 883, in exec_module\r\n File \"<frozen importlib._bootstrap>\", line 241, in _call_with_frames_removed\r\n File \"C:\\Users\\kapil\\AppData\\Local\\Temp\\_MEI637282\\whisperx\\transcribe.py\", line 10, in <module>\r\n from .asr import load_model\r\n File \"<frozen importlib._bootstrap>\", line 1027, in _find_and_load\r\n File \"<frozen importlib._bootstrap>\", line 1006, in _find_and_load_unlocked\r\n File \"<frozen importlib._bootstrap>\", line 688, in _load_unlocked\r\n File \"<frozen importlib._bootstrap_external>\", line 883, in exec_module\r\n File \"<frozen importlib._bootstrap>\", line 241, in _call_with_frames_removed\r\n File \"C:\\Users\\kapil\\AppData\\Local\\Temp\\_MEI637282\\whisperx\\asr.py\", line 9, in <module>\r\n from transformers import Pipeline\r\n File \"<frozen importlib._bootstrap>\", line 1075, in _handle_fromlist\r\n File \"C:\\Users\\kapil\\AppData\\Local\\Temp\\_MEI637282\\transformers\\utils\\import_utils.py\", line 1174, in __getattr__\r\n module = self._get_module(self._class_to_module[name])\r\n File \"C:\\Users\\kapil\\AppData\\Local\\Temp\\_MEI637282\\transformers\\utils\\import_utils.py\", line 1186, in _get_module\r\n raise RuntimeError(\r\nRuntimeError: Failed to import transformers.pipelines because of the following error (look up to see its traceback):\r\ncannot import name 'BertTokenizerFast' from 'transformers.models.bert' (C:\\Users\\kapil\\AppData\\Local\\Temp\\_MEI637282\\transformers\\models\\bert\\__init__.py)",
"hey @kknagi could you share which transformers version you are using?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,683 | 1,708 | 1,698 |
NONE
| null |
### System Info
I am trying to load the Segment Anything Model (SAM) through the transformers pipeline, but this gives the following error:
"
RuntimeError: Failed to import transformers.pipelines because of the following error (look up to see its traceback):
cannot import name 'PartialState' from 'accelerate' (/opt/conda/lib/python3.10/site-packages/accelerate/__init__.py)"
What I am trying to do:
"
from transformers import pipeline
generator = pipeline("mask-generation", model="facebook/sam-vit-huge", device=0)
"
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Run the following lines:

```python
from transformers import pipeline

generator = pipeline("mask-generation", model="facebook/sam-vit-huge", device=0)
```
### Expected behavior
The model should import as per this notebook in official tutorials:
https://github.com/huggingface/notebooks/blob/main/examples/automatic_mask_generation.ipynb
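For reference, the workaround reported in the comment thread is to upgrade `accelerate` in the environment (the exact minimum version required is an assumption here; the thread simply upgrades to the latest release):

```bash
pip install -U accelerate
```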
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23340/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23340/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23339
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23339/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23339/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23339/events
|
https://github.com/huggingface/transformers/pull/23339
| 1,707,886,538 |
PR_kwDOCUB6oc5QZs8p
| 23,339 |
Use cu118 with cudnn >= 8.6 in docker file
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"We have the so called past CI which runs previous torch and tensorflow versions together the environment we set for them. \n\nIn this particular case, since torch version is not changed but using cu118 file, we don't really have any extra CI for torch 2.0 with cu117 after this PR. So no real promise. But this is already the case for our CI, we always fix a cuda and cudnn environment until we really have to change ๐"
] | 1,683 | 1,683 | 1,683 |
COLLABORATOR
| null |
# What does this PR do?
We use TF 2.12 after #22759 and #23293, but TF 2.12 requires CUDA 11.8 and cuDNN 8.6 (or higher) to work.
Currently, our CI has errors such as
```bash
Loaded runtime CuDNN library: 8.5.0 but source was compiled with: 8.6.0. CuDNN library needs to have matching major version and equal or higher minor version. If using a binary install, upgrade your CuDNN library. If building from sources, make sure the library loaded at runtime is compatible with the version specified during compile configuration.
`UNIMPLEMENTED: DNN library is not found.`.
```
This PR uses a new base image for some docker files. We also have to use `cu118` for the torch installation with this new base image.
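As an illustration only (not the exact diff in this PR), switching the torch install to wheels built against CUDA 11.8 typically looks like the line below; the index URL is the standard PyTorch wheel index and is an assumption here, not copied from the PR:

```bash
# hedged sketch: install torch wheels built for CUDA 11.8
pip install --no-cache-dir torch --index-url https://download.pytorch.org/whl/cu118
```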
Other docker files (those with DeepSpeed stuff) are not changed in this PR - better to see what happens with this change first, then apply it to the other files.
I ran some previously failing tests and they pass now. We still need to watch whether the whole suite (doctest) passes on Monday.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23339/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23339/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23339",
"html_url": "https://github.com/huggingface/transformers/pull/23339",
"diff_url": "https://github.com/huggingface/transformers/pull/23339.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23339.patch",
"merged_at": 1683921495000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23338
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23338/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23338/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23338/events
|
https://github.com/huggingface/transformers/issues/23338
| 1,707,846,911 |
I_kwDOCUB6oc5ly6z_
| 23,338 |
AutoTokenizer registration not working as expected
|
{
"login": "william-cerebras",
"id": 69158333,
"node_id": "MDQ6VXNlcjY5MTU4MzMz",
"avatar_url": "https://avatars.githubusercontent.com/u/69158333?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/william-cerebras",
"html_url": "https://github.com/william-cerebras",
"followers_url": "https://api.github.com/users/william-cerebras/followers",
"following_url": "https://api.github.com/users/william-cerebras/following{/other_user}",
"gists_url": "https://api.github.com/users/william-cerebras/gists{/gist_id}",
"starred_url": "https://api.github.com/users/william-cerebras/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/william-cerebras/subscriptions",
"organizations_url": "https://api.github.com/users/william-cerebras/orgs",
"repos_url": "https://api.github.com/users/william-cerebras/repos",
"events_url": "https://api.github.com/users/william-cerebras/events{/privacy}",
"received_events_url": "https://api.github.com/users/william-cerebras/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"I was able to reproduce the error. I will take a look at this problem.",
"Hi @william-cerebras @ArthurZucker! \r\nI have done some quick debugging and the problem is really simple. In `configuration_auto.py` we have `OrderedDict` `CONFIG_MAPPING_NAMES` as well as `CONFIG_MAPPING = _LazyConfigMapping(CONFIG_MAPPING_NAMES)`. When we register new config we are calling `AutoConfig.register` which underneath updates `CONFIG_MAPPING` by calling `CONFIG_MAPPING.register`. `CONFIG_MAPPING` is an instance of `_LazyConfigMapping` class which has two variables `self._mapping` which is just `CONFIG_MAPPING_NAMES` as well as `self._extra_content`. When `CONFIG_MAPPING.register` is being called it is updating `self._extra_content` dict and not `self._mapping` itself. But later in the code when we call `AutoTokenizer.from_pretrained` we search for config class name to convert it into corresponding model type. It is done inside `config_class_to_model_type` function. Now to the clue, inside this function we iterate over `CONFIG_MAPPING_NAMES` which as I pointed out earlier is not being updated while registering new config. So in order to make it work we can do a simple fix: \r\nChange this:\r\n```python\r\ndef config_class_to_model_type(config):\r\n \"\"\"Converts a config class name to the corresponding model type\"\"\"\r\n for key, cls in CONFIG_MAPPING_NAMES.items():\r\n if cls == config:\r\n return key\r\n return None\r\n```\r\nto this:\r\n```python\r\ndef config_class_to_model_type(config):\r\n \"\"\"Converts a config class name to the corresponding model type\"\"\"\r\n for key, cls in CONFIG_MAPPING.items():\r\n if cls.__name__ == config:\r\n return key\r\n return None\r\n```\r\nTo make it clear why this will work is because when we call `CONFIG_MAPPING.items()` it underneath merges `self._mapping` and `self._extra_content` and as a result includes newly registered config:\r\n```python\r\ndef items(self):\r\n return [(k, self[k]) for k in self._mapping.keys()] + list(self._extra_content.items())\r\n```",
"cc @sgugger ",
"Thanks for the ping @Bearnardd. The change you suggest is a bit too strong as it will actually go import everything config to build `CONFIG_MAPPING.items()`. I think we can keep the check on the names as it is, then add a check looping `CONFIG_MAPPING_extra_content` while keeping the spirit of your fix (if that makes sense).\r\n\r\nI can work on that or you can open a PR if you prefer @Bearnardd ",
"Hi @sgugger! I will fix that",
"Thanks for the fix @Bearnardd @sgugger!"
] | 1,683 | 1,685 | 1,685 |
NONE
| null |
### System Info
- `transformers` version: 4.29.1
- Platform: Linux-3.10.0-1160.80.1.el7.x86_64-x86_64-with-glibc2.17
- Python version: 3.9.2
- Huggingface_hub version: 0.14.1
- Safetensors version: not installed
- PyTorch version (GPU?): 1.13.1+cu117 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
@ArthurZucker this seems to be an issue with `AutoTokenizer` registration, so maybe you're the right person to take a look?
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
- Create a directory containing the `config.json`, `pytorch_model.bin`, and `tokenizer.json` from [gpt2](https://huggingface.co/gpt2/tree/main)
- Set `"model_type": "example"` in the `config.json`
- Run the following code excerpt, replacing the dummy path with the path to the prepared directory
```python
from transformers import (
GPT2LMHeadModel,
GPT2Config,
AutoConfig,
AutoModelForCausalLM,
AutoTokenizer,
)
from transformers.tokenization_utils_fast import PreTrainedTokenizerFast
from transformers.models.auto.tokenization_auto import (
CONFIG_MAPPING_NAMES, TOKENIZER_MAPPING
)
from tokenizers import Tokenizer
class ExampleConfig(GPT2Config):
model_type = "example"
class ExampleModel(GPT2LMHeadModel):
config_class = ExampleConfig
ExampleTokenizer = PreTrainedTokenizerFast
AutoConfig.register("example", ExampleConfig)
AutoModelForCausalLM.register(ExampleConfig, ExampleModel)
AutoTokenizer.register(ExampleConfig, fast_tokenizer_class=ExampleTokenizer)
print(", ".join(c.__name__ for c in TOKENIZER_MAPPING))
print(CONFIG_MAPPING_NAMES)
pretrain_path = "/path/to/downloaded/artifacts"
config = AutoConfig.from_pretrained(pretrain_path) # This works just fine
model = AutoModelForCausalLM.from_pretrained(pretrain_path) # This works just fine
tokenizer = AutoTokenizer.from_pretrained(pretrain_path) # This throws an exception
```
A few things to note about the behavior of this script. First, as noted in the comments, loading the `AutoTokenizer` throws an error. The error message I see is
```
Traceback (most recent call last):
File "[base_path]/autotokenizer_reproducer.py", line 36, in <module>
tokenizer = AutoTokenizer.from_pretrained(pretrain_path) # This throws an exception
File "[path_to_conda_env]/lib/python3.9/site-packages/transformers/models/auto/tokenization_auto.py", line 721, in from_pretrained
raise ValueError(
ValueError: Unrecognized configuration class <class '__main__.ExampleConfig'> to build an AutoTokenizer.
Model type should be one of AlbertConfig, AlignConfig, BartConfig, BertConfig, BertGenerationConfig, BigBirdConfig, BigBirdPegasusConfig, BioGptConfig, BlenderbotConfig, BlenderbotSmallConfig, BlipConfig, Blip2Config, BloomConfig, BridgeTowerConfig, CamembertConfig, CanineConfig, ChineseCLIPConfig, ClapConfig, CLIPConfig, CLIPSegConfig, CodeGenConfig, ConvBertConfig, CpmAntConfig, CTRLConfig, Data2VecTextConfig, DebertaConfig, DebertaV2Config, DistilBertConfig, DPRConfig, ElectraConfig, ErnieConfig, ErnieMConfig, EsmConfig, FlaubertConfig, FNetConfig, FSMTConfig, FunnelConfig, GitConfig, GPT2Config, GPT2Config, GPTBigCodeConfig, GPTNeoConfig, GPTNeoXConfig, GPTNeoXJapaneseConfig, GPTJConfig, GPTSanJapaneseConfig, GroupViTConfig, HubertConfig, IBertConfig, JukeboxConfig, LayoutLMConfig, LayoutLMv2Config, LayoutLMv3Config, LEDConfig, LiltConfig, LlamaConfig, LongformerConfig, LongT5Config, LukeConfig, LxmertConfig, M2M100Config, MarianConfig, MBartConfig, MegaConfig, MegatronBertConfig, MgpstrConfig, MobileBertConfig, MPNetConfig, MT5Config, MvpConfig, NezhaConfig, NllbMoeConfig, NystromformerConfig, OneFormerConfig, OpenAIGPTConfig, OPTConfig, OwlViTConfig, PegasusConfig, PegasusXConfig, PerceiverConfig, Pix2StructConfig, PLBartConfig, ProphetNetConfig, QDQBertConfig, RagConfig, RealmConfig, ReformerConfig, RemBertConfig, RetriBertConfig, RobertaConfig, RobertaPreLayerNormConfig, RoCBertConfig, RoFormerConfig, RwkvConfig, Speech2TextConfig, Speech2Text2Config, SpeechT5Config, SplinterConfig, SqueezeBertConfig, SwitchTransformersConfig, T5Config, TapasConfig, TransfoXLConfig, ViltConfig, VisualBertConfig, Wav2Vec2Config, Wav2Vec2ConformerConfig, WhisperConfig, XCLIPConfig, XGLMConfig, XLMConfig, XLMProphetNetConfig, XLMRobertaConfig, XLMRobertaXLConfig, XLNetConfig, XmodConfig, YosoConfig, ExampleConfig.
```
When I print out `TOKENIZER_MAPPING`, I see `ExampleConfig` in it (as in the error message), meaning that the tokenizer registration indeed occurred. However, when I print out `CONFIG_MAPPING_NAMES`, which is what the failing check iterates over, `ExampleConfig` is not in that dictionary.
I have played around with a few different ways of creating the tokenizer class (e.g. actually subclassing `PreTrainedTokenizerFast` instead of just aliasing it), but none of these modifications worked, and the issue seems to be with the registration part rather than the choice of tokenizer.
### Expected behavior
I would expect the `AutoTokenizer.from_pretrained(...)` line from above to run without error
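For context, the comment thread traces this to `config_class_to_model_type` in `configuration_auto.py`, which only loops over `CONFIG_MAPPING_NAMES` and therefore never sees configs registered at runtime. Below is a sketch in the spirit of the fix discussed there (also checking the runtime-registered entries kept in `CONFIG_MAPPING._extra_content`); the exact shape of the final fix is an assumption:

```python
def config_class_to_model_type(config):
    """Converts a config class name to the corresponding model type."""
    for key, cls in CONFIG_MAPPING_NAMES.items():
        if cls == config:
            return key
    # also check configs registered at runtime (sketch of the fix discussed in the thread)
    for key, cls in CONFIG_MAPPING._extra_content.items():
        if cls.__name__ == config:
            return key
    return None
```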
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23338/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23338/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23337
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23337/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23337/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23337/events
|
https://github.com/huggingface/transformers/issues/23337
| 1,707,777,500 |
I_kwDOCUB6oc5lyp3c
| 23,337 |
EncoderDecoderModel forward decoder_attention_mask can't execute the default behavior mentioned in the document
|
{
"login": "efsotr",
"id": 104755879,
"node_id": "U_kgDOBj5ypw",
"avatar_url": "https://avatars.githubusercontent.com/u/104755879?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/efsotr",
"html_url": "https://github.com/efsotr",
"followers_url": "https://api.github.com/users/efsotr/followers",
"following_url": "https://api.github.com/users/efsotr/following{/other_user}",
"gists_url": "https://api.github.com/users/efsotr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/efsotr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/efsotr/subscriptions",
"organizations_url": "https://api.github.com/users/efsotr/orgs",
"repos_url": "https://api.github.com/users/efsotr/repos",
"events_url": "https://api.github.com/users/efsotr/events{/privacy}",
"received_events_url": "https://api.github.com/users/efsotr/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @efsotr \r\nThanks for the issue, indeed there seems to be a typo, one could replace the docstring with the correct behavior (the default mask will be created by the decoder)\r\nDo you want to open a Pull Request to address these changes?\r\nThanks!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,683 | 1,687 | 1,687 |
NONE
| null |
@ArthurZucker @younesbelkada
In `EncoderDecoderModel.forward`, `decoder_attention_mask` does not get the default behavior described in the documentation.
For example, take an `EncoderDecoderModel` composed of (`BertModel`, `BertLMHeadModel`).
In `EncoderDecoderModel.forward`:
`decoder_attention_mask` is passed directly to `self.decoder` as `attention_mask`. [code link](https://github.com/huggingface/transformers/blob/a3975f94f3a090a54ed4ec78ab736ce6aaee6742/src/transformers/models/encoder_decoder/modeling_encoder_decoder.py#LL616C9-L629C10)
~~~python
# Decode
decoder_outputs = self.decoder(
input_ids=decoder_input_ids,
attention_mask=decoder_attention_mask,
encoder_hidden_states=encoder_hidden_states,
encoder_attention_mask=attention_mask,
inputs_embeds=decoder_inputs_embeds,
output_attentions=output_attentions,
output_hidden_states=output_hidden_states,
use_cache=use_cache,
past_key_values=past_key_values,
return_dict=return_dict,
**kwargs_decoder,
)
~~~
In `BertLMHeadModel.forward`:
`attention_mask` is passed directly to `self.bert` as `attention_mask`. [code link](https://github.com/huggingface/transformers/blob/7f8b909189547944617741d8d3c6c84504701693/src/transformers/models/bert/modeling_bert.py#L1234)
~~~python
outputs = self.bert(
input_ids,
attention_mask=attention_mask,
token_type_ids=token_type_ids,
position_ids=position_ids,
head_mask=head_mask,
inputs_embeds=inputs_embeds,
encoder_hidden_states=encoder_hidden_states,
encoder_attention_mask=encoder_attention_mask,
past_key_values=past_key_values,
use_cache=use_cache,
output_attentions=output_attentions,
output_hidden_states=output_hidden_states,
return_dict=return_dict,
)
~~~
In `BertModel.forward`:
`attention_mask` is filled with ones if it is None. [code link](https://github.com/huggingface/transformers/blob/7f8b909189547944617741d8d3c6c84504701693/src/transformers/models/bert/modeling_bert.py#LL980C9-L981C108)
~~~python
if attention_mask is None:
attention_mask = torch.ones(((batch_size, seq_length + past_key_values_length)), device=device)
~~~
So if `decoder_attention_mask` is None in `EncoderDecoderModel.forward`, it is simply filled with ones,
which contradicts the docstring at [code link](https://github.com/huggingface/transformers/blob/a3975f94f3a090a54ed4ec78ab736ce6aaee6742/src/transformers/models/encoder_decoder/modeling_encoder_decoder.py#LL106C9-L108C32):
~~~
decoder_attention_mask (`torch.BoolTensor` of shape `(batch_size, target_sequence_length)`, *optional*):
Default behavior: generate a tensor that ignores pad tokens in `decoder_input_ids`. Causal mask will also
be used by default.
~~~
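For reference, here is a minimal sketch of what the documented default (a mask that ignores pad tokens in `decoder_input_ids`) could look like. This is only an illustration of the docstring's claim, not code from the library, and it assumes the decoder config exposes a usable `pad_token_id`:

~~~python
# hypothetical sketch of the behavior the docstring describes
if decoder_attention_mask is None and decoder_input_ids is not None:
    pad_token_id = self.config.decoder.pad_token_id  # assumption: pad_token_id is set
    decoder_attention_mask = (decoder_input_ids != pad_token_id).long()
~~~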
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23337/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23337/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23336
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23336/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23336/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23336/events
|
https://github.com/huggingface/transformers/issues/23336
| 1,707,732,691 |
I_kwDOCUB6oc5lye7T
| 23,336 |
to is not supported for `8-bit` models
|
{
"login": "lborcard",
"id": 51543572,
"node_id": "MDQ6VXNlcjUxNTQzNTcy",
"avatar_url": "https://avatars.githubusercontent.com/u/51543572?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lborcard",
"html_url": "https://github.com/lborcard",
"followers_url": "https://api.github.com/users/lborcard/followers",
"following_url": "https://api.github.com/users/lborcard/following{/other_user}",
"gists_url": "https://api.github.com/users/lborcard/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lborcard/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lborcard/subscriptions",
"organizations_url": "https://api.github.com/users/lborcard/orgs",
"repos_url": "https://api.github.com/users/lborcard/repos",
"events_url": "https://api.github.com/users/lborcard/events{/privacy}",
"received_events_url": "https://api.github.com/users/lborcard/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @younesbelkada ",
"hi @lborcard \r\nIndeed the `to` operation is not supported for 8bit models as users will most likely encounter unexpected behaviour.\r\nWhat version of `transformers` are you using?\r\nA fix has been introduced in https://github.com/huggingface/transformers/pull/21479 and has been documented [here](https://huggingface.co/docs/transformers/pipeline_tutorial#using-pipeline-on-large-models-with-accelerate) - everything should work fine if you use `transformers>=4.29.0`",
"Hi @younesbelkada ,\r\n\r\nThank you for your answer, I was using version 4.29 but I will try a newer version.\r\n have a good day",
"@lborcard \r\ncan you try:\r\n```python\r\npipeline = pipeline(\"text-generation\",tokenizer = tokenizer, model=model, device=0)\r\n```",
"I'm still getting this error on the latest version of the transformer. Any work around?",
"@mrhimanshu can you share an handy reproducible snippet?",
"I'm also still getting this error when using transformers==4.3.0.0 version. Anyone figured any work-around?",
"hi everyone,\r\nthanks for raising up the issue, I would greatly appreciate if you could share a reproducible snippet as I can't do anything without it",
"Thanks @younesbelkada , the error is in the file \r\n/python3.8/site-packages/transformers/modeling_utils.py\r\n def half(self, *args):\r\n # Checks if the model has been loaded in 8-bit\r\n if getattr(self, \"is_quantized\", False):\r\n raise ValueError(\r\n \"`.half()` is not supported for `4-bit` or `8-bit` models. Please use the model as it is, since the\"\r\n \" model has already been casted to the correct `dtype`.\"\r\n )\r\n else:\r\n return super().half(*args)\r\n \r\n \r\n File \"/python3.8/site-packages/transformers/modeling_utils.py\", line 1907, in half\r\n raise ValueError(\r\nValueError: `.half()` is not supported for `4-bit` or `8-bit` models. Please use the model as it is, since the model has already been casted to the correct `dtype`. \r\n\r\nPlease check it out. ",
"Hi @22Mukesh22 \r\nThank you for your message, I think you are somehow calling `.half` in your script , can you share a handy small snippet to reproduce?\r\n",
"This .half only comes in picture when passing load_in_8bit=True , else if we remove this from the script , it gives memory error . \r\nCuda out of memory , as I have P40 24 GB 4 GPU. ",
"I am usign the same script \"https://github.com/Xirider/finetune-gpt2xl\" to finetune the starcoder model.\r\n ",
"Looking at the repo you shared I think that you are trying to use DeepSpeed + bitsandbytes and purely fine tune the entire model in 8bit or 4bit. This is not supported. \r\n\r\nYou should look into PEFT library if you want to fine-tune the model in 8bit or 4bit to fine-tune adapters on top of the base model (which leads to the same results), some examples here: https://github.com/huggingface/peft/tree/main/examples/int8_training \r\n\r\nAnd the documentation is here: https://huggingface.co/docs/peft/index",
"Thanks a lot , I will try and update ",
"ValueError: You can't train a model that has been loaded in 8-bit precision on multiple devices in any distributed mode. In order to use 8-bit models that have been loaded across multiple GPUs the solution is to use Naive Pipeline Parallelism. Therefore you should not specify that you are under any distributed regime in your accelerate config.",
"@22Mukesh22 can you please update your `accelerate` version?\r\n```bash\r\npip install --upgrade accelerate\r\n```\r\nRelated: https://github.com/huggingface/accelerate/pull/1523",
"Okay Sure @younesbelkada , i will re run and update if issue still comes.",
"@younesbelkada still the issues remaisn same . I have upgraded accelerate . It doesn't works ",
"Hi @22Mukesh22\r\nHow do you run your script? can you share the accelerate config you are using? Also let's open a different ticket on accelerate for the issue you are facing and ping me there\r\nThanks! ",
"for everyone stumbling into this error, my solution was to use accelerate 0.20.3 and transformers 4.30.2 (not necceserally needed). With those versions the training started correctly.",
"> for everyone stumbling into this error, my solution was to use accelerate 0.20.3 and transformers 4.30.2 (not necceserally needed). With those versions the training started correctly.\r\n\r\nThanks working!",
"> for everyone stumbling into this error, my solution was to use accelerate 0.20.3 and transformers 4.30.2 (not necceserally needed). With those versions the training started correctly.\r\n\r\nThank you! This worked for me. I'll try and investigate what went wrong. For reference, this was the traceback:\r\n```\r\nFile \"finetune.py\", line 552, in main\r\n model = AutoModelForCausalLM.from_pretrained(\r\n File \"/usr/local/lib/python3.8/dist-packages/transformers/models/auto/auto_factory.py\", line 484, in from_pretrained\r\n return model_class.from_pretrained(\r\n File \"/usr/local/lib/python3.8/dist-packages/transformers/modeling_utils.py\", line 2937, in from_pretrained\r\n dispatch_model(model, **kwargs)\r\n File \"/usr/local/lib/python3.8/dist-packages/accelerate/big_modeling.py\", line 391, in dispatch_model\r\n model.to(device)\r\n File \"/usr/local/lib/python3.8/dist-packages/transformers/modeling_utils.py\", line 1897, in to\r\n raise ValueError(\r\nValueError: `.to` is not supported for `4-bit` or `8-bit` models. Please use the model as it is, since the model has already been set to the correct devices and casted to the correct `dtype`.\r\n```",
"still suffering this issue with accelerate 0.20.3 and transformers 4.30.2, getting \"\r\nValueError: `.to` is not supported for `4-bit` or `8-bit` models. Please use the model as it is, since the model has already been set to the correct devices and casted to the correct `dtype`.\r\n\"",
"add that i'm using the bnb_4bit, as follows\r\n```\r\nquant_config = BitsAndBytesConfig(\r\n load_in_4bit=True,\r\n bnb_4bit_use_double_quant=True,\r\n bnb_4bit_quant_type=\"nf4\",\r\n bnb_4bit_compute_dtype=torch.bfloat16\r\n)\r\n```\r\n",
"> ๆทปๅ ๆๆญฃๅจไฝฟ็จbnb_4bit๏ผๅฆไธๆ็คบ\r\n> \r\n> ```\r\n> quant_config = BitsAndBytesConfig(\r\n> load_in_4bit=True,\r\n> bnb_4bit_use_double_quant=True,\r\n> bnb_4bit_quant_type=\"nf4\",\r\n> bnb_4bit_compute_dtype=torch.bfloat16\r\n> )\r\n> ```\r\n\r\nI also meet the problem when I set CUDA_VISIBLE_DEVICES=\"0\" in ,sh file. \r\nHowever , when I delete this command or I set CUDA_VISIBLE_DEVICES=\"0,1\" or \"0,1,2,3\" . It can work.\r\n(But I want to save GPU memory and qlora paper say it can work on one GPU ",
"> > ๆทปๅ ๆๆญฃๅจไฝฟ็จbnb_4bit๏ผๅฆไธๆ็คบ\r\n> > ```\r\n> > quant_config = BitsAndBytesConfig(\r\n> > load_in_4bit=True,\r\n> > bnb_4bit_use_double_quant=True,\r\n> > bnb_4bit_quant_type=\"nf4\",\r\n> > bnb_4bit_compute_dtype=torch.bfloat16\r\n> > )\r\n> > ```\r\n> \r\n> I also meet the problem when I set CUDA_VISIBLE_DEVICES=\"0\" in ,sh file. However , when I delete this command or I set CUDA_VISIBLE_DEVICES=\"0,1\" or \"0,1,2,3\" . It can work. (But I want to save GPU memory and qlora paper say it can work on one GPU\r\n\r\n\r\n\r\n> for everyone stumbling into this error, my solution was to use accelerate 0.20.3 and transformers 4.30.2 (not necceserally needed). With those versions the training started correctly.\r\n\r\n accelerate 0.20.3 works on one GPU and mult GPU(<=4)",
"I encountered similar issue. I tried CUDA_VISIBLE_DEVICES=1,2,3. But 8-bit llama is automatically loaded to cuda:0. and I cannot apply \".to('cuda:1') \" which gives me ths error, 'to is not supported ....' \r\n",
"Even if I use non 8-bit model, the model is still automatically loaded to cuda:0 when i sepcify CUDA_VISIBLE_DEVICES=1 sh run.sh. \r\n\r\n model = LLaMAForCausalLM.from_pretrained(\r\n \"decapoda-research/llama-7b-hf\",\r\n load_in_8bit=False,\r\n torch_dtype=torch.float16,\r\n device_map=\"auto\",\r\n )\r\n \r\n print('model_cuda_device {}'.format(model.device))\r\n\r\n//output:\r\nmodel_cuda_device cuda:0\r\n\r\n",
"@andotalao24 \r\nTo check the devices with a model that has been loaded with `device_map=xxx` you need to call `set(model.hf_device_map.values())`",
"I have been having the same issue, but I don't know if this is related to hardware. because I got the Error in an 8xA100 with cuda 11.8 but work perfectly in an 8xA100SMX cuda 11.7 (RunPod machines)"
] | 1,683 | 1,705 | 1,690 |
NONE
| null |
### System Info
Hi,
I am using a Llama model and wanted to add it to a `pipeline`, but it throws an error when building the pipeline.
Does anyone have a solution to this?
thank you!
@Narsil
### Who can help?
@Narsil
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
from typing import Any, List, Mapping, Optional

from langchain.llms.base import LLM
from transformers import AutoModelForCausalLM, pipeline

# Model (model_name, max_memory, tokenizer and num_output are defined earlier in my script)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    device_map='auto',
    load_in_8bit=True,
    max_memory=max_memory)

# llm class
class CustomLLM(LLM):

    pipeline = pipeline("text-generation", tokenizer=tokenizer, model=model, device="cuda:0")

    def _call(self, prompt: str, stop: Optional[List[str]] = None) -> str:
        prompt_length = len(prompt)
        response = self.pipeline(prompt, max_new_tokens=num_output)[0]["generated_text"]

        # only return newly generated tokens
        return response[prompt_length:]

    @property
    def _identifying_params(self) -> Mapping[str, Any]:
        return {"name_of_model": self.model_name}

    @property
    def _llm_type(self) -> str:
        return "custom"
```
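A minimal sketch of a possible workaround (not verified in this thread; it assumes `model_name` and `tokenizer` are defined as above): when the model is loaded with `load_in_8bit=True` and `device_map='auto'`, it is already dispatched to the right devices, so the `device` argument can simply be dropped from the `pipeline` call.

```python
from transformers import AutoModelForCausalLM, pipeline

# the 8-bit model is dispatched across devices by accelerate at load time
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    device_map="auto",
    load_in_8bit=True,
)

# note: no `device=...` argument here; passing one makes the pipeline try to
# move (`.to()`) a model that must not be moved after 8-bit loading
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
```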
" model has already been set to the correct devices and casted to the correct `dtype`."
### Expected behavior
1879 # Checks if the model has been loaded in 8-bit
1880 if getattr(self, "is_loaded_in_8bit", False):
-> 1881 raise ValueError(
1882 ".to is not supported for 8-bit models. Please use the model as it is, since the"
1883 " model has already been set to the correct devices and casted to the correct dtype."
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23336/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23336/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23335
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23335/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23335/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23335/events
|
https://github.com/huggingface/transformers/pull/23335
| 1,707,724,504 |
PR_kwDOCUB6oc5QZKEx
| 23,335 |
Fix chat prompt in HFAgent
|
{
"login": "IvanSedykh",
"id": 46825716,
"node_id": "MDQ6VXNlcjQ2ODI1NzE2",
"avatar_url": "https://avatars.githubusercontent.com/u/46825716?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/IvanSedykh",
"html_url": "https://github.com/IvanSedykh",
"followers_url": "https://api.github.com/users/IvanSedykh/followers",
"following_url": "https://api.github.com/users/IvanSedykh/following{/other_user}",
"gists_url": "https://api.github.com/users/IvanSedykh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/IvanSedykh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/IvanSedykh/subscriptions",
"organizations_url": "https://api.github.com/users/IvanSedykh/orgs",
"repos_url": "https://api.github.com/users/IvanSedykh/repos",
"events_url": "https://api.github.com/users/IvanSedykh/events{/privacy}",
"received_events_url": "https://api.github.com/users/IvanSedykh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,683 | 1,684 | 1,684 |
CONTRIBUTOR
| null |
# What does this PR do?
There was a bug in formatting prompts in the chat mode. Actually, user-provided custom prompts were never used.
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger, please review this PR
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23335/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23335/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23335",
"html_url": "https://github.com/huggingface/transformers/pull/23335",
"diff_url": "https://github.com/huggingface/transformers/pull/23335.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23335.patch",
"merged_at": 1684243139000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23334
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23334/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23334/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23334/events
|
https://github.com/huggingface/transformers/issues/23334
| 1,707,628,928 |
I_kwDOCUB6oc5lyFmA
| 23,334 |
gpt2-large and gpt2-xl behave strangely with pad tokens
|
{
"login": "boblus",
"id": 42530285,
"node_id": "MDQ6VXNlcjQyNTMwMjg1",
"avatar_url": "https://avatars.githubusercontent.com/u/42530285?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/boblus",
"html_url": "https://github.com/boblus",
"followers_url": "https://api.github.com/users/boblus/followers",
"following_url": "https://api.github.com/users/boblus/following{/other_user}",
"gists_url": "https://api.github.com/users/boblus/gists{/gist_id}",
"starred_url": "https://api.github.com/users/boblus/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/boblus/subscriptions",
"organizations_url": "https://api.github.com/users/boblus/orgs",
"repos_url": "https://api.github.com/users/boblus/repos",
"events_url": "https://api.github.com/users/boblus/events{/privacy}",
"received_events_url": "https://api.github.com/users/boblus/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @boblus \r\nIn order for your script to work, you need to properly set the attention mask by masking out the padding tokens and call generate with the attention mask",
"Do you mean `tokenizer.add_special_tokens({'pad_token': pad_token})`?\r\nI have done this. I forgot to put that in my initial comment. Just added it.",
"I think that attention masks are created only if you pass multiple sentences to the tokenizer, in your case I think you may need to create it manually unless I am wrong and there is a simpler solution.\r\n\r\nThe below script worked for me:\r\n```python\r\nimport torch\r\nfrom transformers import GPT2Tokenizer, GPT2LMHeadModel\r\n\r\npretrained = 'gpt2-large'\r\ndevice = 'cuda'\r\nmodel = GPT2LMHeadModel.from_pretrained(pretrained).to('cuda')\r\ntokenizer = GPT2Tokenizer.from_pretrained(pretrained)\r\n\r\ntorch.cuda.manual_seed_all(2266)\r\n\r\ninput0 = '<|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|>Austin knew Quinn intimately and they slept together many times. Why did Austin do this? (0) Hated Quinn, (1) Found QUinn attractive, (2) Ask Quinn on a date\\nANSWER:'\r\n\r\ntokenizer.pad_token = tokenizer.eos_token\r\ntokenizer.pad_token_id = tokenizer.eos_token_id\r\n\r\ninputs = tokenizer(input0, return_tensors='pt').to(device)\r\ninputs[\"attention_mask\"] = inputs[\"input_ids\"].ne(tokenizer.pad_token_id).long().to(device)\r\nwith torch.no_grad():\r\n output0 = model.generate(**inputs, max_new_tokens=16, top_k=20, pad_token_id=50256, eos_token_id=50256, do_sample=True, temperature=0.01)\r\n print(tokenizer.decode(output0[0]))\r\n\r\ninput1 = 'Austin knew Quinn intimately and they slept together many times. Why did Austin do this? (0) Hated Quinn, (1) Found QUinn attractive, (2) Ask Quinn on a date\\nANSWER:'\r\ninput_ids = tokenizer(input1, return_tensors='pt').input_ids.to(device)\r\nwith torch.no_grad():\r\n output1 = model.generate(input_ids, max_new_tokens=8, top_k=20, pad_token_id=50256, eos_token_id=50256, do_sample=True, temperature=0.01)\r\n print(tokenizer.decode(output1[0]))\r\n```",
"Thanks @younesbelkada, it works now. I thought setting `pad_token_id` in `model.generate()` is equivalent to setting the attention mask. And my codes worked well with other models, that's why it confused me.",
"linked to #22155 and #21080. GPT2 is an old model and does not necessarly create everything by default like our recent models"
] | 1,683 | 1,685 | 1,683 |
NONE
| null |
### System Info
- `transformers` version: 4.18.0
- Platform: Linux-4.18.0-425.13.1.el8_7.x86_64-x86_64-with-centos-8.7-Green_Obsidian
- Python version: 3.6.8
- Huggingface_hub version: 0.4.0
- PyTorch version (GPU?): 1.10.2+cu113 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: True
- Using distributed or parallel set-up in script?: False
### Who can help?
@ArthurZucker @younesbelkada
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```python
import torch
from transformers import GPT2LMHeadModel, AutoTokenizer

device = 'cuda'
pretrained = 'gpt2-large'
model = GPT2LMHeadModel.from_pretrained(pretrained).to(device)
tokenizer = AutoTokenizer.from_pretrained(pretrained, padding_side='left')
tokenizer.add_special_tokens({'pad_token': '<|endoftext|>'})
torch.cuda.manual_seed_all(2266)
input0 = '<|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|>Austin knew Quinn intimately and they slept together many times. Why did Austin do this? (0) Hated Quinn, (1) Found QUinn attractive, (2) Ask Quinn on a date\nANSWER:'
input_ids = tokenizer(input0, return_tensors='pt').input_ids.to(device)
with torch.no_grad():
    output0 = model.generate(input_ids, max_new_tokens=16, top_k=20, pad_token_id=50256, eos_token_id=50256, do_sample=True, temperature=0.01)
input1 = 'Austin knew Quinn intimately and they slept together many times. Why did Austin do this? (0) Hated Quinn, (1) Found QUinn attractive, (2) Ask Quinn on a date\nANSWER:'
input_ids = tokenizer(input1, return_tensors='pt').input_ids.to(device)
with torch.no_grad():
    output1 = model.generate(input_ids, max_new_tokens=8, top_k=20, pad_token_id=50256, eos_token_id=50256, do_sample=True, temperature=0.01)
```
### Expected behavior
```
output0 = '\nThe The The The The'
output1 = " Austin was jealous of Quinn's relationship"
```
pad_token works just fine with other gpt models such as gpt2, and gpt2-small.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23334/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23334/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23333
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23333/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23333/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23333/events
|
https://github.com/huggingface/transformers/issues/23333
| 1,707,586,057 |
I_kwDOCUB6oc5lx7IJ
| 23,333 |
ConvNextImageProcessor / ViTImageProcessor produce inf when do_rescale = False
|
{
"login": "guillermojp",
"id": 112631806,
"node_id": "U_kgDOBraf_g",
"avatar_url": "https://avatars.githubusercontent.com/u/112631806?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/guillermojp",
"html_url": "https://github.com/guillermojp",
"followers_url": "https://api.github.com/users/guillermojp/followers",
"following_url": "https://api.github.com/users/guillermojp/following{/other_user}",
"gists_url": "https://api.github.com/users/guillermojp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/guillermojp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/guillermojp/subscriptions",
"organizations_url": "https://api.github.com/users/guillermojp/orgs",
"repos_url": "https://api.github.com/users/guillermojp/repos",
"events_url": "https://api.github.com/users/guillermojp/events{/privacy}",
"received_events_url": "https://api.github.com/users/guillermojp/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @guillermojp, thanks for reporting this issue. \r\n\r\nThe image processor does convert the pixel between `[0, 1]` explicitly - it's happens in its own independent rescaling step (not within other logic) and has its own flag to control this behaviour. If you want the image to have its values set between 0-1 then `do_rescale` should be set to `True`. \r\n\r\nThe reason for `inf` is because of [this line](https://github.com/huggingface/transformers/blob/7f8b909189547944617741d8d3c6c84504701693/src/transformers/image_transforms.py#LL372C6-L372C6): the mean and std used to normalize the image are cast to the input image dtype. In this case, the image_std `[0.229, 0.224, 0.225]` when converted to `uint8` becomes `[0, 0, 0]`. Arguably this isn't obvious and perhaps we should think about possible warnings here when normalizing e.g. if the input is of an integer type. ",
"I think I misunderstood the documentation of the `ConvNextImageProcessor` to this regard, as the text description `do_rescale (bool, optional, defaults to True) โ Whether to rescale the image by the specified scale rescale_factor. Can be overriden by do_rescale in the preprocess method.` wasn't clear to me. Maybe I'd recommend generating a more comprehensive description in the documentation.\r\n\r\nMarking as closed"
] | 1,683 | 1,684 | 1,684 |
NONE
| null |
### System Info
- `transformers` version: 4.29.1
- Platform: Linux-5.15.0-1022-aws-x86_64-with-glibc2.31
- Python version: 3.9.4
- Huggingface_hub version: 0.14.1
- Safetensors version: not installed
- PyTorch version (GPU?): 2.0.1+cu117 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
@amyeroberts
### Information
- [x] The official example scripts
- [X] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```python
import numpy as np
from transformers import AutoImageProcessor
>>> processor = AutoImageProcessor.from_pretrained("microsoft/resnet-50")
>>> inputs = np.random.randint(0, 256, size=(224,224,3)).astype("uint8")
>>> processor(inputs, return_tensors="np", do_rescale=False)["pixel_values"]
array([[[[inf, inf, inf, ..., inf, inf, inf],
[inf, inf, inf, ..., inf, inf, inf],
[inf, inf, inf, ..., inf, inf, inf],
...,
[inf, inf, inf, ..., inf, inf, inf],
[inf, inf, inf, ..., inf, inf, inf],
[inf, inf, inf, ..., inf, inf, inf]],
[[inf, inf, inf, ..., inf, inf, inf],
[inf, inf, inf, ..., inf, inf, inf],
[inf, inf, inf, ..., inf, inf, inf],
...,
[inf, inf, inf, ..., nan, inf, inf],
[inf, inf, inf, ..., inf, inf, inf],
[inf, inf, inf, ..., inf, inf, inf]],
[[inf, inf, inf, ..., inf, inf, inf],
[inf, inf, inf, ..., inf, inf, inf],
[inf, inf, inf, ..., inf, inf, inf],
...,
[inf, inf, inf, ..., inf, inf, inf],
[inf, inf, inf, ..., inf, inf, inf],
[inf, inf, inf, ..., inf, inf, inf]]]])
```
### Expected behavior
AutoImageProcessor (whichever; I've tried a ConvNext and a ViT) should convert to the [0 - 1] range explicitly (specifically [here](https://github.com/huggingface/transformers/blob/v4.29.1/src/transformers/models/convnext/image_processing_convnext.py#L288), if you're counting) before `do_normalize`. It's currently done implicitly, requiring `do_rescale=True`; otherwise you get undefined behaviour and only a warning.
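For illustration, a small standalone sketch of where the `inf` values come from, following the explanation given in the comments above (the float mean/std get cast to the image's integer dtype); this is not ConvNext-specific code:

```python
import numpy as np

image = np.random.randint(0, 256, size=(224, 224, 3)).astype("uint8")
image_std = np.array([0.229, 0.224, 0.225])

# casting the float std to the image dtype truncates it to zero...
std_cast = image_std.astype(image.dtype)   # -> array([0, 0, 0], dtype=uint8)

# ...so the normalisation divides by zero and fills the output with inf
# (and nan wherever the pixel value itself is 0)
normalized = image.astype("float32") / std_cast
```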
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23333/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23333/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23332
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23332/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23332/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23332/events
|
https://github.com/huggingface/transformers/pull/23332
| 1,707,585,760 |
PR_kwDOCUB6oc5QYsD_
| 23,332 |
Compute the mask in-place, with less memory reads, and on CUDA on `XLNetLMHeadModel`
|
{
"login": "lezcano",
"id": 3291265,
"node_id": "MDQ6VXNlcjMyOTEyNjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/3291265?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lezcano",
"html_url": "https://github.com/lezcano",
"followers_url": "https://api.github.com/users/lezcano/followers",
"following_url": "https://api.github.com/users/lezcano/following{/other_user}",
"gists_url": "https://api.github.com/users/lezcano/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lezcano/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lezcano/subscriptions",
"organizations_url": "https://api.github.com/users/lezcano/orgs",
"repos_url": "https://api.github.com/users/lezcano/repos",
"events_url": "https://api.github.com/users/lezcano/events{/privacy}",
"received_events_url": "https://api.github.com/users/lezcano/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,683 | 1,683 | 1,683 |
CONTRIBUTOR
| null |
When working on TorchInductor, I realised that there was a part of `XLNetLMHeadModel` that was being compiled to CPU code.
This PR should allow this operation to be fused with other CUDA operations in `torch.compile`. It should also be faster in eager mode, as this implementation has a lower memory footprint.
Even if in-place operations turn out not to be allowed in a non-grad context, I still believe that doing ones + tril rather than ones + tril + zeros + cat should be faster, simply due to the number of memory reads/writes.
I tested that this code produces the same results for `0 <= qlen,mlen < 10` and `same_length in (True, False)`.
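As a rough illustration of the idea (a sketch only, not the exact diff in this PR, and it ignores the `same_length` branch): the attention mask can be materialised in a single allocation on the target device and filled with one in-place triangular op, instead of concatenating a zero block with a `triu` block.

```python
import torch

def attn_mask_sketch(qlen: int, mlen: int, device: torch.device) -> torch.Tensor:
    # equivalent to cat([zeros(qlen, mlen), triu(ones(qlen, qlen), diagonal=1)], dim=1):
    # position (i, j) is masked (== 1) only when j >= i + 1 + mlen
    mask = torch.ones(qlen, qlen + mlen, device=device)
    mask.triu_(diagonal=1 + mlen)
    return mask
```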
@ArthurZucker @younesbelkada
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23332/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23332/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23332",
"html_url": "https://github.com/huggingface/transformers/pull/23332",
"diff_url": "https://github.com/huggingface/transformers/pull/23332.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23332.patch",
"merged_at": 1683898537000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23331
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23331/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23331/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23331/events
|
https://github.com/huggingface/transformers/issues/23331
| 1,707,556,004 |
I_kwDOCUB6oc5lxzyk
| 23,331 |
RuntimeError: The size of tensor a (16) must match the size of tensor b (16000) at non-singleton dimension 2
|
{
"login": "Tylersuard",
"id": 41713505,
"node_id": "MDQ6VXNlcjQxNzEzNTA1",
"avatar_url": "https://avatars.githubusercontent.com/u/41713505?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Tylersuard",
"html_url": "https://github.com/Tylersuard",
"followers_url": "https://api.github.com/users/Tylersuard/followers",
"following_url": "https://api.github.com/users/Tylersuard/following{/other_user}",
"gists_url": "https://api.github.com/users/Tylersuard/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Tylersuard/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Tylersuard/subscriptions",
"organizations_url": "https://api.github.com/users/Tylersuard/orgs",
"repos_url": "https://api.github.com/users/Tylersuard/repos",
"events_url": "https://api.github.com/users/Tylersuard/events{/privacy}",
"received_events_url": "https://api.github.com/users/Tylersuard/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3081136536,
"node_id": "MDU6TGFiZWwzMDgxMTM2NTM2",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20Difficult%20Issue",
"name": "Good Difficult Issue",
"color": "684CC7",
"default": false,
"description": ""
}
] |
closed
| false | null |
[] |
[
"Hi @Tylersuard, thanks for reporting this issue. \r\n\r\nSo that we can best try and help you, could you update the notebook so that it contains the minimal logic to replicate the error and can be run out-of-the-box? As it stands, there's many blocks with comments; references to loading / processing data we don't have access to; doesn't currently have the reported error shown but does have many other errors. ",
"Sorry @amyeroberts , Here is the updated version: https://colab.research.google.com/drive/1TFI84P9W4VPhNLgEngxPN57RwzS0C4bG?usp=sharing",
"I think you're splitting your input sequence into chunks of length 16: https://github.com/huggingface/transformers/blob/v4.29.1/src/transformers/models/mega/modeling_mega.py#L1063",
"@OllieBroadhurst That is correct. As per the documentation (https://huggingface.co/docs/transformers/main/model_doc/mega) , I set the chunk_size equal to 16 and use_chunking to true, and the context length is a multiple of the chunk size. My problem is not solved.",
"What I mean is have you tried turning chunking off?",
"@OllieBroadhurst Thank you for your suggestion. I would likely run into out-of-memory errors, but I will try it.",
"Ok I tried it without chunking and I got out-of-memory errors.",
"This should still be adressed! Mega's forward pass might need some debugging. I can't do this fast, but keeping an eye on it! ",
"Did not have time to dive into this. Marking as good second issue in case community want to have a go! ",
"I would like to have a go at this @ArthurZucker!",
"Sure! ๐ ",
"I ran the notebook provided by @Tylersuard on an A6000 with the following settings:\r\n- With `chunk_size=32`: The RuntimeError still persists (I tried this to see if some other multiple of 16 would produce any different of a result)\r\n- With `use_chunking=False`: In this case, the forward pass appears to work fine, but another error is thrown because of the labels.\r\n\r\n Here is that error:\r\n ```Traceback (most recent call last):\r\n File \"/root/hf_trial/copy_of_hf_mega_music_for_issue.py\", line 166, in <module>\r\n trainer.train()\r\n File \"/opt/conda/lib/python3.10/site-packages/transformers/trainer.py\", line 1555, in train\r\n return inner_training_loop(\r\n File \"/opt/conda/lib/python3.10/site-packages/transformers/trainer.py\", line 1837, in _inner_training_loop\r\n tr_loss_step = self.training_step(model, inputs)\r\n File \"/opt/conda/lib/python3.10/site-packages/transformers/trainer.py\", line 2682, in training_step\r\n loss = self.compute_loss(model, inputs)\r\n File \"/opt/conda/lib/python3.10/site-packages/transformers/trainer.py\", line 2707, in compute_loss\r\n outputs = model(**inputs)\r\n File \"/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1501, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"/opt/conda/lib/python3.10/site-packages/transformers/models/mega/modeling_mega.py\", line 1772, in forward\r\n lm_loss = loss_fct(shifted_prediction_scores.view(-1, self.config.vocab_size), labels.view(-1))\r\n File \"/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1501, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"/opt/conda/lib/python3.10/site-packages/torch/nn/modules/loss.py\", line 1174, in forward\r\n return F.cross_entropy(input, target, weight=self.weight,\r\n File \"/opt/conda/lib/python3.10/site-packages/torch/nn/functional.py\", line 3029, in cross_entropy\r\n return torch._C._nn.cross_entropy_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index, label_smoothing)\r\nRuntimeError: \"nll_loss_forward_reduce_cuda_kernel_2d_index\" not implemented for 'Int'\r\n```\r\n\r\nNow this error is perhaps out of the scope of this issue so I will proceed to debug the forward pass with `use_chunking=True`\r\n\r\ncc @ArthurZucker, @amyeroberts ",
"Hi @ArthurZucker, I found what was causing the error as well as a 'potential' fix (I am not too sure about it so need some feedback on it!).\r\n\r\n## Cause\r\nThe error in Line [#866](https://github.com/huggingface/transformers/blob/015f8e110d270a0ad42de4ae5b98198d69eb1964/src/transformers/models/mega/modeling_mega.py#L866) was being caused because the `torch.matmul(query, key.transpose(2, 3))` was being divided by `lengths`, which was created in Line [#854-55](https://github.com/huggingface/transformers/blob/015f8e110d270a0ad42de4ae5b98198d69eb1964/src/transformers/models/mega/modeling_mega.py#L854-L855) from the `causal_mask`. \r\n\r\nWhen using a `chunk_size`, the `causal_mask` dimensions are of the form `[batch_size, 1, target_length, target_length]` which, after being summed using `causal_mask.sum(dim=-1, keepdim=True)` makes the `lengths` to be of the form: `[batch_size, 1, target_length, 1]`.\r\n\r\nIn reality, with a provided `chunk_size`, the `causal_mask` should be of the dim: `[batch_size, 1, chunk_size, chunk_size]` and in turn the `lengths` should be of the dim: `[batch_size, 1, chunk_size, 1]` for the aforementioned division to work.\r\n\r\n## Potential Fix\r\nI have created a PR that adds a simple condition to check if chunking is enabled, in which case it will make the `input_shape` to be `[batch_size, chunk_size]`.\r\n\r\nPS: Apologies for this rather odd-looking comment but there were just too many technicalities to explain here!",
"No worries and sounds good actually! "
] | 1,683 | 1,693 | 1,693 |
CONTRIBUTOR
| null |
### System Info
- `transformers` version: 4.30.0.dev0
- Platform: Linux-5.10.147+-x86_64-with-glibc2.31
- Python version: 3.10.11
- Huggingface_hub version: 0.14.1
- Safetensors version: not installed
- PyTorch version (GPU?): 2.0.0+cu118 (False)
- Tensorflow version (GPU?): 2.12.0 (False)
- Flax version (CPU?/GPU?/TPU?): 0.6.9 (cpu)
- Jax version: 0.4.8
- JaxLib version: 0.4.7
- Using GPU in script?: YES
- Using distributed or parallel set-up in script?: NO
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Run this notebook: https://colab.research.google.com/drive/1TFI84P9W4VPhNLgEngxPN57RwzS0C4bG?usp=sharing
### Expected behavior
Expected the model to train successfully. Instead it gives a tensor mismatch error.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23331/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23331/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23329
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23329/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23329/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23329/events
|
https://github.com/huggingface/transformers/pull/23329
| 1,707,473,241 |
PR_kwDOCUB6oc5QYTV6
| 23,329 |
Add ffmpeg install for doctests
|
{
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,683 | 1,686 | 1,686 |
COLLABORATOR
| null |
# What does this PR do?
`tests_pr_documentation_tests` fails if run on some docs e.g. `docs/source/en/task_summary.mdx` as ffmpeg is not installed.
Install ffmpeg in `pr_documentation_tests` CircleCI job.
Despite trying to add changes to the code and docstrings - I was unable to trigger the tests in the CI suite for the doc tests :(
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23329/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23329/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23329",
"html_url": "https://github.com/huggingface/transformers/pull/23329",
"diff_url": "https://github.com/huggingface/transformers/pull/23329.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23329.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/23328
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23328/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23328/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23328/events
|
https://github.com/huggingface/transformers/issues/23328
| 1,707,423,374 |
I_kwDOCUB6oc5lxTaO
| 23,328 |
Problem with Huggingface Agent
|
{
"login": "piust",
"id": 42667376,
"node_id": "MDQ6VXNlcjQyNjY3Mzc2",
"avatar_url": "https://avatars.githubusercontent.com/u/42667376?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/piust",
"html_url": "https://github.com/piust",
"followers_url": "https://api.github.com/users/piust/followers",
"following_url": "https://api.github.com/users/piust/following{/other_user}",
"gists_url": "https://api.github.com/users/piust/gists{/gist_id}",
"starred_url": "https://api.github.com/users/piust/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/piust/subscriptions",
"organizations_url": "https://api.github.com/users/piust/orgs",
"repos_url": "https://api.github.com/users/piust/repos",
"events_url": "https://api.github.com/users/piust/events{/privacy}",
"received_events_url": "https://api.github.com/users/piust/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hello @piust \r\n\r\nThe error message you are seeing suggests that there is a size mismatch between two tensors, 'a' and 'b', at dimension 1. This typically occurs when the dimensions of the tensors involved in a calculation are not compatible.\r\n\r\nBut the error message doesn't seem to be specific enough to determine the exact cause of the error. It is possible that there is a problem with the code that was passed to the agent, or there could be a problem with the image that was passed to it. One possible solution would be to try running the code on a different image to see if the error persists.",
"Hi @piust, \r\n\r\nFor the image, are you able to reproduce the error if you read in the PNG equivalent? i.e. is it possible to save out the image, read it back and run the agent script again e.g.:\r\n\r\n```python\r\nfrom PIL import Image\r\n\r\nimage_path = \"dart.webp\"\r\nimage = Image.open(image_path)\r\n\r\n# Save out the image as a PNG\r\nimage.save(\"dart.png\")\r\n\r\n# Read in the image from PNG format\r\nimage = Image.open(\"dart.png\")\r\n\r\n# Then pass the image to the agent\r\nfrom transformers import HfAgent\r\nagent = HfAgent(\"https://api-inference.huggingface.co/models/bigcode/starcoder\")\r\nimage2 = agent.run(\r\n \"Draw a red line all around dart vader body in `image`\", \r\n image=image\r\n)\r\n```\r\n\r\nPNG images are sharable within issues, and so if it also triggers an error we could debug from that.",
"Yes, it triggers a error:\r\n\r\n==Explanation from the agent==\r\nI will use the following tools: `image_segmenter` to create a segmentation mask of the dart vader body, then `image_transformer` to draw a red line around it.\r\n\r\n\r\n==Code generated by the agent==\r\nmask = image_segmenter(image=image, label=\"Dart Vader\")\r\nimage = image_transformer(image=image, prompt=\"Red line around dart vader body\")\r\n\r\n\r\n==Result==\r\nDownloading (โฆ)ge_transformation.py: 100%\r\n2.05k/2.05k [00:00<00:00, 155kB/s]\r\nA new version of the following files was downloaded from https://huggingface.co/space/huggingface-tools/image-transformation:\r\n- image_transformation.py\r\n. Make sure to double-check they do not contain any added malicious code. To avoid downloading new versions of the code file, you can pin a revision.\r\nDownloading (โฆ)rocessor_config.json: 100%\r\n380/380 [00:00<00:00, 29.2kB/s]\r\nDownloading (โฆ)okenizer_config.json: 100%\r\n974/974 [00:00<00:00, 73.6kB/s]\r\nDownloading (โฆ)olve/main/vocab.json: 100%\r\n1.06M/1.06M [00:00<00:00, 3.26MB/s]\r\nDownloading (โฆ)olve/main/merges.txt: 100%\r\n525k/525k [00:00<00:00, 6.42MB/s]\r\nDownloading (โฆ)cial_tokens_map.json: 100%\r\n472/472 [00:00<00:00, 38.5kB/s]\r\nDownloading (โฆ)lve/main/config.json: 100%\r\n4.73k/4.73k [00:00<00:00, 266kB/s]\r\nDownloading pytorch_model.bin: 100%\r\n603M/603M [00:01<00:00, 315MB/s]\r\nโญโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ Traceback (most recent call last) โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโฎ\r\nโ in <cell line: 15>:15 โ\r\nโ โ\r\nโ /usr/local/lib/python3.10/dist-packages/transformers/tools/agents.py:323 in run โ\r\nโ โ\r\nโ 320 โ โ if not return_code: โ\r\nโ 321 โ โ โ print(\"\\n\\n==Result==\") โ\r\nโ 322 โ โ โ self.cached_tools = resolve_tools(code, self.toolbox, remote=remote, cached_ โ\r\nโ โฑ 323 โ โ โ return evaluate(code, self.cached_tools, state=kwargs.copy()) โ\r\nโ 324 โ โ else: โ\r\nโ 325 โ โ โ tool_code = get_tool_creation_code(code, self.toolbox, remote=remote) โ\r\nโ 326 โ โ โ return f\"{tool_code}\\n{code}\" โ\r\nโ โ\r\nโ /usr/local/lib/python3.10/dist-packages/transformers/tools/python_interpreter.py:61 in evaluate โ\r\nโ โ\r\nโ 58 โ result = None โ\r\nโ 59 โ for idx, node in enumerate(expression.body): โ\r\nโ 60 โ โ try: โ\r\nโ โฑ 61 โ โ โ line_result = evaluate_ast(node, state, tools) โ\r\nโ 62 โ โ except InterpretorError as e: โ\r\nโ 63 โ โ โ msg = f\"Evaluation of the code stopped at line {idx} before the end because โ\r\nโ 64 โ โ โ if chat_mode: โ\r\nโ โ\r\nโ /usr/local/lib/python3.10/dist-packages/transformers/tools/python_interpreter.py:98 in โ\r\nโ evaluate_ast โ\r\nโ โ\r\nโ 95 โ if isinstance(expression, ast.Assign): โ\r\nโ 96 โ โ # Assignement -> we evaluate the assignement which should update the state โ\r\nโ 97 โ โ # We return the variable assigned as it may be used to determine the final resul โ\r\nโ โฑ 98 โ โ return evaluate_assign(expression, state, tools) โ\r\nโ 99 โ elif isinstance(expression, ast.Call): โ\r\nโ 100 โ โ # Function call -> we return the value of the function call โ\r\nโ 101 โ โ return evaluate_call(expression, state, tools) โ\r\nโ โ\r\nโ /usr/local/lib/python3.10/dist-packages/transformers/tools/python_interpreter.py:139 in โ\r\nโ evaluate_assign โ\r\nโ โ\r\nโ 136 โ\r\nโ 137 def evaluate_assign(assign, state, tools): โ\r\nโ 138 โ var_names = assign.targets โ\r\nโ โฑ 139 โ result = evaluate_ast(assign.value, state, tools) โ\r\nโ 140 โ โ\r\nโ 141 โ if len(var_names) == 1: โ\r\nโ 142 โ โ state[var_names[0].id] = result โ\r\nโ โ\r\nโ 
/usr/local/lib/python3.10/dist-packages/transformers/tools/python_interpreter.py:101 in โ\r\nโ evaluate_ast โ\r\nโ โ\r\nโ 98 โ โ return evaluate_assign(expression, state, tools) โ\r\nโ 99 โ elif isinstance(expression, ast.Call): โ\r\nโ 100 โ โ # Function call -> we return the value of the function call โ\r\nโ โฑ 101 โ โ return evaluate_call(expression, state, tools) โ\r\nโ 102 โ elif isinstance(expression, ast.Constant): โ\r\nโ 103 โ โ # Constant -> just return the value โ\r\nโ 104 โ โ return expression.value โ\r\nโ โ\r\nโ /usr/local/lib/python3.10/dist-packages/transformers/tools/python_interpreter.py:167 in โ\r\nโ evaluate_call โ\r\nโ โ\r\nโ 164 โ # Todo deal with args โ\r\nโ 165 โ args = [evaluate_ast(arg, state, tools) for arg in call.args] โ\r\nโ 166 โ kwargs = {keyword.arg: evaluate_ast(keyword.value, state, tools) for keyword in call โ\r\nโ โฑ 167 โ return func(*args, **kwargs) โ\r\nโ 168 โ\r\nโ 169 โ\r\nโ 170 def evaluate_subscript(subscript, state, tools): โ\r\nโ โ\r\nโ /usr/local/lib/python3.10/dist-packages/transformers/tools/base.py:536 in __call__ โ\r\nโ โ\r\nโ 533 โ โ โ\r\nโ 534 โ โ encoded_inputs = self.encode(*args, **kwargs) โ\r\nโ 535 โ โ encoded_inputs = send_to_device(encoded_inputs, self.device) โ\r\nโ โฑ 536 โ โ outputs = self.forward(encoded_inputs) โ\r\nโ 537 โ โ outputs = send_to_device(outputs, \"cpu\") โ\r\nโ 538 โ โ return self.decode(outputs) โ\r\nโ 539 โ\r\nโ โ\r\nโ /usr/local/lib/python3.10/dist-packages/transformers/tools/image_segmentation.py:52 in forward โ\r\nโ โ\r\nโ 49 โ โ\r\nโ 50 โ def forward(self, inputs): โ\r\nโ 51 โ โ with torch.no_grad(): โ\r\nโ โฑ 52 โ โ โ logits = self.model(**inputs).logits โ\r\nโ 53 โ โ return logits โ\r\nโ 54 โ โ\r\nโ 55 โ def decode(self, outputs): โ\r\nโ โ\r\nโ /usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py:1501 in _call_impl โ\r\nโ โ\r\nโ 1498 โ โ if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks โ\r\nโ 1499 โ โ โ โ or _global_backward_pre_hooks or _global_backward_hooks โ\r\nโ 1500 โ โ โ โ or _global_forward_hooks or _global_forward_pre_hooks): โ\r\nโ โฑ 1501 โ โ โ return forward_call(*args, **kwargs) โ\r\nโ 1502 โ โ # Do not call functions when jit is used โ\r\nโ 1503 โ โ full_backward_hooks, non_full_backward_hooks = [], [] โ\r\nโ 1504 โ โ backward_pre_hooks = [] โ\r\nโ โ\r\nโ /usr/local/lib/python3.10/dist-packages/transformers/models/clipseg/modeling_clipseg.py:1426 in โ\r\nโ forward โ\r\nโ โ\r\nโ 1423 โ โ โ\r\nโ 1424 โ โ # step 1: forward the query images through the frozen CLIP vision encoder โ\r\nโ 1425 โ โ with torch.no_grad(): โ\r\nโ โฑ 1426 โ โ โ vision_outputs = self.clip.vision_model( โ\r\nโ 1427 โ โ โ โ pixel_values=pixel_values, โ\r\nโ 1428 โ โ โ โ output_attentions=output_attentions, โ\r\nโ 1429 โ โ โ โ output_hidden_states=True, # we need the intermediate hidden states โ\r\nโ โ\r\nโ /usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py:1501 in _call_impl โ\r\nโ โ\r\nโ 1498 โ โ if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks โ\r\nโ 1499 โ โ โ โ or _global_backward_pre_hooks or _global_backward_hooks โ\r\nโ 1500 โ โ โ โ or _global_forward_hooks or _global_forward_pre_hooks): โ\r\nโ โฑ 1501 โ โ โ return forward_call(*args, **kwargs) โ\r\nโ 1502 โ โ # Do not call functions when jit is used โ\r\nโ 1503 โ โ full_backward_hooks, non_full_backward_hooks = [], [] โ\r\nโ 1504 โ โ backward_pre_hooks = [] โ\r\nโ โ\r\nโ /usr/local/lib/python3.10/dist-packages/transformers/models/clipseg/modeling_clipseg.py:867 in โ\r\nโ 
forward โ\r\nโ โ\r\nโ 864 โ โ if pixel_values is None: โ\r\nโ 865 โ โ โ raise ValueError(\"You have to specify pixel_values\") โ\r\nโ 866 โ โ โ\r\nโ โฑ 867 โ โ hidden_states = self.embeddings(pixel_values) โ\r\nโ 868 โ โ hidden_states = self.pre_layrnorm(hidden_states) โ\r\nโ 869 โ โ โ\r\nโ 870 โ โ encoder_outputs = self.encoder( โ\r\nโ โ\r\nโ /usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py:1501 in _call_impl โ\r\nโ โ\r\nโ 1498 โ โ if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks โ\r\nโ 1499 โ โ โ โ or _global_backward_pre_hooks or _global_backward_hooks โ\r\nโ 1500 โ โ โ โ or _global_forward_hooks or _global_forward_pre_hooks): โ\r\nโ โฑ 1501 โ โ โ return forward_call(*args, **kwargs) โ\r\nโ 1502 โ โ # Do not call functions when jit is used โ\r\nโ 1503 โ โ full_backward_hooks, non_full_backward_hooks = [], [] โ\r\nโ 1504 โ โ backward_pre_hooks = [] โ\r\nโ โ\r\nโ /usr/local/lib/python3.10/dist-packages/transformers/models/clipseg/modeling_clipseg.py:215 in โ\r\nโ forward โ\r\nโ โ\r\nโ 212 โ โ โ\r\nโ 213 โ โ if embeddings.shape[1] != self.num_positions: โ\r\nโ 214 โ โ โ new_shape = int(math.sqrt(embeddings.shape[1] - 1)) โ\r\nโ โฑ 215 โ โ โ embeddings = embeddings + self.interpolate_position_embeddings((new_shape, n โ\r\nโ 216 โ โ โ embeddings = embeddings.to(embeddings.dtype) โ\r\nโ 217 โ โ else: โ\r\nโ 218 โ โ โ embeddings = embeddings + self.position_embedding(self.position_ids) โ\r\nโฐโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโฏ\r\nRuntimeError: The size of tensor a (3151) must match the size of tensor b (3137) at non-singleton dimension 1\r\n",
"@piust OK, thanks for trying. Could you share the PNG image and the agent being used so that we can reproduce to try and debug? ",
"Hi all,\r\nI confirm that on my set of images, the image segmenter tool has the same behavior with the same issue of matrix dimensions conflicts. Images are PNG of different sizes. Same problem whatever the image dimensions.\r\n\r\n[image sample](https://drive.google.com/file/d/1n95JCjltE1WYBjrktgrbyy4uBB1ckBQD/view?usp=share_link)",
"@jeromemassot Thanks for sharing the image. Using it I was able to track down the issue to [this line](https://github.com/huggingface/transformers/blob/00f6ba0e7ebd5d19bb7d834a709d74dbb8a5a3d9/src/transformers/tools/image_segmentation.py#L47) in the image segmentation tool, where the `size` parameter for the image processor is overridden with the input image dimensions. I've opened an PR to resolve, but will need to check that this isn't removing any assumptions elsewhere with the tools. "
] | 1,683 | 1,686 | 1,686 |
NONE
| null |
### System Info
2023-05-12 10:53:56.623476: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
WARNING:tensorflow:From /usr/local/lib/python3.10/dist-packages/transformers/commands/env.py:63: is_gpu_available (from tensorflow.python.framework.test_util) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.config.list_physical_devices('GPU')` instead.
2023-05-12 10:54:01.265612: W tensorflow/core/common_runtime/gpu/gpu_bfc_allocator.cc:47] Overriding orig_value setting because the TF_FORCE_GPU_ALLOW_GROWTH environment variable is set. Original config value was 0.
Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points.
- `transformers` version: 4.29.0
- Platform: Linux-5.10.147+-x86_64-with-glibc2.31
- Python version: 3.10.11
- Huggingface_hub version: 0.14.1
- Safetensors version: not installed
- PyTorch version (GPU?): 2.0.0+cu118 (True)
- Tensorflow version (GPU?): 2.12.0 (True)
- Flax version (CPU?/GPU?/TPU?): 0.6.9 (gpu)
- Jax version: 0.4.8
- JaxLib version: 0.4.7
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I have an image in WebP format and wanted to segment the character in the foreground, but I get an error.
Let me know how I can send it to you ... this type of file is not supported
Here is the code:
```python
from PIL import Image
from transformers import HfAgent

agent = HfAgent("https://api-inference.huggingface.co/models/bigcode/starcoder")

def leggi_immagine(percorso):
    try:
        image = Image.open(percorso)
        return image
    except IOError:
        print("Impossibile aprire l'immagine. Controlla il percorso del file.")

image = leggi_immagine("dart.webp")
image2 = agent.run("Draw a red line all around dart vader body in `image`", image=image)
```
and here is the error:
in <cell line: 1>:1 โ
โ โ
โ /usr/local/lib/python3.10/dist-packages/transformers/tools/agents.py:323 in run โ
โ โ
โ 320 โ โ if not return_code: โ
โ 321 โ โ โ print("\n\n==Result==") โ
โ 322 โ โ โ self.cached_tools = resolve_tools(code, self.toolbox, remote=remote, cached_ โ
โ โฑ 323 โ โ โ return evaluate(code, self.cached_tools, state=kwargs.copy()) โ
โ 324 โ โ else: โ
โ 325 โ โ โ tool_code = get_tool_creation_code(code, self.toolbox, remote=remote) โ
โ 326 โ โ โ return f"{tool_code}\n{code}" โ
โ โ
โ /usr/local/lib/python3.10/dist-packages/transformers/tools/python_interpreter.py:61 in evaluate โ
โ โ
โ 58 โ result = None โ
โ 59 โ for idx, node in enumerate(expression.body): โ
โ 60 โ โ try: โ
โ โฑ 61 โ โ โ line_result = evaluate_ast(node, state, tools) โ
โ 62 โ โ except InterpretorError as e: โ
โ 63 โ โ โ msg = f"Evaluation of the code stopped at line {idx} before the end because โ
โ 64 โ โ โ if chat_mode: โ
โ โ
โ /usr/local/lib/python3.10/dist-packages/transformers/tools/python_interpreter.py:98 in โ
โ evaluate_ast โ
โ โ
โ 95 โ if isinstance(expression, ast.Assign): โ
โ 96 โ โ # Assignement -> we evaluate the assignement which should update the state โ
โ 97 โ โ # We return the variable assigned as it may be used to determine the final resul โ
โ โฑ 98 โ โ return evaluate_assign(expression, state, tools) โ
โ 99 โ elif isinstance(expression, ast.Call): โ
โ 100 โ โ # Function call -> we return the value of the function call โ
โ 101 โ โ return evaluate_call(expression, state, tools) โ
โ โ
โ /usr/local/lib/python3.10/dist-packages/transformers/tools/python_interpreter.py:139 in โ
โ evaluate_assign โ
โ โ
โ 136 โ
โ 137 def evaluate_assign(assign, state, tools): โ
โ 138 โ var_names = assign.targets โ
โ โฑ 139 โ result = evaluate_ast(assign.value, state, tools) โ
โ 140 โ โ
โ 141 โ if len(var_names) == 1: โ
โ 142 โ โ state[var_names[0].id] = result โ
โ โ
โ /usr/local/lib/python3.10/dist-packages/transformers/tools/python_interpreter.py:101 in โ
โ evaluate_ast โ
โ โ
โ 98 โ โ return evaluate_assign(expression, state, tools) โ
โ 99 โ elif isinstance(expression, ast.Call): โ
โ 100 โ โ # Function call -> we return the value of the function call โ
โ โฑ 101 โ โ return evaluate_call(expression, state, tools) โ
โ 102 โ elif isinstance(expression, ast.Constant): โ
โ 103 โ โ # Constant -> just return the value โ
โ 104 โ โ return expression.value โ
โ โ
โ /usr/local/lib/python3.10/dist-packages/transformers/tools/python_interpreter.py:167 in โ
โ evaluate_call โ
โ โ
โ 164 โ # Todo deal with args โ
โ 165 โ args = [evaluate_ast(arg, state, tools) for arg in call.args] โ
โ 166 โ kwargs = {keyword.arg: evaluate_ast(keyword.value, state, tools) for keyword in call โ
โ โฑ 167 โ return func(*args, **kwargs) โ
โ 168 โ
โ 169 โ
โ 170 def evaluate_subscript(subscript, state, tools): โ
โ โ
โ /usr/local/lib/python3.10/dist-packages/transformers/tools/base.py:536 in __call__ โ
โ โ
โ 533 โ โ โ
โ 534 โ โ encoded_inputs = self.encode(*args, **kwargs) โ
โ 535 โ โ encoded_inputs = send_to_device(encoded_inputs, self.device) โ
โ โฑ 536 โ โ outputs = self.forward(encoded_inputs) โ
โ 537 โ โ outputs = send_to_device(outputs, "cpu") โ
โ 538 โ โ return self.decode(outputs) โ
โ 539 โ
โ โ
โ /usr/local/lib/python3.10/dist-packages/transformers/tools/image_segmentation.py:52 in forward โ
โ โ
โ 49 โ โ
โ 50 โ def forward(self, inputs): โ
โ 51 โ โ with torch.no_grad(): โ
โ โฑ 52 โ โ โ logits = self.model(**inputs).logits โ
โ 53 โ โ return logits โ
โ 54 โ โ
โ 55 โ def decode(self, outputs): โ
โ โ
โ /usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py:1501 in _call_impl โ
โ โ
โ 1498 โ โ if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks โ
โ 1499 โ โ โ โ or _global_backward_pre_hooks or _global_backward_hooks โ
โ 1500 โ โ โ โ or _global_forward_hooks or _global_forward_pre_hooks): โ
โ โฑ 1501 โ โ โ return forward_call(*args, **kwargs) โ
โ 1502 โ โ # Do not call functions when jit is used โ
โ 1503 โ โ full_backward_hooks, non_full_backward_hooks = [], [] โ
โ 1504 โ โ backward_pre_hooks = [] โ
โ โ
โ /usr/local/lib/python3.10/dist-packages/transformers/models/clipseg/modeling_clipseg.py:1426 in โ
โ forward โ
โ โ
โ 1423 โ โ โ
โ 1424 โ โ # step 1: forward the query images through the frozen CLIP vision encoder โ
โ 1425 โ โ with torch.no_grad(): โ
โ โฑ 1426 โ โ โ vision_outputs = self.clip.vision_model( โ
โ 1427 โ โ โ โ pixel_values=pixel_values, โ
โ 1428 โ โ โ โ output_attentions=output_attentions, โ
โ 1429 โ โ โ โ output_hidden_states=True, # we need the intermediate hidden states โ
โ โ
โ /usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py:1501 in _call_impl โ
โ โ
โ 1498 โ โ if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks โ
โ 1499 โ โ โ โ or _global_backward_pre_hooks or _global_backward_hooks โ
โ 1500 โ โ โ โ or _global_forward_hooks or _global_forward_pre_hooks): โ
โ โฑ 1501 โ โ โ return forward_call(*args, **kwargs) โ
โ 1502 โ โ # Do not call functions when jit is used โ
โ 1503 โ โ full_backward_hooks, non_full_backward_hooks = [], [] โ
โ 1504 โ โ backward_pre_hooks = [] โ
โ โ
โ /usr/local/lib/python3.10/dist-packages/transformers/models/clipseg/modeling_clipseg.py:867 in โ
โ forward โ
โ โ
โ 864 โ โ if pixel_values is None: โ
โ 865 โ โ โ raise ValueError("You have to specify pixel_values") โ
โ 866 โ โ โ
โ โฑ 867 โ โ hidden_states = self.embeddings(pixel_values) โ
โ 868 โ โ hidden_states = self.pre_layrnorm(hidden_states) โ
โ 869 โ โ โ
โ 870 โ โ encoder_outputs = self.encoder( โ
โ โ
โ /usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py:1501 in _call_impl โ
โ โ
โ 1498 โ โ if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks โ
โ 1499 โ โ โ โ or _global_backward_pre_hooks or _global_backward_hooks โ
โ 1500 โ โ โ โ or _global_forward_hooks or _global_forward_pre_hooks): โ
โ โฑ 1501 โ โ โ return forward_call(*args, **kwargs) โ
โ 1502 โ โ # Do not call functions when jit is used โ
โ 1503 โ โ full_backward_hooks, non_full_backward_hooks = [], [] โ
โ 1504 โ โ backward_pre_hooks = [] โ
โ โ
โ /usr/local/lib/python3.10/dist-packages/transformers/models/clipseg/modeling_clipseg.py:215 in โ
โ forward โ
โ โ
โ 212 โ โ โ
โ 213 โ โ if embeddings.shape[1] != self.num_positions: โ
โ 214 โ โ โ new_shape = int(math.sqrt(embeddings.shape[1] - 1)) โ
โ โฑ 215 โ โ โ embeddings = embeddings + self.interpolate_position_embeddings((new_shape, n โ
โ 216 โ โ โ embeddings = embeddings.to(embeddings.dtype) โ
โ 217 โ โ else: โ
โ 218 โ โ โ embeddings = embeddings + self.position_embedding(self.position_ids) โ
โฐโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโฏ
RuntimeError: The size of tensor a (3151) must match the size of tensor b (3137) at non-singleton dimension 1
### Expected behavior
To draw a red line around the Dart Vader figure.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23328/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23328/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23327
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23327/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23327/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23327/events
|
https://github.com/huggingface/transformers/pull/23327
| 1,707,413,312 |
PR_kwDOCUB6oc5QYGT1
| 23,327 |
Only add files with modification outside doc blocks
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,683 | 1,683 | 1,683 |
COLLABORATOR
| null |
# What does this PR do?
Add files for doctesting only when they have modifications outside docstrings.
(offline message from Sylvain)
> One small improvement I see is for the docstrings: for now the tests are launched on a file if we modify it, but I would only launch it if docstrings are modified (e.g. check the modifications are correct) to go faster. If changes in the code of a model file (for instance) trigger a doctest failure we will see it after.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23327/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23327/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23327",
"html_url": "https://github.com/huggingface/transformers/pull/23327",
"diff_url": "https://github.com/huggingface/transformers/pull/23327.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23327.patch",
"merged_at": 1683902115000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23326
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23326/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23326/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23326/events
|
https://github.com/huggingface/transformers/pull/23326
| 1,707,313,490 |
PR_kwDOCUB6oc5QXwjW
| 23,326 |
Remove `LanguageIdentificationTool` in `__init__.py` as we don't have it yet
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,683 | 1,683 | 1,683 |
COLLABORATOR
| null |
# What does this PR do?
We need to implement it before we can import it :-)
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23326/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23326/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23326",
"html_url": "https://github.com/huggingface/transformers/pull/23326",
"diff_url": "https://github.com/huggingface/transformers/pull/23326.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23326.patch",
"merged_at": 1683886281000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23325
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23325/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23325/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23325/events
|
https://github.com/huggingface/transformers/issues/23325
| 1,707,309,406 |
I_kwDOCUB6oc5lw3le
| 23,325 |
resume_from_checkpoint is not used in TrainingArguments
|
{
"login": "SingL3",
"id": 20473466,
"node_id": "MDQ6VXNlcjIwNDczNDY2",
"avatar_url": "https://avatars.githubusercontent.com/u/20473466?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SingL3",
"html_url": "https://github.com/SingL3",
"followers_url": "https://api.github.com/users/SingL3/followers",
"following_url": "https://api.github.com/users/SingL3/following{/other_user}",
"gists_url": "https://api.github.com/users/SingL3/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SingL3/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SingL3/subscriptions",
"organizations_url": "https://api.github.com/users/SingL3/orgs",
"repos_url": "https://api.github.com/users/SingL3/repos",
"events_url": "https://api.github.com/users/SingL3/events{/privacy}",
"received_events_url": "https://api.github.com/users/SingL3/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"@SingL3 In the [docs for TrainingArguments](https://huggingface.co/docs/transformers/v4.29.1/en/main_classes/trainer#transformers.TrainingArguments.resume_from_checkpoint), it notes that `resume_from_checkpoint` is intended to be used in your own training/evaluation scripts. In the original PR, [this was discussed and decided upon](https://github.com/huggingface/transformers/pull/11492#discussion_r622339347) in order to remove ambiguity. ",
"I see but it is somehow confusing. Closing this issue."
] | 1,683 | 1,684 | 1,684 |
NONE
| null |
### System Info
N/A
### Who can help?
@sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Just pass a `resume_from_checkpoint` to `TrainingArguments`.
### Expected behavior
I have read the code and I see that `resume_from_checkpoint` is consumed by `Trainer.train(resume_from_checkpoint=...)`, while the setting on `TrainingArguments` itself is never used. Is there a reason for this?
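For reference, a minimal sketch of how the argument is actually consumed (the checkpoint path is illustrative, and `model`, `training_args`, `train_dataset` are assumed to be defined already):
```python
from transformers import Trainer

# assuming `model`, `training_args`, and `train_dataset` are already defined
trainer = Trainer(model=model, args=training_args, train_dataset=train_dataset)

# the value is passed to train(), not read back from TrainingArguments
trainer.train(resume_from_checkpoint="/tmp/mrpc/checkpoint-500")
# or resume from the most recent checkpoint found in args.output_dir
trainer.train(resume_from_checkpoint=True)
```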
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23325/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23325/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23324
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23324/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23324/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23324/events
|
https://github.com/huggingface/transformers/issues/23324
| 1,707,305,474 |
I_kwDOCUB6oc5lw2oC
| 23,324 |
Support Azure OpenAI in transformer agents
|
{
"login": "huajianmao",
"id": 1352072,
"node_id": "MDQ6VXNlcjEzNTIwNzI=",
"avatar_url": "https://avatars.githubusercontent.com/u/1352072?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/huajianmao",
"html_url": "https://github.com/huajianmao",
"followers_url": "https://api.github.com/users/huajianmao/followers",
"following_url": "https://api.github.com/users/huajianmao/following{/other_user}",
"gists_url": "https://api.github.com/users/huajianmao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/huajianmao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/huajianmao/subscriptions",
"organizations_url": "https://api.github.com/users/huajianmao/orgs",
"repos_url": "https://api.github.com/users/huajianmao/repos",
"events_url": "https://api.github.com/users/huajianmao/events{/privacy}",
"received_events_url": "https://api.github.com/users/huajianmao/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"@huajianmao thanks for opening this issue, \r\n\r\nAs a user can enable azure open AI support by adding these lines to their own code, there isn't a requirement to add support in the library. ",
"> @huajianmao thanks for opening this issue,\r\n> \r\n> As a user can enable azure open AI support by adding these lines to their own code, there isn't a requirement to add support in the library.\r\n\r\nThat's not entirely correct. You'll get an InvalidRequestError: Must provide an 'engine' or 'deployment_id' parameter to create a <class \r\n'openai.api_resources.completion.Completion'>\r\n\r\nBecause you would have to support some way of specifying the Azure-deployment_id in your code.",
"https://huggingface.co/docs/transformers/v4.29.1/en/main_classes/agent#transformers.OpenAiAgent\r\nagent = OpenAiAgent(model=\"text-davinci-003\", api_key=pswd)\r\nideally would be changed. The Azure API is a different type and different enough to allow the relevant parameters to be passed in.\r\nIt doesn't sound \"clean\" to me to access the openai API parameters directly outside the OpenAiAgent.\r\nThe Azure based version is rapidly gaining in importance as people can use their corporate or personal Azure credits vs having to pay OpenAI separately.\r\n\r\nAlso the call needs to be changed to something like that currently:\r\n result = openai.Completion.create(\r\n #model=self.model,\r\n deployment_id=self.model,\r\n prompt=prompts,\r\n temperature=0,\r\n stop=stop,\r\n max_tokens=200,\r\n )\r\n\r\nbtw. something to easily stumble over that the version is api_version=\"2022-12-01\" even though the portal shows \"1\". Best is to go to the playground and have it generate example code currently it seems.",
"for GPT4 you need to use \r\nopenai.api_version = \"2023-03-15-preview\" # used for GPT-4 - see https://learn.microsoft.com/en-gb/azure/cognitive-services/openai/how-to/chatgpt?pivots=programming-language-chat-completions\r\n\r\nBe aware the chat message pattern changed nd as well the json for the response, which is the new mode for all new models after GPT4 as well.",
"Sure, but we were not talking about GPT-4. The examples only covered DaVinci-003",
"Anyway the api is changed - no matter what model you use - you need to adopt to the new prompt style."
] | 1,683 | 1,686 | 1,686 |
NONE
| null |
### Feature request
Support it by setting the openai properties.
``` python
openai.api_type = "azure"
openai.api_version = "2023-03-15-preview"
openai.api_base = "https://THE_NAME.openai.azure.com/"
openai.api_key = "AZURE_OPENAI_API_KEY"
# response = openai.Completion.create(engine="xxxx", ...)
```
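A hedged sketch of what a completion call could then look like against an Azure deployment; `"my-davinci-deployment"` is a placeholder for the deployment name configured in the Azure portal, and `deployment_id` (rather than `engine`/`model`) is the parameter the Azure API type expects, as noted in the comments on this issue:
``` python
import openai  # assumes the Azure properties above have already been set

response = openai.Completion.create(
    deployment_id="my-davinci-deployment",  # Azure addresses models by deployment name
    prompt="Hello",
    max_tokens=32,
)
```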
### Motivation
Azure OpenAI is also used by a lot of people.
It would be great to support it in the transformer agents.
### Your contribution
I may contribute to test it.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23324/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23324/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23323
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23323/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23323/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23323/events
|
https://github.com/huggingface/transformers/issues/23323
| 1,707,301,811 |
I_kwDOCUB6oc5lw1uz
| 23,323 |
no dependency package `accelerate` installed when we install transformers v4.29.1
|
{
"login": "PenghuiCheng",
"id": 42089598,
"node_id": "MDQ6VXNlcjQyMDg5NTk4",
"avatar_url": "https://avatars.githubusercontent.com/u/42089598?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PenghuiCheng",
"html_url": "https://github.com/PenghuiCheng",
"followers_url": "https://api.github.com/users/PenghuiCheng/followers",
"following_url": "https://api.github.com/users/PenghuiCheng/following{/other_user}",
"gists_url": "https://api.github.com/users/PenghuiCheng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PenghuiCheng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PenghuiCheng/subscriptions",
"organizations_url": "https://api.github.com/users/PenghuiCheng/orgs",
"repos_url": "https://api.github.com/users/PenghuiCheng/repos",
"events_url": "https://api.github.com/users/PenghuiCheng/events{/privacy}",
"received_events_url": "https://api.github.com/users/PenghuiCheng/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @PenghuiCheng, \r\n\r\nAll of the examples have their own unique requirements, which are listed in [their own `requirements.txt` file](https://github.com/huggingface/transformers/blob/main/examples/pytorch/text-classification/requirements.txt). The examples are demonstrative of how to perform certain tasks using the transformers library, but are not dependancies.",
"@PenghuiCheng you need to do `pip install transformers[torch]` to ensure you're building/installing the right version",
"same error ! any solution",
"@mohamedoh you need to do `pip install accelerate`, or `pip install transformers[torch]`",
"I face the same issue even after the installations",
"@flckv if you're in a notebook or similar you'll need to restart the session. Does `pip show accelerate` show anything? (This is a sign)",
"> \r\n`pip show accelerate`\r\nIt shows version 0.19.0 but still getting the error \r\nImportError: Using the `Trainer` with `PyTorch` requires `accelerate`: Run `pip install --upgrade accelerate` \r\non both Colab as well as Jupyter",
"@Krish1375 you may need to restart the notebook session to use the new/installed lib",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"I have the same issue but apparently, there is no solution for it. \r\n",
"Did exactly what was written on colab.research:\r\nhttps://colab.research.google.com/drive/1jCkpikz0J2o20FBQmYmAGdiKmJGOMo-o?usp=sharing#scrollTo=cg3fiQOvmI3Q\r\n\r\nran the first cell, was ok.\r\nran the 2nd cell, got this error message:\r\nImportError: Using `load_in_8bit=True` requires Accelerate: `pip install accelerate` and the latest version of bitsandbytes `pip install -i https://test.pypi.org/simple/ bitsandbytes` or pip install bitsandbytes` \r\n\r\n",
"> Did exactly what was written on colab.research: https://colab.research.google.com/drive/1jCkpikz0J2o20FBQmYmAGdiKmJGOMo-o?usp=sharing#scrollTo=cg3fiQOvmI3Q\r\n> \r\n> ran the first cell, was ok. ran the 2nd cell, got this error message: ImportError: Using `load_in_8bit=True` requires Accelerate: `pip install accelerate` and the latest version of bitsandbytes `pip install -i https://test.pypi.org/simple/ bitsandbytes` or pip install bitsandbytes`\r\n\r\n\r\nGot the same problem as well, any solution ?",
"I have same issue, nothing worked for me and accelerate is never detected by transformers.\r\nEdit: I suggest downgrading to transformers 4.30.1 and accelerate 0.21.0. It worked for me and I will wait they fix the dependencies",
"> > Did exactly what was written on colab.research: https://colab.research.google.com/drive/1jCkpikz0J2o20FBQmYmAGdiKmJGOMo-o?usp=sharing#scrollTo=cg3fiQOvmI3Q\r\n> > ran the first cell, was ok. ran the 2nd cell, got this error message: ImportError: Using `load_in_8bit=True` requires Accelerate: `pip install accelerate` and the latest version of bitsandbytes `pip install -i https://test.pypi.org/simple/ bitsandbytes` or pip install bitsandbytes`\r\n> \r\n> Got the same problem as well, any solution ?\r\n\r\nMy bad, just realized I did not run on GPU, had to switch to GPU mode in collab settings. Now worked. Check your environnment and be sure you are on aGPU (CPU by default).\r\nGood luck all.\r\n\r\n"
] | 1,683 | 1,692 | 1,688 |
NONE
| null |
### System Info
transformers v4.29.1
torch 2.0.1
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
cd examples/pytorch/text-classification
export TASK_NAME=mrpc
python run_glue.py --model_name_or_path bert-base-cased --task_name $TASK_NAME --do_train --do_eval --max_seq_length 128 --per_device_train_batch_size 32 --learning_rate 2e-5 --num_train_epochs 3 --output_dir /tmp/$TASK_NAME
the log is:
Traceback (most recent call last):
File "/home/penghuic/transformers/examples/pytorch/text-classification/run_glue.py", line 623, in <module>
main()
File "/home/penghuic/transformers/examples/pytorch/text-classification/run_glue.py", line 217, in main
model_args, data_args, training_args = parser.parse_args_into_dataclasses()
File "/home/penghuic/transformers/src/transformers/hf_argparser.py", line 346, in parse_args_into_dataclasses
obj = dtype(**inputs)
File "<string>", line 111, in __init__
File "/home/penghuic/transformers/src/transformers/training_args.py", line 1333, in __post_init__
and (self.device.type != "cuda")
File "/home/penghuic/transformers/src/transformers/training_args.py", line 1697, in device
return self._setup_devices
File "/home/penghuic/transformers/src/transformers/utils/generic.py", line 54, in __get__
cached = self.fget(obj)
File "/home/penghuic/transformers/src/transformers/training_args.py", line 1613, in _setup_devices
raise ImportError(
ImportError: Using the `Trainer` with `PyTorch` requires `accelerate`: Run `pip install --upgrade accelerate`
### Expected behavior
We expect the dependency package `accelerate` to be installed automatically when we install `transformers`.
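As a stop-gap, one can check for the extra dependency before constructing a `Trainer`; this is an illustrative sketch, not part of the library:
```python
# Illustrative check only: the Trainer needs `accelerate`, which is not pulled in by
# `pip install transformers` alone. Install it via `pip install accelerate` or
# `pip install "transformers[torch]"`.
import importlib.util

if importlib.util.find_spec("accelerate") is None:
    raise ImportError(
        "Using the Trainer with PyTorch requires accelerate: run `pip install accelerate` "
        "or `pip install 'transformers[torch]'`"
    )
```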
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23323/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23323/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23322
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23322/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23322/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23322/events
|
https://github.com/huggingface/transformers/issues/23322
| 1,707,252,723 |
I_kwDOCUB6oc5lwpvz
| 23,322 |
TokenClassification Pipeline not aggregating entities correctly
|
{
"login": "neilkimn",
"id": 37108154,
"node_id": "MDQ6VXNlcjM3MTA4MTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/37108154?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/neilkimn",
"html_url": "https://github.com/neilkimn",
"followers_url": "https://api.github.com/users/neilkimn/followers",
"following_url": "https://api.github.com/users/neilkimn/following{/other_user}",
"gists_url": "https://api.github.com/users/neilkimn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/neilkimn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/neilkimn/subscriptions",
"organizations_url": "https://api.github.com/users/neilkimn/orgs",
"repos_url": "https://api.github.com/users/neilkimn/repos",
"events_url": "https://api.github.com/users/neilkimn/events{/privacy}",
"received_events_url": "https://api.github.com/users/neilkimn/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hello @neilkimn,\r\n\r\nI haven't looked at the code in detail but I think the error is not from the pipeline itself but from a wrong prediction of the model. I guess you are using IOB format for your labels and maybe the last digit was predicted with `B-TOTAL` and not `I-TOTAL` which end up creating a new entity for only one digit.\r\n\r\nThis behavior is common, which is why there are different aggregation strategies. Changing your aggregation strategy from `simple` to `first` to calculate `pre_entities` should solve your problem.",
"Hi @luccailliau, thanks for the swift reply. You're right about the prediction for `TOTAL` isn't comprised of the correct IOB format. Here's the output when using no aggregation strategy:\r\n\r\n```python\r\n{'entity': 'B-TOTAL', 'score': 0.99910754, 'index': 280, 'word': 'โ5.', 'start': 0, 'end': 2}\r\n{'entity': 'B-TOTAL', 'score': 0.9981998, 'index': 281, 'word': '97', 'start': 2, 'end': 4}\r\n{'entity': 'B-TOTAL', 'score': 0.9978011, 'index': 282, 'word': '5,7', 'start': 4, 'end': 7}\r\n{'entity': 'B-TOTAL', 'score': 0.9913623, 'index': 283, 'word': '4', 'start': 7, 'end': 8}\r\n```\r\n\r\nApplying aggregation strategies yields:\r\n```python\r\n# simple\r\n{'entity_group': 'TOTAL', 'score': 0.99910754, 'word': '5.', 'start': 0, 'end': 2}\r\n{'entity_group': 'TOTAL', 'score': 0.9981998, 'word': '97', 'start': 2, 'end': 4}\r\n{'entity_group': 'TOTAL', 'score': 0.9978011, 'word': '5,7', 'start': 4, 'end': 7}\r\n{'entity_group': 'TOTAL', 'score': 0.9913623, 'word': '4', 'start': 7, 'end': 8}\r\n\r\n# first\r\n{'entity_group': 'TOTAL', 'score': 0.99910754, 'word': '5.975,7', 'start': 0, 'end': 7}\r\n{'entity_group': 'TOTAL', 'score': 0.9913623, 'word': '4', 'start': 7, 'end': 8}\r\n\r\n# average\r\n{'entity_group': 'TOTAL', 'score': 0.99836946, 'word': '5.975,7', 'start': 0, 'end': 7}\r\n{'entity_group': 'TOTAL', 'score': 0.9913623, 'word': '4', 'start': 7, 'end': 8}\r\n\r\n# max\r\n{'entity_group': 'TOTAL', 'score': 0.99910754, 'word': '5.975,7', 'start': 0, 'end': 7}\r\n{'entity_group': 'TOTAL', 'score': 0.9913623, 'word': '4', 'start': 7, 'end': 8}\r\n```\r\n\r\nAnd I am confident the issue is due to the heuristic using the `start_ind` of the subword `offset_mapping` and subsequently indexing into `sentence`. Backtracking through the callstack, I could verify that the `sentence` variable contained the **full** input sentence, and it is only coincidental that `\" \" not in sentence[start_ind - 1 : start_ind + 1]` yields False, ultimately setting `is_subword = False` for the word '4', even though it is a subword.",
"@neilkimn,\r\n\r\nYou're using `sentence = \" \".join(dataset[\"test\"][0][\"words\"])` to generate a sentence from a list of words (or subwords). This is not a problem but the original `offset_mapping` with `offset_mapping=sample_input[\"offset_mapping\"][0]` won't match with the sentence created with `\" \".join()`. I am pretty sure that something like this is happening:\r\n\r\nI think the easiest solution for your problem is a loop that merges entities if `entities[i][\"end\"] == entities[i+1][\"start\"]` or (not a beautiful solution) tokenize the initial sentence to generate tokens, then create a new sentence with `\" \".join(tokens)` and finally tokenize this new sentence to have `offset_mapping` aligned with `sentence`.\r\n\r\n",
" Thanks for clarifying @luccailliau, that explains why the `offset_mapping` is different for my example. I guess the issue is propagated from how the `LayoutXLMProcessor` calls `LayoutXLMTokenizerFast` which I am using. Supplying the processor with both the tokens joined together as well as their original split representation resolves it."
] | 1,683 | 1,683 | 1,683 |
NONE
| null |
### System Info
I am running transformers==4.27.3 but I believe the issue persists in the latest version as the issue at hand is specific to the `gather_pre_entities` function https://github.com/huggingface/transformers/blob/v4.27.3/src/transformers/pipelines/token_classification.py#L281.
- `transformers` version: 4.27.3
- Platform: macOS-13.3.1-arm64-arm-64bit
- Python version: 3.10.10
- Huggingface_hub version: 0.13.3
- PyTorch version (GPU?): 2.0.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes, MPS on M1 Mac
- Using distributed or parallel set-up in script?: No
### Who can help?
Tagging contributors who have committed to the TokenClassification Pipeline lately: @luccailliau @Narsil @sgugger
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
The code below shows the main details of the issue. I want to use the aggregation strategies of the TokenClassification Pipeline, and due to using the LayoutLM model and tokenizer, the aggregation of subwords falls back to the heuristic implemented for the `gather_pre_entities` function of `TokenClassificationPipeline`. This should be fine, however I am experiencing cases where tokens are not properly merged, as shown in the example output below. In the original sentence string, I have a bunch of words, where the following snippet is of interest: `"... I alt DKK inkl. moms 5.975,74 Betalingsbetingelser: KONTANT ..."`. The model correctly predicts the entity, `TOTAL`, but is missing the last digit, 4, which gets grouped to its own `TOTAL`-entity prediction.
```python
# Omitted a bunch of boilerplate code including model definition, setting up dataset, etc.
pipe = TokenClassificationPipeline(model=model, tokenizer=tokenizer)
sample_output = model.forward(
input_ids=sample_input["input_ids"].type(torch.long),
bbox=sample_input["bbox"].type(torch.int32),
image=torch.stack(sample_input["image"]),
attention_mask=sample_input["attention_mask"],
)
sample_scores = sample_output["logits"][0].cpu().detach().numpy()
pre_entities = pipe.gather_pre_entities(
sentence = " ".join(dataset["test"][0]["words"]),
input_ids=sample_input["input_ids"][0],
scores = sample_scores,
offset_mapping=sample_input["offset_mapping"][0],
special_tokens_mask=sample_input["special_tokens_mask"][0].cpu().detach().numpy(),
aggregation_strategy="simple"
) # throws UserWarning: "Tokenizer does not support real words, using fallback heuristic"
grouped_entities = pipe.aggregate(pre_entities, aggregation_strategy="first")
for ent in grouped_entities:
print(ent)
>>> {'entity_group': 'O', 'score': 11.709729, 'word': [... long sentence ...], 'start': 0, 'end': 11}
>>> [... some other predicted entities ...]
>>> {'entity_group': 'TOTAL', 'score': 8.98903, 'word': '5.975,7', 'start': 0, 'end': 7}
>>> {'entity_group': 'TOTAL', 'score': 6.8310637, 'word': '4', 'start': 7, 'end': 8}
>>> {'entity_group': 'O', 'score': 11.587039, 'word': [... long sentence ...], 'start': 0, 'end': 23}
```
### Expected behavior
Diving into the `gather_pre_entities` function, I see that the heuristic uses the `is_subword` boolean to determine how subwords should be aggregated to a combined word, with a corresponding, merged entity. Specifically, the heuristic uses the following rule `is_subword = start_ind > 0 and " " not in sentence[start_ind - 1 : start_ind + 1]`, where if I comment out the second part of the conditional, results in the entity being correctly merged, i.e. `{'entity_group': 'TOTAL', 'score': 8.98903, 'word': '5.975,74', 'start': 0, 'end': 8}`.
```python
else:
# This is a fallback heuristic. This will fail most likely on any kind of text + punctuation mixtures that will be considered "words". Non word aware models cannot do better than this unfortunately.
if aggregation_strategy in {
AggregationStrategy.FIRST,
AggregationStrategy.AVERAGE,
AggregationStrategy.MAX,
}:
warnings.warn("Tokenizer does not support real words, using fallback heuristic", UserWarning)
is_subword = start_ind > 0 and " " not in sentence[start_ind - 1 : start_ind + 1]
```
Since the `start_ind` of the subword is relative to the entire, original word that the subword is part of composing, why does the heuristic then depend on indexing into the entire `sentence` string? These indices, coming from the `offset_mapping` will always be relative to the word and most often range from 0-10 and so forth, depending on the word length. Without understanding the full reason behind why " " would constitute a subword, I am certain that this must be a bug. Even if the start and end indices from `offset_mapping` were relative to the entire sentence, how could you then determine when a new word is starting?
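For reference, a minimal sketch of the post-processing workaround suggested in the comments on this issue (merge adjacent spans of the same entity group); the function name is illustrative, and the dict keys follow the pipeline's aggregated output format:
```python
# Merge aggregated entities whose character spans touch (end == next start) and that
# share the same entity_group. Taking the max score is one possible choice here.
def merge_adjacent_entities(entities):
    merged = []
    for ent in entities:
        if (
            merged
            and merged[-1]["entity_group"] == ent["entity_group"]
            and merged[-1]["end"] == ent["start"]
        ):
            merged[-1]["word"] += ent["word"]
            merged[-1]["end"] = ent["end"]
            merged[-1]["score"] = max(merged[-1]["score"], ent["score"])
        else:
            merged.append(dict(ent))
    return merged
```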
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23322/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23322/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23321
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23321/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23321/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23321/events
|
https://github.com/huggingface/transformers/pull/23321
| 1,707,219,850 |
PR_kwDOCUB6oc5QXcjS
| 23,321 |
Fix docker image
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,683 | 1,683 | 1,683 |
COLLABORATOR
| null |
# What does this PR do?
Due to the requirements from other packages, we end up getting `tensorflow-text==2.11`, which causes CI to fail from the beginning when we have `tensorflow==2.12`.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23321/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23321/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23321",
"html_url": "https://github.com/huggingface/transformers/pull/23321",
"diff_url": "https://github.com/huggingface/transformers/pull/23321.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23321.patch",
"merged_at": 1683891458000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23320
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23320/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23320/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23320/events
|
https://github.com/huggingface/transformers/pull/23320
| 1,706,918,920 |
PR_kwDOCUB6oc5QWcJF
| 23,320 |
Why crash the whole run when HFHub gives a 50x error?
|
{
"login": "ropoctl",
"id": 1702854,
"node_id": "MDQ6VXNlcjE3MDI4NTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/1702854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ropoctl",
"html_url": "https://github.com/ropoctl",
"followers_url": "https://api.github.com/users/ropoctl/followers",
"following_url": "https://api.github.com/users/ropoctl/following{/other_user}",
"gists_url": "https://api.github.com/users/ropoctl/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ropoctl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ropoctl/subscriptions",
"organizations_url": "https://api.github.com/users/ropoctl/orgs",
"repos_url": "https://api.github.com/users/ropoctl/repos",
"events_url": "https://api.github.com/users/ropoctl/events{/privacy}",
"received_events_url": "https://api.github.com/users/ropoctl/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"cc @sgugger ",
"> Thanks for opening a PR. Can you move this inside `_push_from_checkpoint` in the `try`/`finally` block?\r\n\r\nDone",
"> Thanks! Can you just run a quick `make style` on your branch to fix the quality issue?\r\n\r\nDone"
] | 1,683 | 1,684 | 1,684 |
CONTRIBUTOR
| null |
Logging an error and continuing is probably following the principle of least surprise.
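A minimal sketch of the intended behavior (illustrative only, not the exact `Trainer._push_from_checkpoint` code): if the Hub returns a transient error during a checkpoint push, log it and keep training instead of raising.
```python
import logging

logger = logging.getLogger(__name__)

def push_checkpoint_safely(push_fn, *args, **kwargs):
    try:
        return push_fn(*args, **kwargs)
    except Exception as err:  # e.g. a transient HTTP 50x from the Hub
        logger.error("Error pushing checkpoint to the Hub, continuing training: %s", err)
        return None
```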
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23320/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23320/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23320",
"html_url": "https://github.com/huggingface/transformers/pull/23320",
"diff_url": "https://github.com/huggingface/transformers/pull/23320.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23320.patch",
"merged_at": 1684266414000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23319
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23319/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23319/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23319/events
|
https://github.com/huggingface/transformers/pull/23319
| 1,706,909,211 |
PR_kwDOCUB6oc5QWaGm
| 23,319 |
[Reland] search model buffers for dtype as the last resort
|
{
"login": "cyyever",
"id": 17618148,
"node_id": "MDQ6VXNlcjE3NjE4MTQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/17618148?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cyyever",
"html_url": "https://github.com/cyyever",
"followers_url": "https://api.github.com/users/cyyever/followers",
"following_url": "https://api.github.com/users/cyyever/following{/other_user}",
"gists_url": "https://api.github.com/users/cyyever/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cyyever/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cyyever/subscriptions",
"organizations_url": "https://api.github.com/users/cyyever/orgs",
"repos_url": "https://api.github.com/users/cyyever/repos",
"events_url": "https://api.github.com/users/cyyever/events{/privacy}",
"received_events_url": "https://api.github.com/users/cyyever/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,683 | 1,686 | 1,684 |
CONTRIBUTOR
| null |
# What does this PR do?
PR #23159 was reverted due to broken tests. However, I still feel the need to check buffers for dtype when the module was frozen by converting its parameters to buffers. Now we search buffers only as a last resort. At that point, the old code would raise an exception because it tries to dereference None as a tuple, so we can safely insert more checks without breaking the current behavior. The old PR failed because the buffer dtype was returned before checking module.\_\_dict\_\_ , which breaks backward compatibility.
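A minimal sketch of the lookup order this PR aims for (illustrative, not the actual implementation in `modeling_utils`): prefer floating-point parameter dtypes, and only fall back to buffers when no such parameter exists.
```python
import torch

def get_module_dtype(module: torch.nn.Module) -> torch.dtype:
    for param in module.parameters():
        if param.is_floating_point():
            return param.dtype
    for buf in module.buffers():  # last resort, e.g. for modules frozen into buffers
        if buf.is_floating_point():
            return buf.dtype
    return torch.get_default_dtype()
```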
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23319/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23319/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23319",
"html_url": "https://github.com/huggingface/transformers/pull/23319",
"diff_url": "https://github.com/huggingface/transformers/pull/23319.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23319.patch",
"merged_at": 1684328708000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23318
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23318/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23318/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23318/events
|
https://github.com/huggingface/transformers/pull/23318
| 1,706,898,031 |
PR_kwDOCUB6oc5QWXvC
| 23,318 |
Add Multimodal heading and Document question answering in task_summary.mdx
|
{
"login": "y3sar",
"id": 16244698,
"node_id": "MDQ6VXNlcjE2MjQ0Njk4",
"avatar_url": "https://avatars.githubusercontent.com/u/16244698?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/y3sar",
"html_url": "https://github.com/y3sar",
"followers_url": "https://api.github.com/users/y3sar/followers",
"following_url": "https://api.github.com/users/y3sar/following{/other_user}",
"gists_url": "https://api.github.com/users/y3sar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/y3sar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/y3sar/subscriptions",
"organizations_url": "https://api.github.com/users/y3sar/orgs",
"repos_url": "https://api.github.com/users/y3sar/repos",
"events_url": "https://api.github.com/users/y3sar/events{/privacy}",
"received_events_url": "https://api.github.com/users/y3sar/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false |
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@y3sar - The current CI tests are failing because this PR triggers tests checking code snippets in the docs, and the doctest CI environment doesn't have ffmpeg installed. I've opened a PR #23329 to resolve this. Once I've confirmed everything works, we can re-run for this PR and if all green then merge :) ",
"@y3sar Sorry for the delay in this. There's been a lot of changes in how doctests are retrieved and run. Could you rebase onto main to get the latest updates? ",
"@amyeroberts sure thing I'll rebase and commit again",
"@y3sar Did you force push after rebasing the branch? The current commit history looks like what I get if I push but don't force. ",
"@amyeroberts no I did not force push. I rebased and pull pushed. What should I do to solve this problem?",
"@y3sar To rebase onto main you need to force push, as rebasing is a form of history rewrite. The steps are - running on this branch: \r\n\r\n* Get the most recent version of main: `git fetch upstream main`\r\n* Rebase: `git rebase upstream/main`\r\n* Push changes: `git push -f` \r\n\r\n",
"@amyeroberts ffmpeg bug still remains. What can I do to solve this?\r\n",
"@y3sar You'll want to modify [this line](https://github.com/huggingface/transformers/blob/860d11ff7c4235e0baaeee50d96cf1686781bdd3/.circleci/create_circleci_config.py#LL455C14-L455C15) to: \r\n\r\n```python\r\n\"sudo apt-get -y update && sudo apt-get install -y libsndfile1-dev espeak-ng time ffmpeg\",\r\n```",
"@amyeroberts should I commit in this branch or should I create another pull request?",
"@y3sar Commit on this branch :) ",
"@y3sar Apologies for the delay with this PR. Could you rebase again and push. We've been having issues with timeouts on the CI which should now be resolved. Additionally, could you update the extension for the `.mdx` to `.md` please? ",
"@amyeroberts thank you for remembering this pull request ๐๐๐๐\nYes ma'am.",
"@amyeroberts looks like the timeout issue remains. Should I change the code example?",
"@y3sar Let's draft in help from the king of tests @ydshieh :) \r\n\r\nI did a test run of this example on a CPU and it took 40s, so not fats but not super slow either. I think we could try:\r\n* Another checkpoint. Are there any other smaller DocQA checkpoints we could use? \r\n* Forcing this example to be skipped in the tests ",
"Let me see what's happening on the CI runner.",
"Hi, I checked. That `task_summary.md` is considered as a single test by `pytest` , but it has multiple examples (and the checkpoints are not always small).\r\n\r\nI can change the environment variable to avoid this situation. Once that change is merged, you can rebase and the CI should be green.",
"The PR is opened \r\n\r\nhttps://github.com/huggingface/transformers/pull/24753",
"That PR is merged into `main`. If you pull the latest `main` and rebase on it, we should be good to merge this PR.",
"Thank you the king of tests and @amyeroberts. Should I check for a smaller checkpoint?",
"> Should I check for a smaller checkpoint?\r\n\r\nIt would be always great to use a small(er) checkpoint for testing (if there is any) ๐ Thank you @y3sar \r\n",
"@ydshieh @amyeroberts I have found some small(er) models. The model that is being used currently is 803 mb. I have found a [checkpoint](https://huggingface.co/magorshunov/layoutlm-invoices) that is 500 mb. Which is still very big but smaller. And also I have found an even smaller [checkpoint](hf-tiny-model-private/tiny-random-LayoutLMForQuestionAnswering) that is used for testing by the huggingface internals. But the results are not that reliable.",
"@y3sar Thanks for looking into this! The tiny models are just for internal testing and probably not something we want to have in the docs (we want to be able to adapt to our needs without considering breaking changes). I'd say go for the 500MB one :) ",
"@y3sar Thanks again for iterating and adding this! "
] | 1,683 | 1,689 | 1,689 |
CONTRIBUTOR
| null |
# What does this PR do?
From #18926
This PR creates a new Multimodal heading in task_summary and adds a Document question answering task example under it.
# Who can review?
@stevhliu
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23318/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23318/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23318",
"html_url": "https://github.com/huggingface/transformers/pull/23318",
"diff_url": "https://github.com/huggingface/transformers/pull/23318.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23318.patch",
"merged_at": 1689598280000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23317
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23317/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23317/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23317/events
|
https://github.com/huggingface/transformers/pull/23317
| 1,706,862,868 |
PR_kwDOCUB6oc5QWQDH
| 23,317 |
Fix gptj jit.trace failure on GPU
|
{
"login": "sywangyi",
"id": 36058628,
"node_id": "MDQ6VXNlcjM2MDU4NjI4",
"avatar_url": "https://avatars.githubusercontent.com/u/36058628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sywangyi",
"html_url": "https://github.com/sywangyi",
"followers_url": "https://api.github.com/users/sywangyi/followers",
"following_url": "https://api.github.com/users/sywangyi/following{/other_user}",
"gists_url": "https://api.github.com/users/sywangyi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sywangyi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sywangyi/subscriptions",
"organizations_url": "https://api.github.com/users/sywangyi/orgs",
"repos_url": "https://api.github.com/users/sywangyi/repos",
"events_url": "https://api.github.com/users/sywangyi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sywangyi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"@yao-matrix",
"_The documentation is not available anymore as the PR was closed or merged._",
"@michaelbenayoun Could you have a second look please?",
"LGTM!\r\n"
] | 1,683 | 1,684 | 1,684 |
CONTRIBUTOR
| null |
# What does this PR do?
Fix the jit.trace failure for GPT-J.
Fixes # (issue)
jit.trace for GPT-J fails on an RTX 8000 with an error like:
ERROR: Tensor-valued Constant nodes differed in value across invocations. This often indicates that the tracer has encountered untraceable code.
Node:
%192 : Tensor = prim::Constant[value=<Tensor>](), scope: __module.transformer/__module.transformer.h.0/__module.transformer.h.0.attn # /skyrex01/wangyi/transformers/src/transformers/models/gptj/modeling_gptj.py:190:0
Source Location:
/skyrex01/wangyi/transformers/src/transformers/models/gptj/modeling_gptj.py(190): _get_embed_positions
/skyrex01/wangyi/transformers/src/transformers/models/gptj/modeling_gptj.py(220): forward
/skyrex05/wangyi/miniconda3/envs/deepspeed/lib/python3.9/site-packages/torch/nn/modules/module.py(1488): _slow_forward
/skyrex05/wangyi/miniconda3/envs/deepspeed/lib/python3.9/site-packages/torch/nn/modules/module.py(1501): _call_impl
/skyrex01/wangyi/transformers/src/transformers/models/gptj/modeling_gptj.py(309): forward
/skyrex05/wangyi/miniconda3/envs/deepspeed/lib/python3.9/site-packages/torch/nn/modules/module.py(1488): _slow_forward
/skyrex05/wangyi/miniconda3/envs/deepspeed/lib/python3.9/site-packages/torch/nn/modules/module.py(1501): _call_impl
/skyrex01/wangyi/transformers/src/transformers/models/gptj/modeling_gptj.py(688): forward
/skyrex05/wangyi/miniconda3/envs/deepspeed/lib/python3.9/site-packages/torch/nn/modules/module.py(1488): _slow_forward
/skyrex05/wangyi/miniconda3/envs/deepspeed/lib/python3.9/site-packages/torch/nn/modules/module.py(1501): _call_impl
/skyrex01/wangyi/transformers/src/transformers/models/gptj/modeling_gptj.py(853): forward
/skyrex05/wangyi/miniconda3/envs/deepspeed/lib/python3.9/site-packages/torch/nn/modules/module.py(1488): _slow_forward
/skyrex05/wangyi/miniconda3/envs/deepspeed/lib/python3.9/site-packages/torch/nn/modules/module.py(1501): _call_impl
/skyrex05/wangyi/miniconda3/envs/deepspeed/lib/python3.9/site-packages/torch/jit/_trace.py(1056): trace_module
/skyrex05/wangyi/miniconda3/envs/deepspeed/lib/python3.9/site-packages/torch/jit/_trace.py(794): trace
/skyrex01/wangyi/transformers/examples/pytorch/text-generation/run_generation.py(412): main
/skyrex01/wangyi/transformers/examples/pytorch/text-generation/run_generation.py(458): <module>
Comparison exception: The values for attribute 'device' do not match: cpu != cuda:0.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23317/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23317/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23317",
"html_url": "https://github.com/huggingface/transformers/pull/23317",
"diff_url": "https://github.com/huggingface/transformers/pull/23317.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23317.patch",
"merged_at": 1684932512000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23316
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23316/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23316/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23316/events
|
https://github.com/huggingface/transformers/issues/23316
| 1,706,856,566 |
I_kwDOCUB6oc5lvJB2
| 23,316 |
Using Trainer with DeepSpeed, the program hangs
|
{
"login": "zyh3826",
"id": 31238754,
"node_id": "MDQ6VXNlcjMxMjM4NzU0",
"avatar_url": "https://avatars.githubusercontent.com/u/31238754?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zyh3826",
"html_url": "https://github.com/zyh3826",
"followers_url": "https://api.github.com/users/zyh3826/followers",
"following_url": "https://api.github.com/users/zyh3826/following{/other_user}",
"gists_url": "https://api.github.com/users/zyh3826/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zyh3826/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zyh3826/subscriptions",
"organizations_url": "https://api.github.com/users/zyh3826/orgs",
"repos_url": "https://api.github.com/users/zyh3826/repos",
"events_url": "https://api.github.com/users/zyh3826/events{/privacy}",
"received_events_url": "https://api.github.com/users/zyh3826/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"I know why this happened, I set `group_by_length=True`, and since my dataset is very huge, this process is very slow. "
] | 1,683 | 1,683 | 1,683 |
NONE
| null |
### System Info
transformers: 4.26.1
deepspeed: 0.9.1
python: 3.8.0
platform: Ubuntu 18.04
pytorch: 1.12.0
tensorflow: 2.3.1
CUDA Version: 11.4
Driver Version: 470.82.01
### Who can help?
@sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Fine-tune Wav2Vec2, just like [this](https://github.com/huggingface/transformers/blob/main/examples/pytorch/speech-recognition/run_speech_recognition_ctc.py);
the only difference is using `DeepSpeed`.
the TrainingArguments:
```python
training_args = TrainingArguments(
output_dir=output_dir,
group_by_length=True,
per_device_train_batch_size=4,
evaluation_strategy='epoch',
save_strategy='epoch',
num_train_epochs=1,
fp16=True,
do_eval=False,
do_train=True,
gradient_checkpointing=True,
gradient_accumulation_steps=16,
logging_steps=50,
learning_rate=1e-4,
weight_decay=0.005,
warmup_steps=1000,
save_total_limit=2,
seed=seed,
remove_unused_columns=False,
local_rank=-1,
deepspeed='./ds_config_zero2.json'
)
```
the ds_config_zero2.json is copied from transformers, it looks like this:
```json
{
"fp16": {
"enabled": "auto",
"loss_scale": 0,
"loss_scale_window": 1000,
"initial_scale_power": 16,
"hysteresis": 2,
"min_loss_scale": 1
},
"bf16": {
"enabled": "auto"
},
"optimizer": {
"type": "AdamW",
"params": {
"lr": "auto",
"betas": "auto",
"eps": "auto",
"weight_decay": "auto"
}
},
"scheduler": {
"type": "WarmupLR",
"params": {
"warmup_min_lr": "auto",
"warmup_max_lr": "auto",
"warmup_num_steps": "auto"
}
},
"zero_optimization": {
"stage": 2,
"offload_optimizer": {
"device": "cpu",
"pin_memory": true
},
"allgather_partitions": true,
"allgather_bucket_size": 2e8,
"overlap_comm": true,
"reduce_scatter": true,
"reduce_bucket_size": 2e8,
"contiguous_gradients": true
},
"gradient_accumulation_steps": "auto",
"gradient_clipping": "auto",
"steps_per_print": 100,
"train_batch_size": "auto",
"train_micro_batch_size_per_gpu": "auto",
"wall_clock_breakdown": false
}
```
Everything is OK before training, but when it reaches the training step, the program hangs for several hours. Can anyone help me? Thanks a lot.

### Expected behavior
I think the correct behavior is to start training rather than hanging.
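(Following up on the resolution in the comments — the stall came from `group_by_length=True` on a very large dataset.) A minimal workaround sketch, assuming the training data is a `datasets.Dataset` with an `input_ids` column: precomputing a `length` column lets the length-grouped sampler skip iterating the whole dataset when training starts.
```python
# Sketch: precompute per-example lengths so the LengthGroupedSampler
# does not have to scan the entire dataset at the first training step.
data = data.map(lambda example: {"length": len(example["input_ids"])})

training_args = transformers.TrainingArguments(
    output_dir=output_dir,
    group_by_length=True,
    length_column_name="length",  # this is the default name, shown for clarity
    # ... remaining arguments as in the snippet above ...
)
```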
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23316/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23316/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23315
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23315/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23315/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23315/events
|
https://github.com/huggingface/transformers/issues/23315
| 1,706,583,718 |
I_kwDOCUB6oc5luGam
| 23,315 |
Remote text-to-image tool is down
|
{
"login": "freddyaboulton",
"id": 41651716,
"node_id": "MDQ6VXNlcjQxNjUxNzE2",
"avatar_url": "https://avatars.githubusercontent.com/u/41651716?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/freddyaboulton",
"html_url": "https://github.com/freddyaboulton",
"followers_url": "https://api.github.com/users/freddyaboulton/followers",
"following_url": "https://api.github.com/users/freddyaboulton/following{/other_user}",
"gists_url": "https://api.github.com/users/freddyaboulton/gists{/gist_id}",
"starred_url": "https://api.github.com/users/freddyaboulton/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/freddyaboulton/subscriptions",
"organizations_url": "https://api.github.com/users/freddyaboulton/orgs",
"repos_url": "https://api.github.com/users/freddyaboulton/repos",
"events_url": "https://api.github.com/users/freddyaboulton/events{/privacy}",
"received_events_url": "https://api.github.com/users/freddyaboulton/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"I still encouter the same Error.",
"Thanks a lot for the reports! It should be fixed now.",
"Thanks for the fix @LysandreJik !"
] | 1,683 | 1,683 | 1,683 |
CONTRIBUTOR
| null |
### System Info
- `transformers` version: 4.29.0
- Platform: macOS-12.2.1-arm64-arm-64bit
- Python version: 3.9.16
- Huggingface_hub version: 0.14.1
- Safetensors version: not installed
- PyTorch version (GPU?): 2.0.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@LysandreJik
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
from transformers import HfAgent
agent = HfAgent("https://api-inference.huggingface.co/models/bigcode/starcoder")
img3 = agent.run("Please generate an image of a rabbit wearing a space suit", remote=True)
```
```
==Explanation from the agent==
I will use the following tool: `image_generator` to generate an image.
==Code generated by the agent==
prompt = "rabbit wearing a space suit"
image = image_generator(prompt)
==Result==
โญโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ Traceback (most recent call last) โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโฎ
โ in <module>:2 โ
โ โ
โ 1 agent = HfAgent("https://api-inference.huggingface.co/models/bigcode/starcoder") โ
โ โฑ 2 img3 = agent.run("Please generate an image of a rabbit wearing a space suit", remote=Tru โ
โ 3 โ
โ โ
โ /Users/freddy/miniconda3/envs/gradio-tools/lib/python3.9/site-packages/transformers/tools/agents โ
โ .py:323 in run โ
โ โ
โ 320 โ โ if not return_code: โ
โ 321 โ โ โ print("\n\n==Result==") โ
โ 322 โ โ โ self.cached_tools = resolve_tools(code, self.toolbox, remote=remote, cached_ โ
โ โฑ 323 โ โ โ return evaluate(code, self.cached_tools, state=kwargs.copy()) โ
โ 324 โ โ else: โ
โ 325 โ โ โ tool_code = get_tool_creation_code(code, self.toolbox, remote=remote) โ
โ 326 โ โ โ return f"{tool_code}\n{code}" โ
โ โ
โ /Users/freddy/miniconda3/envs/gradio-tools/lib/python3.9/site-packages/transformers/tools/python โ
โ _interpreter.py:61 in evaluate โ
โ โ
โ 58 โ result = None โ
โ 59 โ for idx, node in enumerate(expression.body): โ
โ 60 โ โ try: โ
โ โฑ 61 โ โ โ line_result = evaluate_ast(node, state, tools) โ
โ 62 โ โ except InterpretorError as e: โ
โ 63 โ โ โ msg = f"Evaluation of the code stopped at line {idx} before the end because โ
โ 64 โ โ โ if chat_mode: โ
โ โ
โ /Users/freddy/miniconda3/envs/gradio-tools/lib/python3.9/site-packages/transformers/tools/python โ
โ _interpreter.py:98 in evaluate_ast โ
โ โ
โ 95 โ if isinstance(expression, ast.Assign): โ
โ 96 โ โ # Assignement -> we evaluate the assignement which should update the state โ
โ 97 โ โ # We return the variable assigned as it may be used to determine the final resul โ
โ โฑ 98 โ โ return evaluate_assign(expression, state, tools) โ
โ 99 โ elif isinstance(expression, ast.Call): โ
โ 100 โ โ # Function call -> we return the value of the function call โ
โ 101 โ โ return evaluate_call(expression, state, tools) โ
โ โ
โ /Users/freddy/miniconda3/envs/gradio-tools/lib/python3.9/site-packages/transformers/tools/python โ
โ _interpreter.py:139 in evaluate_assign โ
โ โ
โ 136 โ
โ 137 def evaluate_assign(assign, state, tools): โ
โ 138 โ var_names = assign.targets โ
โ โฑ 139 โ result = evaluate_ast(assign.value, state, tools) โ
โ 140 โ โ
โ 141 โ if len(var_names) == 1: โ
โ 142 โ โ state[var_names[0].id] = result โ
โ โ
โ /Users/freddy/miniconda3/envs/gradio-tools/lib/python3.9/site-packages/transformers/tools/python โ
โ _interpreter.py:101 in evaluate_ast โ
โ โ
โ 98 โ โ return evaluate_assign(expression, state, tools) โ
โ 99 โ elif isinstance(expression, ast.Call): โ
โ 100 โ โ # Function call -> we return the value of the function call โ
โ โฑ 101 โ โ return evaluate_call(expression, state, tools) โ
โ 102 โ elif isinstance(expression, ast.Constant): โ
โ 103 โ โ # Constant -> just return the value โ
โ 104 โ โ return expression.value โ
โ โ
โ /Users/freddy/miniconda3/envs/gradio-tools/lib/python3.9/site-packages/transformers/tools/python โ
โ _interpreter.py:167 in evaluate_call โ
โ โ
โ 164 โ # Todo deal with args โ
โ 165 โ args = [evaluate_ast(arg, state, tools) for arg in call.args] โ
โ 166 โ kwargs = {keyword.arg: evaluate_ast(keyword.value, state, tools) for keyword in call โ
โ โฑ 167 โ return func(*args, **kwargs) โ
โ 168 โ
โ 169 โ
โ 170 def evaluate_subscript(subscript, state, tools): โ
โ โ
โ /Users/freddy/miniconda3/envs/gradio-tools/lib/python3.9/site-packages/transformers/tools/base.p โ
โ y:399 in __call__ โ
โ โ
โ 396 โ โ output_image = self.tool_class is not None and self.tool_class.outputs == ["imag โ
โ 397 โ โ inputs = self.prepare_inputs(*args, **kwargs) โ
โ 398 โ โ if isinstance(inputs, dict): โ
โ โฑ 399 โ โ โ outputs = self.client(**inputs, output_image=output_image) โ
โ 400 โ โ else: โ
โ 401 โ โ โ outputs = self.client(inputs, output_image=output_image) โ
โ 402 โ โ if isinstance(outputs, list) and len(outputs) == 1 and isinstance(outputs[0], li โ
โ โ
โ /Users/freddy/miniconda3/envs/gradio-tools/lib/python3.9/site-packages/transformers/tools/base.p โ
โ y:721 in __call__ โ
โ โ
โ 718 โ โ โ
โ 719 โ โ # By default, parse the response for the user. โ
โ 720 โ โ if output_image: โ
โ โฑ 721 โ โ โ return self.decode_image(response.content) โ
โ 722 โ โ else: โ
โ 723 โ โ โ return response.json() โ
โ 724 โ
โ โ
โ /Users/freddy/miniconda3/envs/gradio-tools/lib/python3.9/site-packages/transformers/tools/base.p โ
โ y:698 in decode_image โ
โ โ
โ 695 โ โ โ
โ 696 โ โ from PIL import Image โ
โ 697 โ โ โ
โ โฑ 698 โ โ b64 = base64.b64decode(raw_image) โ
โ 699 โ โ _bytes = io.BytesIO(b64) โ
โ 700 โ โ return Image.open(_bytes) โ
โ 701 โ
โ โ
โ /Users/freddy/miniconda3/envs/gradio-tools/lib/python3.9/base64.py:87 in b64decode โ
โ โ
โ 84 โ โ s = s.translate(bytes.maketrans(altchars, b'+/')) โ
โ 85 โ if validate and not re.fullmatch(b'[A-Za-z0-9+/]*={0,2}', s): โ
โ 86 โ โ raise binascii.Error('Non-base64 digit found') โ
โ โฑ 87 โ return binascii.a2b_base64(s) โ
โ 88 โ
โ 89 โ
โ 90 def standard_b64encode(s): โ
โฐโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโฏ
Error: Incorrect padding
```
### Expected behavior
The image is decoded successfully
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23315/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23315/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23313
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23313/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23313/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23313/events
|
https://github.com/huggingface/transformers/pull/23313
| 1,706,514,167 |
PR_kwDOCUB6oc5QVE5G
| 23,313 |
[docs] Fix Agents and Tools docstring
|
{
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,683 | 1,683 | 1,683 |
MEMBER
| null |
Fixes the `kwarg` argument in the docstring to include what to expect, otherwise the `kwarg` gets mixed into the argument above it (see [here](https://huggingface.co/docs/transformers/main_classes/agent#transformers.Agent.chat.remote) for example).
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23313/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23313/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23313",
"html_url": "https://github.com/huggingface/transformers/pull/23313",
"diff_url": "https://github.com/huggingface/transformers/pull/23313.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23313.patch",
"merged_at": 1683905354000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23312
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23312/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23312/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23312/events
|
https://github.com/huggingface/transformers/pull/23312
| 1,706,508,574 |
PR_kwDOCUB6oc5QVDsL
| 23,312 |
Fixed slow tokenizer behavior to make it remove special tokens when asked
|
{
"login": "pedrogengo",
"id": 27240528,
"node_id": "MDQ6VXNlcjI3MjQwNTI4",
"avatar_url": "https://avatars.githubusercontent.com/u/27240528?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pedrogengo",
"html_url": "https://github.com/pedrogengo",
"followers_url": "https://api.github.com/users/pedrogengo/followers",
"following_url": "https://api.github.com/users/pedrogengo/following{/other_user}",
"gists_url": "https://api.github.com/users/pedrogengo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pedrogengo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pedrogengo/subscriptions",
"organizations_url": "https://api.github.com/users/pedrogengo/orgs",
"repos_url": "https://api.github.com/users/pedrogengo/repos",
"events_url": "https://api.github.com/users/pedrogengo/events{/privacy}",
"received_events_url": "https://api.github.com/users/pedrogengo/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23312). All of your documentation changes will be reflected on that endpoint.",
"cc @ArthurZucker ",
"I still need to do some more tests, but the idea is just add the special in the special tokens list. I will work more on it in this week :)",
"Hey! Thanks for working on this and good luck haha! Ping me if you need any help on fixing the tests. \r\nI think that the core bug is gonna be bit tricky to get right, but it should be fixed\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,683 | 1,687 | 1,687 |
CONTRIBUTOR
| null |
# What does this PR do?
Fixes #23250
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23312/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23312/timeline
| null | true |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23312",
"html_url": "https://github.com/huggingface/transformers/pull/23312",
"diff_url": "https://github.com/huggingface/transformers/pull/23312.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23312.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/23311
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23311/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23311/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23311/events
|
https://github.com/huggingface/transformers/issues/23311
| 1,706,415,789 |
I_kwDOCUB6oc5ltdat
| 23,311 |
TypeError: add got incompatible shapes for broadcasting: (512, 50, 1024), (1, 145, 1024).
|
{
"login": "alhuri",
"id": 46427957,
"node_id": "MDQ6VXNlcjQ2NDI3OTU3",
"avatar_url": "https://avatars.githubusercontent.com/u/46427957?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alhuri",
"html_url": "https://github.com/alhuri",
"followers_url": "https://api.github.com/users/alhuri/followers",
"following_url": "https://api.github.com/users/alhuri/following{/other_user}",
"gists_url": "https://api.github.com/users/alhuri/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alhuri/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alhuri/subscriptions",
"organizations_url": "https://api.github.com/users/alhuri/orgs",
"repos_url": "https://api.github.com/users/alhuri/repos",
"events_url": "https://api.github.com/users/alhuri/events{/privacy}",
"received_events_url": "https://api.github.com/users/alhuri/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hey @alhuri - as detailed in #22673 and #22780, the Italian CLIP repository is not maintained or affiliated with Hugging Face transformers. It is a standalone repository offering its own fine-tuning / evaluation scripts. As such, you're more likely to receive support regarding this issue by directly asking in the Italian CLIP repository: https://github.com/clip-italian/clip-italian/issues/new"
] | 1,683 | 1,684 | 1,684 |
NONE
| null |
### System Info
transformers version: 4.27.4
Platform: Linux-5.15.0-52-generic-x86_64-with-glibc2.29
Python version: 3.8.10
Huggingface_hub version: 0.13.4
PyTorch version (GPU?): 1.9.0+cpu (False)
Tensorflow version (GPU?): 2.9.1 (True)
Flax version (CPU?/GPU?/TPU?): 0.6.8 (cpu)
Jax version: 0.4.8
JaxLib version: 0.4.7
Using GPU in script?:
Using distributed or parallel set-up in script?:
### Who can help?
@sanchit-gandhi
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I am running the ImageNet evaluation script [here](https://github.com/clip-italian/clip-italian/blob/master/evaluation/CLIP_italian_ImageNet_Zero_Shot_Evaluation_.ipynb) to evaluate a version of CLIP that uses google/vit-large-patch32-384, provided [here](https://huggingface.co/LinaAlhuri/clip-vit-large-patch32). However, when running the prediction cell below,
```
top_ns = [1, 5, 10, 100]
acc_counters = [0. for _ in top_ns]
n = 0.
for i, (images, target) in enumerate(tqdm(loader)):
images = images
target = target.numpy()
# predict
image_features = image_model(images)
image_features = image_features / np.linalg.norm(image_features, axis=-1, keepdims=True)
logits = 100. * image_features @ zeroshot_weights
# measure accuracy
accs = accuracy(logits, target, topk=top_ns)
for j in range(len(top_ns)):
acc_counters[j] += accs[j]
n += images.shape[0]
tops = {f'top{top_ns[i]}': acc_counters[i] / n * 100 for i in range(len(top_ns))}
print(tops)
```
I am getting the below error
```
---------------------------------------------------------------------------
UnfilteredStackTrace Traceback (most recent call last)
in
8 # predict
----> 9 image_features = image_model(images)
10 image_features = image_features / np.linalg.norm(image_features, axis=-1, keepdims=True)
in (images)
24 language_model = lambda queries: np.asarray(model.get_text_features(*tokenize(queries)))
---> 25 image_model = lambda images: np.asarray(model.get_image_features(images.permute(0, 2, 3, 1).numpy(),))
[~/.local/lib/python3.8/site-packages/transformers/models/vision_text_dual_encoder/modeling_flax_vision_text_dual_encoder.py](https://file+.vscode-resource.vscode-cdn.net/c%3A/Users/96654/Downloads/~/.local/lib/python3.8/site-packages/transformers/models/vision_text_dual_encoder/modeling_flax_vision_text_dual_encoder.py) in get_image_features(self, pixel_values, params, dropout_rng, train)
405
--> 406 return self.module.apply(
407 {"params": params or self.params},
[~/.local/lib/python3.8/site-packages/jax/_src/traceback_util.py](https://file+.vscode-resource.vscode-cdn.net/c%3A/Users/96654/Downloads/~/.local/lib/python3.8/site-packages/jax/_src/traceback_util.py) in reraise_with_filtered_traceback(*args, **kwargs)
165 try:
--> 166 return fun(*args, **kwargs)
167 except Exception as e:
[~/.local/lib/python3.8/site-packages/flax/linen/module.py](https://file+.vscode-resource.vscode-cdn.net/c%3A/Users/96654/Downloads/~/.local/lib/python3.8/site-packages/flax/linen/module.py) in apply(self, variables, rngs, method, mutable, capture_intermediates, *args, **kwargs)
1484 method = _get_unbound_fn(method)
-> 1485 return apply(
1486 method, self,
...
---> 96 return lax_fn(x1, x2) if x1.dtype != np.bool_ else bool_lax_fn(x1, x2)
97 fn.__qualname__ = f"jax.numpy.{numpy_fn.__name__}"
98 fn = jit(fn, inline=True)
TypeError: add got incompatible shapes for broadcasting: (512, 50, 1024), (1, 145, 1024).
```
### Expected behavior
The script should run smoothly and report the accuracy results.
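For what it's worth, the shapes in the error look like a resolution mismatch rather than a bug in `get_image_features`: a patch-32 backbone at 384×384 has (384/32)² + 1 = 145 positions, while 50 = (224/32)² + 1 suggests the loader is producing 224×224 crops. A hedged preprocessing sketch that matches the 384 resolution (the normalization values below are the standard ImageNet statistics and are an assumption — they should match whatever the model's feature extractor actually uses):
```python
from torchvision import transforms

# Preprocess at the resolution google/vit-large-patch32-384 was trained on.
preprocess = transforms.Compose([
    transforms.Resize(384, interpolation=transforms.InterpolationMode.BICUBIC),
    transforms.CenterCrop(384),
    transforms.ToTensor(),
    transforms.Normalize(mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225)),
])
```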
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23311/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23311/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23310
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23310/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23310/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23310/events
|
https://github.com/huggingface/transformers/pull/23310
| 1,706,404,232 |
PR_kwDOCUB6oc5QUtW5
| 23,310 |
Fix test typos - audio feature extractors
|
{
"login": "LWprogramming",
"id": 13173037,
"node_id": "MDQ6VXNlcjEzMTczMDM3",
"avatar_url": "https://avatars.githubusercontent.com/u/13173037?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LWprogramming",
"html_url": "https://github.com/LWprogramming",
"followers_url": "https://api.github.com/users/LWprogramming/followers",
"following_url": "https://api.github.com/users/LWprogramming/following{/other_user}",
"gists_url": "https://api.github.com/users/LWprogramming/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LWprogramming/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LWprogramming/subscriptions",
"organizations_url": "https://api.github.com/users/LWprogramming/orgs",
"repos_url": "https://api.github.com/users/LWprogramming/repos",
"events_url": "https://api.github.com/users/LWprogramming/events{/privacy}",
"received_events_url": "https://api.github.com/users/LWprogramming/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,683 | 1,684 | 1,684 |
CONTRIBUTOR
| null |
# What does this PR do?
Fixes typos I discovered when I was writing [#23309](https://github.com/huggingface/transformers/pull/23309)
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sanchit-gandhi
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23310/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23310/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23310",
"html_url": "https://github.com/huggingface/transformers/pull/23310",
"diff_url": "https://github.com/huggingface/transformers/pull/23310.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23310.patch",
"merged_at": 1684167730000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23309
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23309/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23309/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23309/events
|
https://github.com/huggingface/transformers/pull/23309
| 1,706,400,343 |
PR_kwDOCUB6oc5QUsid
| 23,309 |
is_batched fix for remaining 2-D numpy arrays
|
{
"login": "LWprogramming",
"id": 13173037,
"node_id": "MDQ6VXNlcjEzMTczMDM3",
"avatar_url": "https://avatars.githubusercontent.com/u/13173037?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LWprogramming",
"html_url": "https://github.com/LWprogramming",
"followers_url": "https://api.github.com/users/LWprogramming/followers",
"following_url": "https://api.github.com/users/LWprogramming/following{/other_user}",
"gists_url": "https://api.github.com/users/LWprogramming/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LWprogramming/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LWprogramming/subscriptions",
"organizations_url": "https://api.github.com/users/LWprogramming/orgs",
"repos_url": "https://api.github.com/users/LWprogramming/repos",
"events_url": "https://api.github.com/users/LWprogramming/events{/privacy}",
"received_events_url": "https://api.github.com/users/LWprogramming/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hmm, this is odd-- I just noticed manually (locally) running `pytest` on the tests fixed here just shows they're all automatically skipped. Looking into configuration now...\r\n\r\nEDIT: oh, due to https://github.com/huggingface/transformers/issues/18355#issuecomment-1543277694 I hadn't had torch installed, torchaudio, a whole bunch of libraries. should work now",
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks for the quick changes @LWprogramming! Let's wait until #23223 is finalised before getting this merged so that we can update this PR with any amendments from there",
"> Thanks for the quick changes @LWprogramming! Let's wait until #23223 is finalised before getting this merged so that we can update this PR with any amendments from there\r\n\r\nOk, updated the code with comments from that PR, and ran tests + linters"
] | 1,683 | 1,684 | 1,684 |
CONTRIBUTOR
| null |
# What does this PR do?
Fix `is_batched` logic for 2-D numpy arrays, as described in https://github.com/huggingface/transformers/pull/23223#pullrequestreview-1423033751
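For illustration, a small sketch of the case this covers (the feature-extractor class and sampling rate are just examples): a 2-D numpy array of shape `(batch, num_samples)` should be recognized as a batch of mono waveforms rather than a single example.
```python
import numpy as np
from transformers import Wav2Vec2FeatureExtractor

extractor = Wav2Vec2FeatureExtractor()
speech = np.random.randn(2, 16000).astype(np.float32)  # (batch, num_samples)

features = extractor(speech, sampling_rate=16000, return_tensors="np")
print(features["input_values"].shape)  # expected: (2, 16000)
```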
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [X] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sanchit-gandhi
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23309/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23309/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23309",
"html_url": "https://github.com/huggingface/transformers/pull/23309",
"diff_url": "https://github.com/huggingface/transformers/pull/23309.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23309.patch",
"merged_at": 1684867055000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23308
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23308/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23308/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23308/events
|
https://github.com/huggingface/transformers/pull/23308
| 1,706,393,692 |
PR_kwDOCUB6oc5QUrIV
| 23,308 |
Revert "search buffers for dtype"
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,683 | 1,683 | 1,683 |
COLLABORATOR
| null |
Reverts huggingface/transformers#23159
This breaks the FSDP integration for some reason, so reverting for now as we investigate things further. The revert will be included in the patch 4.29.1
(Test that breaks:
```
RUN_SLOW=yes pytest -s -v tests/fsdp -k test_checkpointing
```
in Accelerate)
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23308/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23308/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23308",
"html_url": "https://github.com/huggingface/transformers/pull/23308",
"diff_url": "https://github.com/huggingface/transformers/pull/23308.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23308.patch",
"merged_at": 1683833519000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23307
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23307/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23307/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23307/events
|
https://github.com/huggingface/transformers/issues/23307
| 1,706,356,038 |
I_kwDOCUB6oc5ltO1G
| 23,307 |
compute_loss takes a lot of extra memory after saving checkpoint and causes OOM
|
{
"login": "HuiyingLi",
"id": 1331543,
"node_id": "MDQ6VXNlcjEzMzE1NDM=",
"avatar_url": "https://avatars.githubusercontent.com/u/1331543?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HuiyingLi",
"html_url": "https://github.com/HuiyingLi",
"followers_url": "https://api.github.com/users/HuiyingLi/followers",
"following_url": "https://api.github.com/users/HuiyingLi/following{/other_user}",
"gists_url": "https://api.github.com/users/HuiyingLi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/HuiyingLi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HuiyingLi/subscriptions",
"organizations_url": "https://api.github.com/users/HuiyingLi/orgs",
"repos_url": "https://api.github.com/users/HuiyingLi/repos",
"events_url": "https://api.github.com/users/HuiyingLi/events{/privacy}",
"received_events_url": "https://api.github.com/users/HuiyingLi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"There is a memory spike due to the model being in 8 bits probably, cc @younesbelkada ",
"Thanks for the prompt response. Do you have any insights why would it only happen after saving checkpoint?",
"Hmm, I just figured that since this is using lora, there is no need to save checkpoints anyways? \r\n\r\n> There is a memory spike due to the model being in 8 bits probably, cc @younesbelkada\r\n\r\n",
"Hi @HuiyingLi \r\nMaybe the default saving mechanism is the culprit, to be on the safe zone I suggest to save the adapters only, for that you should use a custom callback to properly save the adapter weights\r\nPlease have a look at the suggested solution here: https://discuss.huggingface.co/t/peft-lora-gpt-neox-loraconfig/35790 and let us know how it goes",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,683 | 1,687 | 1,687 |
NONE
| null |
### System Info
- `transformers` version: 4.28.1
- Platform: Linux-5.15.0-1033-aws-x86_64-with-glibc2.27
- Python version: 3.10.9
- Huggingface_hub version: 0.14.0
- Safetensors version: not installed
- PyTorch version (GPU?): 2.0.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?:
<img width="639" alt="image" src="https://github.com/huggingface/transformers/assets/1331543/1ade210a-8dd9-4d6a-a70b-b3d2982e1914">
- Using distributed or parallel set-up in script?: No
### Who can help?
@sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Every time `trainer.py:_save()` saves a checkpoint https://github.com/huggingface/transformers/blob/04ab5605fbb4ef207b10bf2772d88c53fc242e83/src/transformers/trainer.py#L2884
and training then resumes, entering the training_step() function and executing compute_loss() https://github.com/huggingface/transformers/blob/04ab5605fbb4ef207b10bf2772d88c53fc242e83/src/transformers/trainer.py#L2731
there is a memory usage spike that causes OOM.
Without executing `trainer.py:_save()` (i.e. during all previous normal training steps), compute_loss() does not allocate extra memory (except for the very first forward/backward pass, which is expected). No memory increase is observed during `trainer.py:_save()` itself either. I have changed save_steps to different values, and the forward-pass OOM is always triggered at the step right after saving a checkpoint.
Minimal training script:
```
base_model_name="EleutherAI/pythia-6.9b"
model = transformers.AutoModelForCausalLM.from_pretrained(
base_model_name,
load_in_8bit=True,
torch_dtype=torch.float16,
device_map={'': 0}
)
tokenizer = transformers.AutoTokenizer.from_pretrained(
base_model_name,
device_map={'':0}
)
model = peft.prepare_model_for_int8_training(model)
model = peft.get_peft_model(model, peft.LoraConfig(
r=lora_r,
lora_alpha=lora_alpha,
target_modules=["query_key_value", "xxx"],
lora_dropout=lora_dropout,
bias="none",
task_type="CAUSAL_LM",
))
training_args = transformers.TrainingArguments(
per_device_train_batch_size=8,
gradient_accumulation_steps=gradient_accumulation_steps,
num_train_epochs=epochs,
learning_rate=learning_rate,
fp16=True,
logging_steps=20,
output_dir=output_dir,
save_steps=5, #for debugging purpose
)
trainer = transformers.Trainer(
model=model,
train_dataset=data,
args=training_args,
data_collator=transformers.DataCollatorForLanguageModeling(
tokenizer,
mlm=False,
),
)
model.config.use_cache = False
result = trainer.train(resume_from_checkpoint=False)
model.save_pretrained(output_dir)
```
### Expected behavior
I would expect the saving action not to change the behavior of the forward pass. I am wondering why there is a memory spike and whether it can be solved.
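A possible mitigation sketch, following the callback approach suggested in the comments (the class name and directory layout are assumptions, not an official API): persist only the LoRA adapter weights at each checkpoint instead of relying on the default full save.
```python
import os
from transformers import TrainerCallback

class SavePeftAdapterCallback(TrainerCallback):
    """Sketch: write only the PEFT adapter weights when a checkpoint is saved."""

    def on_save(self, args, state, control, **kwargs):
        checkpoint_dir = os.path.join(args.output_dir, f"checkpoint-{state.global_step}")
        kwargs["model"].save_pretrained(os.path.join(checkpoint_dir, "adapter_model"))
        return control

trainer.add_callback(SavePeftAdapterCallback())
```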
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23307/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23307/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23306
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23306/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23306/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23306/events
|
https://github.com/huggingface/transformers/pull/23306
| 1,706,338,201 |
PR_kwDOCUB6oc5QUe9c
| 23,306 |
Fix image segmentation tool test
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23306). All of your documentation changes will be reflected on that endpoint."
] | 1,683 | 1,683 | 1,683 |
COLLABORATOR
| null |
# What does this PR do?
There were some `prompt` references left over from before the rename.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23306/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23306/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23306",
"html_url": "https://github.com/huggingface/transformers/pull/23306",
"diff_url": "https://github.com/huggingface/transformers/pull/23306.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23306.patch",
"merged_at": 1683830292000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23305
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23305/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23305/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23305/events
|
https://github.com/huggingface/transformers/pull/23305
| 1,706,320,614 |
PR_kwDOCUB6oc5QUbBh
| 23,305 |
Fix typo in gradio-tools docs
|
{
"login": "freddyaboulton",
"id": 41651716,
"node_id": "MDQ6VXNlcjQxNjUxNzE2",
"avatar_url": "https://avatars.githubusercontent.com/u/41651716?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/freddyaboulton",
"html_url": "https://github.com/freddyaboulton",
"followers_url": "https://api.github.com/users/freddyaboulton/followers",
"following_url": "https://api.github.com/users/freddyaboulton/following{/other_user}",
"gists_url": "https://api.github.com/users/freddyaboulton/gists{/gist_id}",
"starred_url": "https://api.github.com/users/freddyaboulton/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/freddyaboulton/subscriptions",
"organizations_url": "https://api.github.com/users/freddyaboulton/orgs",
"repos_url": "https://api.github.com/users/freddyaboulton/repos",
"events_url": "https://api.github.com/users/freddyaboulton/events{/privacy}",
"received_events_url": "https://api.github.com/users/freddyaboulton/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23305). All of your documentation changes will be reflected on that endpoint."
] | 1,683 | 1,683 | 1,683 |
CONTRIBUTOR
| null |
# What does this PR do?
Fixes a typo in the gradio-tools guide, `tool` vs `tools`, that prevents the code snippet from running.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23305/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23305/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23305",
"html_url": "https://github.com/huggingface/transformers/pull/23305",
"diff_url": "https://github.com/huggingface/transformers/pull/23305.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23305.patch",
"merged_at": 1683829888000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23304
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23304/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23304/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23304/events
|
https://github.com/huggingface/transformers/issues/23304
| 1,706,311,937 |
I_kwDOCUB6oc5ltEEB
| 23,304 |
Cannot decode image from remote image segmentation tool
|
{
"login": "freddyaboulton",
"id": 41651716,
"node_id": "MDQ6VXNlcjQxNjUxNzE2",
"avatar_url": "https://avatars.githubusercontent.com/u/41651716?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/freddyaboulton",
"html_url": "https://github.com/freddyaboulton",
"followers_url": "https://api.github.com/users/freddyaboulton/followers",
"following_url": "https://api.github.com/users/freddyaboulton/following{/other_user}",
"gists_url": "https://api.github.com/users/freddyaboulton/gists{/gist_id}",
"starred_url": "https://api.github.com/users/freddyaboulton/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/freddyaboulton/subscriptions",
"organizations_url": "https://api.github.com/users/freddyaboulton/orgs",
"repos_url": "https://api.github.com/users/freddyaboulton/repos",
"events_url": "https://api.github.com/users/freddyaboulton/events{/privacy}",
"received_events_url": "https://api.github.com/users/freddyaboulton/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The endpoint seems to have trouble indeed. It might need a restart @LysandreJik. Form what I see it complains about the inputs being named `image` and `label` and expects a `prompt`.",
"Thank you, nice catch @freddyaboulton! \r\n\r\nIt should be working now.",
"Thank you for the very speedy fix @LysandreJik ! "
] | 1,683 | 1,683 | 1,683 |
CONTRIBUTOR
| null |
### System Info
- `transformers` version: 4.29.0
- Platform: macOS-12.2.1-arm64-arm-64bit
- Python version: 3.9.16
- Huggingface_hub version: 0.14.1
- Safetensors version: not installed
- PyTorch version (GPU?): 2.0.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
@sgugger @LysandreJik
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Run the following script. The problem goes away when running the local tool.
```python
from transformers import HfAgent
from diffusers.utils import load_image
bunny_img = load_image("https://gradio-builds.s3.amazonaws.com/sample-images/SpaceBunny.png")
agent = HfAgent("https://api-inference.huggingface.co/models/bigcode/starcoder")
segmented_img = agent.run("Please locate the bunny in the image", image=bunny_img, remote=True)
```
```
in <module>:8 โ
โ โ
โ 5 โ
โ 6 agent = HfAgent("https://api-inference.huggingface.co/models/bigcode/starcoder") โ
โ 7 โ
โ โฑ 8 segmented_img = agent.run("Please locate the bunny in the image", image=bunny_img, remot โ
โ 9 โ
โ โ
โ /Users/freddy/miniconda3/envs/gradio-tools/lib/python3.9/site-packages/transformers/tools/agents โ
โ .py:323 in run โ
โ โ
โ 320 โ โ if not return_code: โ
โ 321 โ โ โ print("\n\n==Result==") โ
โ 322 โ โ โ self.cached_tools = resolve_tools(code, self.toolbox, remote=remote, cached_ โ
โ โฑ 323 โ โ โ return evaluate(code, self.cached_tools, state=kwargs.copy()) โ
โ 324 โ โ else: โ
โ 325 โ โ โ tool_code = get_tool_creation_code(code, self.toolbox, remote=remote) โ
โ 326 โ โ โ return f"{tool_code}\n{code}" โ
โ โ
โ /Users/freddy/miniconda3/envs/gradio-tools/lib/python3.9/site-packages/transformers/tools/python โ
โ _interpreter.py:61 in evaluate โ
โ โ
โ 58 โ result = None โ
โ 59 โ for idx, node in enumerate(expression.body): โ
โ 60 โ โ try: โ
โ โฑ 61 โ โ โ line_result = evaluate_ast(node, state, tools) โ
โ 62 โ โ except InterpretorError as e: โ
โ 63 โ โ โ msg = f"Evaluation of the code stopped at line {idx} before the end because โ
โ 64 โ โ โ if chat_mode: โ
โ โ
โ /Users/freddy/miniconda3/envs/gradio-tools/lib/python3.9/site-packages/transformers/tools/python โ
โ _interpreter.py:98 in evaluate_ast โ
โ โ
โ 95 โ if isinstance(expression, ast.Assign): โ
โ 96 โ โ # Assignement -> we evaluate the assignement which should update the state โ
โ 97 โ โ # We return the variable assigned as it may be used to determine the final resul โ
โ โฑ 98 โ โ return evaluate_assign(expression, state, tools) โ
โ 99 โ elif isinstance(expression, ast.Call): โ
โ 100 โ โ # Function call -> we return the value of the function call โ
โ 101 โ โ return evaluate_call(expression, state, tools) โ
โ โ
โ /Users/freddy/miniconda3/envs/gradio-tools/lib/python3.9/site-packages/transformers/tools/python โ
โ _interpreter.py:139 in evaluate_assign โ
โ โ
โ 136 โ
โ 137 def evaluate_assign(assign, state, tools): โ
โ 138 โ var_names = assign.targets โ
โ โฑ 139 โ result = evaluate_ast(assign.value, state, tools) โ
โ 140 โ โ
โ 141 โ if len(var_names) == 1: โ
โ 142 โ โ state[var_names[0].id] = result โ
โ โ
โ /Users/freddy/miniconda3/envs/gradio-tools/lib/python3.9/site-packages/transformers/tools/python โ
โ _interpreter.py:101 in evaluate_ast โ
โ โ
โ 98 โ โ return evaluate_assign(expression, state, tools) โ
โ 99 โ elif isinstance(expression, ast.Call): โ
โ 100 โ โ # Function call -> we return the value of the function call โ
โ โฑ 101 โ โ return evaluate_call(expression, state, tools) โ
โ 102 โ elif isinstance(expression, ast.Constant): โ
โ 103 โ โ # Constant -> just return the value โ
โ 104 โ โ return expression.value โ
โ โ
โ /Users/freddy/miniconda3/envs/gradio-tools/lib/python3.9/site-packages/transformers/tools/python โ
โ _interpreter.py:167 in evaluate_call โ
โ โ
โ 164 โ # Todo deal with args โ
โ 165 โ args = [evaluate_ast(arg, state, tools) for arg in call.args] โ
โ 166 โ kwargs = {keyword.arg: evaluate_ast(keyword.value, state, tools) for keyword in call โ
โ โฑ 167 โ return func(*args, **kwargs) โ
โ 168 โ
โ 169 โ
โ 170 def evaluate_subscript(subscript, state, tools): โ
โ โ
โ /Users/freddy/miniconda3/envs/gradio-tools/lib/python3.9/site-packages/transformers/tools/base.p โ
โ y:399 in __call__ โ
โ โ
โ 396 โ โ output_image = self.tool_class is not None and self.tool_class.outputs == ["imag โ
โ 397 โ โ inputs = self.prepare_inputs(*args, **kwargs) โ
โ 398 โ โ if isinstance(inputs, dict): โ
โ โฑ 399 โ โ โ outputs = self.client(**inputs, output_image=output_image) โ
โ 400 โ โ else: โ
โ 401 โ โ โ outputs = self.client(inputs, output_image=output_image) โ
โ 402 โ โ if isinstance(outputs, list) and len(outputs) == 1 and isinstance(outputs[0], li โ
โ โ
โ /Users/freddy/miniconda3/envs/gradio-tools/lib/python3.9/site-packages/transformers/tools/base.p โ
โ y:721 in __call__ โ
โ โ
โ 718 โ โ โ
โ 719 โ โ # By default, parse the response for the user. โ
โ 720 โ โ if output_image: โ
โ โฑ 721 โ โ โ return self.decode_image(response.content) โ
โ 722 โ โ else: โ
โ 723 โ โ โ return response.json() โ
โ 724 โ
โ โ
โ /Users/freddy/miniconda3/envs/gradio-tools/lib/python3.9/site-packages/transformers/tools/base.p โ
โ y:698 in decode_image โ
โ โ
โ 695 โ โ โ
โ 696 โ โ from PIL import Image โ
โ 697 โ โ โ
โ โฑ 698 โ โ b64 = base64.b64decode(raw_image) โ
โ 699 โ โ _bytes = io.BytesIO(b64) โ
โ 700 โ โ return Image.open(_bytes) โ
โ 701 โ
โ โ
โ /Users/freddy/miniconda3/envs/gradio-tools/lib/python3.9/base64.py:87 in b64decode โ
โ โ
โ 84 โ โ s = s.translate(bytes.maketrans(altchars, b'+/')) โ
โ 85 โ if validate and not re.fullmatch(b'[A-Za-z0-9+/]*={0,2}', s): โ
โ 86 โ โ raise binascii.Error('Non-base64 digit found') โ
โ โฑ 87 โ return binascii.a2b_base64(s) โ
โ 88 โ
โ 89 โ
โ 90 def standard_b64encode(s): โ
โฐโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโฏ
Error: Incorrect padding
```
### Expected behavior
The image is able to be correctly decoded
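
For context, the `Incorrect padding` error above is what `binascii` raises whenever the payload handed to `base64.b64decode` is not well-formed base64 (here the endpoint was returning something other than valid image bytes, as noted in the comments). The sketch below only illustrates that failure mode; the payload and the re-padding workaround are made up for the example and are not the actual fix, which happened on the endpoint side:

```python
import base64
import binascii

# Hypothetical payload: base64 for "hello" with its trailing "=" stripped,
# so its length is no longer a multiple of 4.
payload = b"aGVsbG8"

try:
    base64.b64decode(payload)
except binascii.Error as err:
    print(err)  # Incorrect padding -- same error as in the traceback above

# Re-adding the missing padding lets the decode succeed for this toy case.
padded = payload + b"=" * (-len(payload) % 4)
print(base64.b64decode(padded))  # b'hello'
```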
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23304/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23304/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23303
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23303/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23303/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23303/events
|
https://github.com/huggingface/transformers/issues/23303
| 1,706,203,364 |
I_kwDOCUB6oc5lspjk
| 23,303 |
Cannot import Tool if old version of huggingface_hub is installed
|
{
"login": "freddyaboulton",
"id": 41651716,
"node_id": "MDQ6VXNlcjQxNjUxNzE2",
"avatar_url": "https://avatars.githubusercontent.com/u/41651716?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/freddyaboulton",
"html_url": "https://github.com/freddyaboulton",
"followers_url": "https://api.github.com/users/freddyaboulton/followers",
"following_url": "https://api.github.com/users/freddyaboulton/following{/other_user}",
"gists_url": "https://api.github.com/users/freddyaboulton/gists{/gist_id}",
"starred_url": "https://api.github.com/users/freddyaboulton/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/freddyaboulton/subscriptions",
"organizations_url": "https://api.github.com/users/freddyaboulton/orgs",
"repos_url": "https://api.github.com/users/freddyaboulton/repos",
"events_url": "https://api.github.com/users/freddyaboulton/events{/privacy}",
"received_events_url": "https://api.github.com/users/freddyaboulton/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Thanks for reporting! This will be addressed in #23301 ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,683 | 1,686 | 1,686 |
CONTRIBUTOR
| null |
### System Info
- `transformers` version: 4.29.0
- Platform: macOS-12.2.1-arm64-arm-64bit
- Python version: 3.9.16
- Huggingface_hub version: 0.13.4
- Safetensors version: not installed
- PyTorch version (GPU?): not installed (NA)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@sgugger @LysandreJik
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. Have huggingface_hub 0.13.4 installed
2. Import Tool class
```python
from transformers import Tool
```
```
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
File ~/miniconda3/envs/gradio-tools/lib/python3.9/site-packages/transformers/utils/import_utils.py:1172, in _LazyModule._get_module(self, module_name)
1171 try:
-> 1172 return importlib.import_module("." + module_name, self.__name__)
1173 except Exception as e:
File ~/miniconda3/envs/gradio-tools/lib/python3.9/importlib/__init__.py:127, in import_module(name, package)
126 level += 1
--> 127 return _bootstrap._gcd_import(name[level:], package, level)
File <frozen importlib._bootstrap>:1030, in _gcd_import(name, package, level)
File <frozen importlib._bootstrap>:1007, in _find_and_load(name, import_)
File <frozen importlib._bootstrap>:986, in _find_and_load_unlocked(name, import_)
File <frozen importlib._bootstrap>:680, in _load_unlocked(spec)
File <frozen importlib._bootstrap_external>:850, in exec_module(self, module)
File <frozen importlib._bootstrap>:228, in _call_with_frames_removed(f, *args, **kwds)
File ~/miniconda3/envs/gradio-tools/lib/python3.9/site-packages/transformers/tools/base.py:27
26 from huggingface_hub import CommitOperationAdd, HfFolder, create_commit, create_repo, hf_hub_download, metadata_update
---> 27 from huggingface_hub.utils import RepositoryNotFoundError, get_session
29 from ..dynamic_module_utils import custom_object_save, get_class_from_dynamic_module, get_imports
ImportError: cannot import name 'get_session' from 'huggingface_hub.utils' (/Users/freddy/miniconda3/envs/gradio-tools/lib/python3.9/site-packages/huggingface_hub/utils/__init__.py)
```
### Expected behavior
Importing tool class does not raise an error.
Upgrading to huggingface_hub version 0.14.1 fixes the issue!
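
According to the comments this was addressed in #23301. As a minimal sketch, assuming `huggingface_hub>=0.14.1` is the version that ships `get_session`, an explicit version check before importing `Tool` gives a clearer message than the deep `ImportError` above; the guard itself is hypothetical and is not the actual transformers fix:

```python
# Hypothetical guard: fail fast with a readable message instead of a deep
# ImportError when the installed huggingface_hub is too old for Tool.
import huggingface_hub
from packaging import version

MIN_HUB_VERSION = "0.14.1"  # version the issue reports as working

if version.parse(huggingface_hub.__version__) < version.parse(MIN_HUB_VERSION):
    raise RuntimeError(
        f"transformers.Tool needs huggingface_hub>={MIN_HUB_VERSION}, "
        f"found {huggingface_hub.__version__}; please upgrade."
    )

from transformers import Tool  # safe to import once the check has passed
```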
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23303/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23303/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23302
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23302/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23302/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23302/events
|
https://github.com/huggingface/transformers/pull/23302
| 1,706,122,866 |
PR_kwDOCUB6oc5QTwhG
| 23,302 |
skip `test_run_squad_no_trainer` for now
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,683 | 1,683 | 1,683 |
COLLABORATOR
| null |
# What does this PR do?
Skip `test_run_squad_no_trainer` for now, as it is failing on `main`.
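
For readers unfamiliar with how such skips usually look, the snippet below is a generic illustration of marking a test as skipped with `unittest` until the failure on `main` is resolved; the test name matches the one in the title, but the class and reason string are placeholders rather than the repository's actual code:

```python
import unittest


class ExampleNoTrainerTests(unittest.TestCase):
    # Placeholder class; in the repository the decorator would be applied
    # to the real test method instead.
    @unittest.skip("Temporarily skipped: failing on main, see #23302")
    def test_run_squad_no_trainer(self):
        ...
```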
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23302/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23302/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23302",
"html_url": "https://github.com/huggingface/transformers/pull/23302",
"diff_url": "https://github.com/huggingface/transformers/pull/23302.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23302.patch",
"merged_at": 1683826009000
}
|