url (stringlengths 62-66) | repository_url (stringclasses 1 value) | labels_url (stringlengths 76-80) | comments_url (stringlengths 71-75) | events_url (stringlengths 69-73) | html_url (stringlengths 50-56) | id (int64 377M-2.15B) | node_id (stringlengths 18-32) | number (int64 1-29.2k) | title (stringlengths 1-487) | user (dict) | labels (list) | state (stringclasses 2 values) | locked (bool, 2 classes) | assignee (dict) | assignees (list) | comments (list) | created_at (int64 1.54k-1.71k) | updated_at (int64 1.54k-1.71k) | closed_at (int64 1.54k-1.71k, nullable) | author_association (stringclasses 4 values) | active_lock_reason (stringclasses 2 values) | body (stringlengths 0-234k, nullable) | reactions (dict) | timeline_url (stringlengths 71-75) | state_reason (stringclasses 3 values) | draft (bool, 2 classes) | pull_request (dict) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/22994
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22994/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22994/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22994/events
|
https://github.com/huggingface/transformers/issues/22994
| 1,683,599,806 |
I_kwDOCUB6oc5kWbG-
| 22,994 |
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation
|
{
"login": "oroojlooy",
"id": 20797260,
"node_id": "MDQ6VXNlcjIwNzk3MjYw",
"avatar_url": "https://avatars.githubusercontent.com/u/20797260?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/oroojlooy",
"html_url": "https://github.com/oroojlooy",
"followers_url": "https://api.github.com/users/oroojlooy/followers",
"following_url": "https://api.github.com/users/oroojlooy/following{/other_user}",
"gists_url": "https://api.github.com/users/oroojlooy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/oroojlooy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/oroojlooy/subscriptions",
"organizations_url": "https://api.github.com/users/oroojlooy/orgs",
"repos_url": "https://api.github.com/users/oroojlooy/repos",
"events_url": "https://api.github.com/users/oroojlooy/events{/privacy}",
"received_events_url": "https://api.github.com/users/oroojlooy/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
}
] |
open
| false | null |
[] |
[
"I cannot transfer the issue to the `trl` repo but it should be opened there since the bug is in their example.",
"@sgugger I already have posted it there, and it seems that the issue is not on TRL side. ",
"`torch.autograd.set_detect_anomaly(True)` reports that the root of issue might be in line 201 in `site-packages/transformers/models/gpt2/modeling_gpt2.py`\r\n\r\n<img width=\"941\" alt=\"image\" src=\"https://user-images.githubusercontent.com/20797260/234368588-cdd90db1-7ddd-4087-a7c5-296fd36d6019.png\">\r\n",
"Turned out that modifying line 201 as below solves the issue. \r\n`attn_weights = torch.where(causal_mask.clone(), attn_weights.to(attn_weights.dtype).clone(), mask_value)` \r\nRemember that it was:\r\n`attn_weights = torch.where(causal_mask, attn_weights.to(attn_weights.dtype), mask_value)` \r\n\r\n@sgugger Do you know if it is a safe modification? \r\n",
"This will break the flow of the gradients from the attention weights, so no it's a good fix.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Any update on this? I am having the same issue",
"I'm experiencing same issue with `WhisperModel`",
"Actually according to `torch`, the `clone()` operation is not breaking the flow of the gradient. see [here](https://pytorch.org/docs/stable/generated/torch.clone.html):\r\n> This function is differentiable, so gradients will flow back from the result of this operation to input. To create a tensor without an autograd relationship to input see [detach()](https://pytorch.org/docs/stable/generated/torch.Tensor.detach.html#torch.Tensor.detach). \r\n\r\nApparently, previous torch version did not check for these, but gradients were wrong (source is a lost stack overflow thread), there are at least 5 more issues linked to this one: #25130, #22225, #15677, #14179, #24996, #23087. Now wether this was fixed in the latest versions of torch or not is also a question, but all these issues use FSDP. \r\n\r\nEvery inplace operation seems to be causing this. But we have a lot of these π cc @pacman100 wondering what you would recommend? Should we make everything compatible removing inplace operations? Seems kind of impractible. \r\n\r\nThis wrapper : https://github.com/pytorch/pytorch/blob/main/torch/autograd/graph.py#L508 seems to add `clone()` wherever its needed. Might be something to do there? \r\n\r\nWe should also PIN the issue to redirect everyone that has FSDP + inplace operation issue. ",
"Also removing all inplace operations might make the memory used a bit higher, so would love if there was an alternative solution for FSDP/",
"I'm hitting the same issue, while trying to get the gpt2 embeddings of target via the following call:\r\n```\r\nself.gpt2.transformer.wte(target)\r\n```\r\nError message:\r\n```\r\nRuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation:\r\n```\r\n\r\nHowever, I did a trick like below, it succeeded.\r\n```\r\nself.gpt2.transformer.wte(target.clone())\r\n```\r\nBTW, gpt2 model is set on evaluation mode. `self.gpt2.eval()`",
"Hello, \r\n\r\n> cc @pacman100 wondering what you would recommend? Should we make everything compatible removing inplace operations? Seems kind of impractible\r\n\r\nI don't have any recommendations at present other than replacing in place operations. Let me try this example once to see if this persists with the latest PyTorch version.",
"Will mark as WIP as this is not something we are working on ",
"The error is triggered by DDP buffer broadcasting mechanism.\r\nWe need to set `broadcast_buffers=False` to avoid it.\r\n```python\r\nmodel = torch.nn.parallel.DistributedDataParallel(model, broadcast_buffers=False, ...)\r\n```\r\n\r\n\r\n"
] | 1,682 | 1,704 | null |
NONE
| null |
### System Info
transformers 4.28.1
torch 2.0.0
torchaudio 2.0.0
torchvision 0.15.0
huggingface-hub 0.13.4
trl 0.4.2.dev0
### Who can help?
Probably people from accelerate, trainer, and text:
@pacman100, @sgugger, @ArthurZucker
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
1. Install the TRL package from (https://github.com/lvwerra/trl)
2. Clone the package and go to `trl/examples/summarization/scripts`
3. Setup `accelerate config` like this
```
compute_environment: LOCAL_MACHINE
distributed_type: FSDP
downcast_bf16: 'no'
fsdp_config:
fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP
fsdp_backward_prefetch_policy: BACKWARD_PRE
fsdp_offload_params: false
fsdp_sharding_strategy: 1
fsdp_state_dict_type: FULL_STATE_DICT
fsdp_transformer_layer_cls_to_wrap: GPT2Block
machine_rank: 0
main_training_function: main
mixed_precision: bf16
num_machines: 1
num_processes: 4
rdzv_backend: static
same_network: true
tpu_env: []
tpu_use_cluster: false
tpu_use_sudo: false
use_cpu: false
```
4. call `accelerate launch reward_summarization.py`
This results in the following error:
```
/home/ubuntu/miniconda3/envs/trl/lib/python3.8/site-packages/torch/autograd/__init__.py:200: UserWarning: Error detected in WhereBackward0. Traceback of forward call that caused the error:
File "reward_summarization.py", line 203, in <module>
trainer.train(script_args.resume_from_checkpoint)
File "/home/ubuntu/miniconda3/envs/trl/lib/python3.8/site-packages/transformers/trainer.py", line 1662, in train
return inner_training_loop(
File "/home/ubuntu/miniconda3/envs/trl/lib/python3.8/site-packages/transformers/trainer.py", line 1929, in _inner_training_loop
tr_loss_step = self.training_step(model, inputs)
File "/home/ubuntu/miniconda3/envs/trl/lib/python3.8/site-packages/transformers/trainer.py", line 2699, in training_step
loss = self.compute_loss(model, inputs)
File "reward_summarization.py", line 185, in compute_loss
rewards_j = model(input_ids=inputs["input_ids_j"], attention_mask=inputs["attention_mask_j"])[0]
File "/home/ubuntu/miniconda3/envs/trl/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/home/ubuntu/miniconda3/envs/trl/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 1156, in forward
output = self._run_ddp_forward(*inputs, **kwargs)
File "/home/ubuntu/miniconda3/envs/trl/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 1110, in _run_ddp_forward
return module_to_run(*inputs[0], **kwargs[0]) # type: ignore[index]
File "/home/ubuntu/miniconda3/envs/trl/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/home/ubuntu/miniconda3/envs/trl/lib/python3.8/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1420, in forward
transformer_outputs = self.transformer(
File "/home/ubuntu/miniconda3/envs/trl/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/home/ubuntu/miniconda3/envs/trl/lib/python3.8/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 899, in forward
outputs = block(
File "/home/ubuntu/miniconda3/envs/trl/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/home/ubuntu/miniconda3/envs/trl/lib/python3.8/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 389, in forward
attn_outputs = self.attn(
File "/home/ubuntu/miniconda3/envs/trl/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/home/ubuntu/miniconda3/envs/trl/lib/python3.8/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 330, in forward
attn_output, attn_weights = self._attn(query, key, value, attention_mask, head_mask)
File "/home/ubuntu/miniconda3/envs/trl/lib/python3.8/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 201, in _attn
attn_weights = torch.where(causal_mask, attn_weights.to(attn_weights.dtype), mask_value)
(Triggered internally at /opt/conda/conda-bld/pytorch_1678402379298/work/torch/csrc/autograd/python_anomaly_mode.cpp:114.)
Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
Traceback (most recent call last):
File "reward_summarization.py", line 203, in <module>
trainer.train(script_args.resume_from_checkpoint)
File "/home/ubuntu/miniconda3/envs/trl/lib/python3.8/site-packages/transformers/trainer.py", line 1662, in train
return inner_training_loop(
File "/home/ubuntu/miniconda3/envs/trl/lib/python3.8/site-packages/transformers/trainer.py", line 1929, in _inner_training_loop
tr_loss_step = self.training_step(model, inputs)
File "/home/ubuntu/miniconda3/envs/trl/lib/python3.8/site-packages/transformers/trainer.py", line 2717, in training_step
loss.backward()
File "/home/ubuntu/miniconda3/envs/trl/lib/python3.8/site-packages/torch/_tensor.py", line 487, in backward
torch.autograd.backward(
File "/home/ubuntu/miniconda3/envs/trl/lib/python3.8/site-packages/torch/autograd/__init__.py", line 200, in backward
Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [CUDABoolType [1, 1, 385, 385]] is at version 3; expected version 2 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck!
```
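For context, this message is PyTorch's autograd saved-tensor version check firing. A minimal standalone sketch (plain PyTorch, not related to the transformers code above, purely to illustrate the mechanism) that triggers the same error:
```python
import torch

a = torch.rand(3, requires_grad=True)
b = a.exp()          # ExpBackward saves its output `b` for the backward pass
b.add_(1)            # in-place op bumps the version counter of `b` after it was saved
b.sum().backward()   # RuntimeError: one of the variables needed for gradient computation
                     # has been modified by an inplace operation
```
In the traceback above, the flagged tensor (`[CUDABoolType [1, 1, 385, 385]]`) is the boolean causal mask consumed by the `torch.where` call in `_attn`.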
### Expected behavior
I expect it to run fine, but it ends in that error. Although it is not native HuggingFace code, it seems that the issue is from the gpt2 trainer code.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22994/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22994/timeline
|
reopened
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/22993
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22993/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22993/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22993/events
|
https://github.com/huggingface/transformers/issues/22993
| 1,683,582,306 |
I_kwDOCUB6oc5kWW1i
| 22,993 |
Using data collator in `Pipeline`
|
{
"login": "neilkimn",
"id": 37108154,
"node_id": "MDQ6VXNlcjM3MTA4MTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/37108154?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/neilkimn",
"html_url": "https://github.com/neilkimn",
"followers_url": "https://api.github.com/users/neilkimn/followers",
"following_url": "https://api.github.com/users/neilkimn/following{/other_user}",
"gists_url": "https://api.github.com/users/neilkimn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/neilkimn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/neilkimn/subscriptions",
"organizations_url": "https://api.github.com/users/neilkimn/orgs",
"repos_url": "https://api.github.com/users/neilkimn/repos",
"events_url": "https://api.github.com/users/neilkimn/events{/privacy}",
"received_events_url": "https://api.github.com/users/neilkimn/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"I'm not too sure where the question is here. Each `Pipeline` has the pre/post-processing logic they need implemented in their `preprocess` and `postprocess` methods."
] | 1,682 | 1,683 | 1,683 |
NONE
| null |
Hello,
I am in the process of moving a bunch of pre- and post-processing logic to use the [Pipeline](https://huggingface.co/docs/transformers/main_classes/pipelines). In my original code I would use a data collator in my `Trainer` constructor to take care of padding inputs among other things. The `Trainer` then takes care of collating data for both training and evaluation.
I could move the logic within the collator into the processing of the pipeline, but I want to keep the code as similar as possible when using the `Trainer` for training specifically, and when I use the pipeline during inference or evaluation.
What could be the best way to go about this? In the more general case I could just scrap the pipeline and opt for a torch dataloader and run evaluation with that, but I am interested in keeping the pipeline around as I am inheriting some logic for aggregation from it. I also think the ability to encapsulate pre- and post-processing in the pipeline is useful.
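For reference, here is a minimal sketch of the DataLoader alternative mentioned above, reusing the same collator outside the `Trainer`. The checkpoint name and toy inputs are placeholders, not taken from my actual setup:
```python
from torch.utils.data import DataLoader
from transformers import AutoModelForSequenceClassification, AutoTokenizer, DataCollatorWithPadding

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")
collator = DataCollatorWithPadding(tokenizer=tokenizer)  # same collator as passed to the Trainer

# Variable-length examples; the collator pads each batch the way the Trainer would during training.
texts = ["a short example", "a somewhat longer evaluation example that needs padding"]
features = [tokenizer(t) for t in texts]

eval_loader = DataLoader(features, batch_size=2, collate_fn=collator)
for batch in eval_loader:
    outputs = model(**batch)
    # ...post-processing that mirrors the pipeline's postprocess step
```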
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22993/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22993/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/22992
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22992/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22992/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22992/events
|
https://github.com/huggingface/transformers/issues/22992
| 1,683,535,863 |
I_kwDOCUB6oc5kWLf3
| 22,992 |
Weird behavior for initial tokens in BERT Base Cased
|
{
"login": "mawilson1234",
"id": 5022150,
"node_id": "MDQ6VXNlcjUwMjIxNTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/5022150?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mawilson1234",
"html_url": "https://github.com/mawilson1234",
"followers_url": "https://api.github.com/users/mawilson1234/followers",
"following_url": "https://api.github.com/users/mawilson1234/following{/other_user}",
"gists_url": "https://api.github.com/users/mawilson1234/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mawilson1234/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mawilson1234/subscriptions",
"organizations_url": "https://api.github.com/users/mawilson1234/orgs",
"repos_url": "https://api.github.com/users/mawilson1234/repos",
"events_url": "https://api.github.com/users/mawilson1234/events{/privacy}",
"received_events_url": "https://api.github.com/users/mawilson1234/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hey! Thanks for reporting, I believe that this is somewhat expected, as the `mask-fill` pipeline does not exactly use just `argmax`. There is a bit more process involved in how to obtain the correct output. This is normal! π€ ",
"I understand why the `fill-mask` output differs from the output when using `argmax` now, but is it still expected that it predicts `.` instead of `The` when using `argmax`?",
"No I believe that the most important is that it correctly predicts the masked word, which the loss will be computed on. Other tokens are ignored",
"Got it, thanks!"
] | 1,682 | 1,685 | 1,685 |
NONE
| null |
### System Info
transformers version: 4.27.4
python version: 3.8.8
### Who can help?
@ArthurZucker @younesbelkada
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I'm running a simple MLM task using BERT Base Cased. I'm noticing weird behavior when decoding the first token (after the CLS token) in the output. Here's an example:
```python
from transformers import AutoModelForMaskedLM, AutoTokenizer
import torch
model = AutoModelForMaskedLM.from_pretrained('bert-base-cased')
tokenizer = AutoTokenizer.from_pretrained('bert-base-cased')
inputs = tokenizer(['The laws have done [MASK] harm.'], return_tensors='pt')
with torch.no_grad():
outputs = model(**inputs)
tokenizer.batch_decode(torch.argmax(outputs.logits, dim=-1))
```
This produces the output: `.. laws have done no harm..`. I know the first and last dots correspond to predictions for the CLS and EOS tokens, so they should be ignored, but the second dot is where `The` should be. This happens with a variety of words in many sentences, but it doesn't always happen for the same words. It does seem to be paying attention to this initial word even when it is not produced, since the results differ depending on the initial word, even if it's not decoded from the output. But it looks weird. Is this normal behavior?
When I use the fill-mask pipeline, I get a different result, but I'm assuming that the pipeline just internally uses string replacement for the mask token rather than actually decoding the full output.
```python
from transformers import pipeline
pipe = pipeline('fill-mask', 'bert-base-cased')
pipe('The laws have done [MASK] harm.')[0]['sequence']
```
Produces `The laws have done no harm.`, as expected.
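For comparison, a small sketch (continuing from the first snippet above; illustrative only) that inspects only the logits at the `[MASK]` position, which is effectively what the pipeline reports:
```python
# Position of the [MASK] token in the input.
mask_index = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
# Top prediction for that position only.
predicted_id = outputs.logits[0, mask_index].argmax(dim=-1)
print(tokenizer.decode(predicted_id))  # "no" for this example
```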
### Expected behavior
I'd expect that given tokens would be retained as is, for the most part. Sentence initial `The` and `I` seem to cause this problem a lot, which is odd, given I'd expect those to be well-attested in the training data.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22992/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22992/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/22991
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22991/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22991/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22991/events
|
https://github.com/huggingface/transformers/pull/22991
| 1,683,318,502 |
PR_kwDOCUB6oc5PHH8G
| 22,991 |
🌐 [i18n-KO] Translated `model_sharing.mdx` to Korean
|
{
"login": "0525hhgus",
"id": 47289574,
"node_id": "MDQ6VXNlcjQ3Mjg5NTc0",
"avatar_url": "https://avatars.githubusercontent.com/u/47289574?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/0525hhgus",
"html_url": "https://github.com/0525hhgus",
"followers_url": "https://api.github.com/users/0525hhgus/followers",
"following_url": "https://api.github.com/users/0525hhgus/following{/other_user}",
"gists_url": "https://api.github.com/users/0525hhgus/gists{/gist_id}",
"starred_url": "https://api.github.com/users/0525hhgus/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/0525hhgus/subscriptions",
"organizations_url": "https://api.github.com/users/0525hhgus/orgs",
"repos_url": "https://api.github.com/users/0525hhgus/repos",
"events_url": "https://api.github.com/users/0525hhgus/events{/privacy}",
"received_events_url": "https://api.github.com/users/0525hhgus/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"> In general, additional proofreading is necessary. It is recommended to carefully review the machine-translations and revise any sections that appear unclear or inaccurate.\r\n\r\nThank you for the advice. I will proceed with more detailed proofreading in machine translation!",
"May you please review this PR? π\r\n@sgugger, @ArthurZucker, @eunseojo"
] | 1,682 | 1,691 | 1,682 |
CONTRIBUTOR
| null |
<!-- Please title the PR "🌐 [i18n-KO] Translated `<your_file>.mdx` to Korean" -->
# What does this PR do?
Translated the `model_sharing.mdx` file of the documentation to Korean.
Thank you in advance for your review.
Part of https://github.com/huggingface/transformers/issues/20179
<!-- A record is kept on the main issue! If you practiced on the PseudoLab repo, please remove this part. Thank you! :smile: -->
## Before reviewing
- [x] Check for missing / redundant translations
- [x] Grammar check
- [x] Review or add new terms to glossary
- [x] Check Inline TOC (e.g. `[[lowercased-header]]`)
- [x] Check live-preview for gotchas
## Who can review? (Initial)
<!-- 1. Please only reveal the comment below, asking the PseudoLab team members for a review, after all the checks above are complete! -->
Team PseudoLab, may you please review this PR?
@0525hhgus, @KIHOON71, @sim-so, @gabrielwithappy, @HanNayeoniee, @wonhyeongseo, @jungnerd
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review? (Final)
<!-- 2. Please only reveal the comment below, asking the Hugging Face staff for a review, after the review with the PseudoLab team members is complete! -->
@sgugger, @ArthurZucker, @eunseojo May you please review this PR?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22991/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22991/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22991",
"html_url": "https://github.com/huggingface/transformers/pull/22991",
"diff_url": "https://github.com/huggingface/transformers/pull/22991.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22991.patch",
"merged_at": 1682688033000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22990
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22990/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22990/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22990/events
|
https://github.com/huggingface/transformers/pull/22990
| 1,683,194,772 |
PR_kwDOCUB6oc5PGtKc
| 22,990 |
Fix None value when adding info to auto_map
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"There are some, it's juss the case when there is only a fast tokenizer that is custom that is not tested."
] | 1,682 | 1,682 | 1,682 |
COLLABORATOR
| null |
# What does this PR do?
This should fix the issue encountered in #22983: before testing if `--` is in a value of the auto map, we need to make sure it's not `None`.
Fixes #22983
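A minimal, simplified illustration of the guard (not the actual `transformers` implementation; the helper name and `auto_map` layout are made up for the example):
```python
def add_repo_info(auto_map, repo_id):
    # Values may be None, e.g. when only a custom fast tokenizer exists,
    # so check for None before testing for the "--" marker.
    return {
        key: [v if v is None or "--" in v else f"{repo_id}--{v}" for v in values]
        for key, values in auto_map.items()
    }

print(add_repo_info({"AutoTokenizer": [None, "tok.MyTokenizerFast"]}, "user/repo"))
# {'AutoTokenizer': [None, 'user/repo--tok.MyTokenizerFast']}
```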
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22990/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22990/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22990",
"html_url": "https://github.com/huggingface/transformers/pull/22990",
"diff_url": "https://github.com/huggingface/transformers/pull/22990.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22990.patch",
"merged_at": 1682534376000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22989
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22989/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22989/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22989/events
|
https://github.com/huggingface/transformers/pull/22989
| 1,683,188,209 |
PR_kwDOCUB6oc5PGrvd
| 22,989 |
fix bug auto loading llamatokenizer
|
{
"login": "vxfla",
"id": 43680053,
"node_id": "MDQ6VXNlcjQzNjgwMDUz",
"avatar_url": "https://avatars.githubusercontent.com/u/43680053?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vxfla",
"html_url": "https://github.com/vxfla",
"followers_url": "https://api.github.com/users/vxfla/followers",
"following_url": "https://api.github.com/users/vxfla/following{/other_user}",
"gists_url": "https://api.github.com/users/vxfla/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vxfla/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vxfla/subscriptions",
"organizations_url": "https://api.github.com/users/vxfla/orgs",
"repos_url": "https://api.github.com/users/vxfla/repos",
"events_url": "https://api.github.com/users/vxfla/events{/privacy}",
"received_events_url": "https://api.github.com/users/vxfla/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Lol no, but nice try. Maybe decapoda-research/llama-7b-hf should merge one of the multiple PRs they received that fixes the tokenizer on their side.",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22989). All of your documentation changes will be reflected on that endpoint.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,682 | 1,685 | 1,685 |
NONE
| null |
# What does this PR do?
The Hugging Face decapoda-research/llama-7b-hf config sets the tokenizer name to LLaMATokenizer, while in transformers it is LlamaTokenizer.
This unifies the name as LLaMATokenizer, so that we can use AutoTokenizer to load the llama tokenizer.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22989/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22989/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22989",
"html_url": "https://github.com/huggingface/transformers/pull/22989",
"diff_url": "https://github.com/huggingface/transformers/pull/22989.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22989.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/22988
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22988/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22988/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22988/events
|
https://github.com/huggingface/transformers/pull/22988
| 1,683,108,239 |
PR_kwDOCUB6oc5PGaY2
| 22,988 |
[`DocTest`] Fix correct checkpoint
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,682 | 1,682 | 1,682 |
CONTRIBUTOR
| null |
# What does this PR do?
Related failing test: https://github.com/huggingface/transformers/actions/runs/4793034296/jobs/8525118203
Sets the correct (and lighter) checkpoint name in the docstring
cc @amyeroberts @ArthurZucker
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22988/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22988/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22988",
"html_url": "https://github.com/huggingface/transformers/pull/22988",
"diff_url": "https://github.com/huggingface/transformers/pull/22988.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22988.patch",
"merged_at": 1682428836000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22987
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22987/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22987/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22987/events
|
https://github.com/huggingface/transformers/pull/22987
| 1,682,999,394 |
PR_kwDOCUB6oc5PGC2a
| 22,987 |
[Doctests] Refactor doctests + add CI
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Local tests run, I now have this strange error: \r\n```python \r\n_____ ERROR collecting src/transformers/models/whisper/modeling_whisper.py _____\r\nimport file mismatch:\r\nimported module 'transformers.models.whisper.modeling_whisper' has this __file__ attribute:\r\n /home/circleci/.pyenv/versions/3.8.12/lib/python3.8/site-packages/transformers/models/whisper/modeling_whisper.py\r\nwhich is not the same as the test file we want to collect:\r\n /home/circleci/transformers/src/transformers/models/whisper/modeling_whisper.py\r\nHINT: remove __pycache__ / .pyc files and/or use a unique basename for your test file modules\r\n```",
"@ArthurZucker Regarding the error you mentioned in the above comment, could you provide the command you used to launch?\r\n\r\nAlso, for this PR to be merged, two thing we should check are:\r\n- for a single modeling file, how long it will take to run the doctest against it on CircleCI, and if it will fit in the available memory.\r\n - (we should probably run the doctest against a few existing models)\r\n- There should NOT have multiple modeling files being included in `test_to_run` for doctest.\r\n - This PR currently checks if a file is in `utils/documentation_tests.txt`, but that file doesn't contain all existing modeling files if I remember correctly.",
"Regarding \r\n\r\n> There should NOT have multiple modeling files being included in test_to_run for doctest.\r\nThis PR currently checks if a file is in utils/documentation_tests.txt, but that file doesn't contain all existing modeling files if I remember correctly.\r\n\r\nActually the CI checks doc for all files in the diff that end in .py and .mdx. This is prone to changes! Fully open to recommandations.\r\n\r\nFor slow test, I skip all codeblocks that includ \"cuda\" in it, we can refine the filter. \r\n\r\n",
"CUDA tests are properly skipped! : \r\n```python \r\n(11 durations < 0.005s hidden. Use -vv to show these durations.)\r\n=================================================================================================================================================================== short test summary info ====================================================================================================================================================================\r\nPASSED docs/source/en/testing.mdx::testing.mdx\r\nPASSED docs/source/en/testing.mdx::testing.mdx\r\nPASSED src/transformers/models/whisper/modeling_whisper.py::transformers.models.whisper.modeling_whisper.WhisperForAudioClassification.forward\r\nPASSED src/transformers/models/whisper/modeling_whisper.py::transformers.models.whisper.modeling_whisper.WhisperForAudioClassification.forward\r\nPASSED src/transformers/models/whisper/modeling_whisper.py::transformers.models.whisper.modeling_whisper.WhisperForAudioClassification.forward\r\nPASSED src/transformers/models/whisper/modeling_whisper.py::transformers.models.whisper.modeling_whisper.WhisperForConditionalGeneration.forward\r\nPASSED src/transformers/models/whisper/modeling_whisper.py::transformers.models.whisper.modeling_whisper.WhisperForConditionalGeneration.forward\r\nPASSED src/transformers/models/whisper/modeling_whisper.py::transformers.models.whisper.modeling_whisper.WhisperForConditionalGeneration.forward\r\nPASSED src/transformers/models/whisper/modeling_whisper.py::transformers.models.whisper.modeling_whisper.WhisperModel.forward\r\nPASSED src/transformers/models/whisper/modeling_whisper.py::transformers.models.whisper.modeling_whisper.WhisperModel.forward\r\nPASSED src/transformers/models/whisper/modeling_whisper.py::transformers.models.whisper.modeling_whisper.WhisperModel.forward\r\nPASSED docs/source/en/model_doc/wav2vec2.mdx::wav2vec2.mdx\r\nPASSED docs/source/en/model_doc/wav2vec2.mdx::wav2vec2.mdx\r\nSKIPPED [1] <doctest testing.mdx[0]>:1: Codeblock line xxx in xxxx has cuda involved. Skipping\r\nSKIPPED [1] <doctest wav2vec2.mdx[0]>:1: Codeblock line xxx in xxxx has cuda involved. Skipping\r\n\r\nResults (19.65s):\r\n 3 passed\r\n 2 skipped\r\n(py39) arthur_huggingface_co@arthur-gpu-2:~/transformers$ \r\n```",
"Warning will be imporved",
"Looked this PR and played a bit with it: so far so good π \r\n\r\nOne thing I found:\r\n```\r\nSKIP_CUDA_DOCTEST=1 pytest -v --doctest-modules --doctest-glob=\"*.mdx\" docs/source/en/model_doc/longt5.mdx\r\n```\r\nThe doctest is running while I assume it will be skipped as it has `cuda` thing.",
"What should be detailed is that only the codeblocks (and not the entire file) should be skipped. This might be why longt5 is not skipped! \r\nIβll be off for a while, I leave this in your hands! π€π€",
"For info: I will take over this PR to try to merge it earlier.",
"Convert to draft for now, as more changes to deal with `cuda` is required.",
"cc @amy @sgugger \r\n\r\nI think the PR is ready (You can see the changes I made [here](https://github.com/huggingface/transformers/pull/22987/files/a5a337d856cdd20c8acac8de04823ae24469f6e1..531fc2673e081655a1fff70a25b9e60c30c0dad8)), but a few points need to be considered before I merge it.\r\n\r\n\r\n\r\n- no model/dataset being cached as we have done for our daily CI (with our own GCP firestore): so in each PR CI run, they will always be re-downloaded.\r\n- timeout will give a status code `124` and the job will be failed (`red`). I am not sure this is really what we want to see on PRs.\r\n - ~probably there is some hacky way to avoid this. Not sure.~\r\n- We haven't checked all current files will pass the doctesting. For example, only a subset of modeling files and doc files are tested in our daiily doctest run.\r\n - I assume we don't want to see surprising failed doctest on PR CI. Any suggestion? Like a list of (temporarily) ignored files, or try to run all the files and fix all the failure (..?) before this PR being merged?",
"All very good questions!\r\n\r\n> no model/dataset being cached as we have done for our daily CI (with our own GCP firestore): so in each PR CI run, they will always be re-downloaded.\r\n\r\nThat's okay since the test should only be triggered when someone modifies a guide/docstring. This is not in every PR.\r\n\r\n> timeout will give a status code 124 and the job will be failed (red). I am not sure this is really what we want to see on PRs.\r\n\r\nIndeed, if you can filter that one out to be green instead, it would be better\r\n\r\n> We haven't checked all current files will pass the doctesting.\r\n\r\nThe files that are tested on the CI should be present in the list of tests for doctests (that list will be remvoed one day when we get to 100% coverage but we're not there yet).",
"@sgugger Let me know if [the changes](https://github.com/huggingface/transformers/pull/22987/files/7978f9ac0fa915a1abe074be68d660f44b8c1349..844b46df871c2db717083fe9be9c44b0c78e9866) to address the above comments if fine.\r\n\r\nOne example run is [here](https://app.circleci.com/pipelines/github/huggingface/transformers/63781/workflows/7430b1eb-3613-41b1-b548-a54366b02dae/jobs/787437)\r\n\r\n\r\n<img width=\"290\" alt=\"Screenshot 2023-05-05 120741\" src=\"https://user-images.githubusercontent.com/2521628/236430742-c5104bb4-49af-4bb9-a3db-f32e484af914.png\">\r\n"
] | 1,682 | 1,683 | 1,683 |
COLLABORATOR
| null |
# What does this PR do?
Wow! The fix is crazy simple, but I racked my brain finding the cleanest way to include this.
We are keeping `pytest --doctest-modules` 🥳 Basically it just came down to:
- change the `doctest.DocTestParser()`'s default regex compilation (rough sketch after this list)
- rewrite the private `_pytest.doctest` utilities to use this parser!
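A rough sketch of the first hook (illustrative only; the actual regex and pytest wiring in this PR are different):
```python
import doctest
import re

class CustomDocTestParser(doctest.DocTestParser):
    # doctest locates examples via this (private) class-level regex; a subclass can swap
    # in a modified pattern. Here the stock pattern is just recompiled as a placeholder.
    _EXAMPLE_RE = re.compile(
        doctest.DocTestParser._EXAMPLE_RE.pattern, re.MULTILINE | re.VERBOSE
    )

parser = CustomDocTestParser()
print(len(parser.get_examples(">>> 1 + 1\n2\n")))  # 1
```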
TODOS:
- [x] change parser
- [x] add CI Job
- [x] Filter jobs that can't run on CUDA! This is pretty important
- [ ] update doc
- [x] find a way to default `--doctest-glob`, but that's a small nit
- [ ] add test, to test the parser mostly
- [ ] add a check that a file is doctested if it has some docstring
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22987/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22987/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22987",
"html_url": "https://github.com/huggingface/transformers/pull/22987",
"diff_url": "https://github.com/huggingface/transformers/pull/22987.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22987.patch",
"merged_at": 1683657289000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22986
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22986/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22986/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22986/events
|
https://github.com/huggingface/transformers/issues/22986
| 1,682,953,073 |
I_kwDOCUB6oc5kT9Nx
| 22,986 |
Failed to import due to invalid escape sequence '\d' (modeling_utils.py, line 1825)
|
{
"login": "helpmefindaname",
"id": 26192135,
"node_id": "MDQ6VXNlcjI2MTkyMTM1",
"avatar_url": "https://avatars.githubusercontent.com/u/26192135?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/helpmefindaname",
"html_url": "https://github.com/helpmefindaname",
"followers_url": "https://api.github.com/users/helpmefindaname/followers",
"following_url": "https://api.github.com/users/helpmefindaname/following{/other_user}",
"gists_url": "https://api.github.com/users/helpmefindaname/gists{/gist_id}",
"starred_url": "https://api.github.com/users/helpmefindaname/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/helpmefindaname/subscriptions",
"organizations_url": "https://api.github.com/users/helpmefindaname/orgs",
"repos_url": "https://api.github.com/users/helpmefindaname/repos",
"events_url": "https://api.github.com/users/helpmefindaname/events{/privacy}",
"received_events_url": "https://api.github.com/users/helpmefindaname/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Solved in https://github.com/huggingface/transformers/pull/22936"
] | 1,682 | 1,682 | 1,682 |
NONE
| null |
### System Info
- `transformers` version: 4.28.1
- Platform: Windows-10-10.0.22621-SP0
- Python version: 3.10.10
- Huggingface_hub version: 0.13.4
- Safetensors version: not installed
- PyTorch version (GPU?): 2.0.0+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: False
- Using distributed or parallel set-up in script?: False
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I did not manage to recreate the bug consistently; however, I had it appear on both Windows and Linux and on Python 3.8 and Python 3.10.
I do not understand why it sometimes works and sometimes throws a syntax error. From my understanding of Python, this should fail in all Python environments with all transformers versions above `4.27.0`.
In a Python environment, running
`from transformers import AlbertModel` # or any other model
will **sometimes** lead to an error:
```
module = <module 'transformers' from '.../.venv/lib/python3.10/site-packages/transformers/__init__.py'>
fromlist = ('AlbertModel', 'AlbertTokenizer', 'BertModel', 'BertTokenizer', 'CamembertModel', 'CamembertTokenizer', ...)
import_ = <built-in function __import__>
> ???
<frozen importlib._bootstrap>:1075:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <module 'transformers' from '.../.venv/lib/python3.10/site-packages/transformers/__init__.py'>
name = 'AlbertModel'
def __getattr__(self, name: str) -> Any:
if name in self._objects:
return self._objects[name]
if name in self._modules:
value = self._get_module(name)
elif name in self._class_to_module.keys():
module = self._get_module(self._class_to_module[name])
> value = getattr(module, name)
.venv/lib/python3.10/site-packages/transformers/utils/import_utils.py:1137:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <module 'transformers.models.albert' from '.../.venv/lib/python3.10/site-packages/transformers/models/albert/__init__.py'>
name = 'AlbertModel'
def __getattr__(self, name: str) -> Any:
if name in self._objects:
return self._objects[name]
if name in self._modules:
value = self._get_module(name)
elif name in self._class_to_module.keys():
> module = self._get_module(self._class_to_module[name])
.venv/lib/python3.10/site-packages/transformers/utils/import_utils.py:1136:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <module 'transformers.models.albert' from '.../.venv/lib/python3.10/site-packages/transformers/models/albert/__init__.py'>
module_name = 'modeling_albert'
def _get_module(self, module_name: str):
try:
return importlib.import_module("." + module_name, self.__name__)
except Exception as e:
> raise RuntimeError(
f"Failed to import {self.__name__}.{module_name} because of the following error (look up to see its"
f" traceback):\n{e}"
) from e
E RuntimeError: Failed to import transformers.models.albert.modeling_albert because of the following error (look up to see its traceback):
E invalid escape sequence '\d' (modeling_utils.py, line 1825)
.venv/lib/python3.10/site-packages/transformers/utils/import_utils.py:1148: RuntimeError
```
The error occurs because https://github.com/huggingface/transformers/blob/main/src/transformers/modeling_utils.py#L1832 uses an invalid escape sequence and is therefore not valid Python syntax.
The fix would be to use `reg = re.compile(r"(.*?)-\d{5}-of-\d{5}")`, i.e. a raw string, so that the backslashes are not interpreted as escape sequences.
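For illustration (a sketch of the idea, not the exact patch), the difference is just whether the backslash reaches the regex engine:
```python
import re

# In a regular (non-raw) string literal, "\d" is an invalid escape sequence: Python warns at
# compile time, and environments that escalate warnings to errors fail to import the module.
# A raw string keeps the backslash intact for the regex engine:
reg = re.compile(r"(.*?)-\d{5}-of-\d{5}")
print(reg.match("pytorch_model-00001-of-00002").group(1))  # pytorch_model
```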
### Expected behavior
I would expect to be able to import transformer models without it sometimes throwing a RuntimeError.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22986/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22986/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/22985
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22985/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22985/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22985/events
|
https://github.com/huggingface/transformers/issues/22985
| 1,682,917,936 |
I_kwDOCUB6oc5kT0ow
| 22,985 |
[i18n-<languageCode>] Translating docs to <languageName>
|
{
"login": "FEXPLORE",
"id": 131221716,
"node_id": "U_kgDOB9JI1A",
"avatar_url": "https://avatars.githubusercontent.com/u/131221716?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/FEXPLORE",
"html_url": "https://github.com/FEXPLORE",
"followers_url": "https://api.github.com/users/FEXPLORE/followers",
"following_url": "https://api.github.com/users/FEXPLORE/following{/other_user}",
"gists_url": "https://api.github.com/users/FEXPLORE/gists{/gist_id}",
"starred_url": "https://api.github.com/users/FEXPLORE/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/FEXPLORE/subscriptions",
"organizations_url": "https://api.github.com/users/FEXPLORE/orgs",
"repos_url": "https://api.github.com/users/FEXPLORE/repos",
"events_url": "https://api.github.com/users/FEXPLORE/events{/privacy}",
"received_events_url": "https://api.github.com/users/FEXPLORE/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
}
] |
closed
| false | null |
[] |
[] | 1,682 | 1,682 | 1,682 |
NONE
| null |
<!--
Note: Please search to see if an issue already exists for the language you are trying to translate.
-->
Hi!
Let's bring the documentation to all the <languageName>-speaking community 🌐 (currently 0 out of 267 complete)
Who would want to translate? Please follow the 🤗 [TRANSLATING guide](https://github.com/huggingface/transformers/blob/main/docs/TRANSLATING.md). Here is a list of the files ready for translation. Let us know in this issue if you'd like to translate any, and we'll add your name to the list.
Some notes:
* Please translate using an informal tone (imagine you are talking with a friend about transformers 🤗).
* Please translate in a gender-neutral way.
* Add your translations to the folder called `<languageCode>` inside the [source folder](https://github.com/huggingface/transformers/tree/main/docs/source).
* Register your translation in `<languageCode>/_toctree.yml`; please follow the order of the [English version](https://github.com/huggingface/transformers/blob/main/docs/source/en/_toctree.yml).
* Once you're finished, open a pull request and tag this issue by including #issue-number in the description, where issue-number is the number of this issue. Please ping @ArthurZucker, @sgugger for review.
* 🙋 If you'd like others to help you with the translation, you can also post in the 🤗 [forums](https://discuss.huggingface.co/).
## Get Started section
- [ ] [index.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/index.mdx) https://github.com/huggingface/transformers/pull/20180
- [ ] [quicktour.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/quicktour.mdx) (waiting for initial PR to go through)
- [ ] [installation.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/installation.mdx).
## Tutorial section
- [ ] [pipeline_tutorial.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/pipeline_tutorial.mdx)
- [ ] [autoclass_tutorial.mdx](https://github.com/huggingface/transformers/blob/master/docs/source/autoclass_tutorial.mdx)
- [ ] [preprocessing.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/preprocessing.mdx)
- [ ] [training.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/training.mdx)
- [ ] [accelerate.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/accelerate.mdx)
- [ ] [model_sharing.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_sharing.mdx)
- [ ] [multilingual.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/multilingual.mdx)
<!--
Keep on adding more as you go 🔥
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22985/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22985/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/22984
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22984/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22984/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22984/events
|
https://github.com/huggingface/transformers/pull/22984
| 1,682,884,177 |
PR_kwDOCUB6oc5PFqQ-
| 22,984 |
[`SAM`] Add sam doc
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Failing tests seems to be unrelated, merging!"
] | 1,682 | 1,682 | 1,682 |
CONTRIBUTOR
| null |
# What does this PR do?
As suggested by @LysandreJik offline, this PR adds a nice docstring for `SamModel` showing users how to leverage the Auto API to run SAM
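For context (not part of the original PR text), here is a minimal sketch of the kind of Auto-API usage such a docstring could show. It assumes the `facebook/sam-vit-base` checkpoint and the single-point `input_points` prompt format; the exact snippet added in the PR may differ.

```python
# Illustrative sketch only: load SAM through the Auto classes and prompt it with one 2D point.
import requests
import torch
from PIL import Image
from transformers import AutoModel, AutoProcessor

processor = AutoProcessor.from_pretrained("facebook/sam-vit-base")
model = AutoModel.from_pretrained("facebook/sam-vit-base")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")
input_points = [[[450, 600]]]  # one (x, y) prompt for the single image

inputs = processor(image, input_points=input_points, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# pred_masks holds the predicted masks, iou_scores their predicted quality
print(outputs.pred_masks.shape, outputs.iou_scores.shape)
```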
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22984/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22984/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22984",
"html_url": "https://github.com/huggingface/transformers/pull/22984",
"diff_url": "https://github.com/huggingface/transformers/pull/22984.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22984.patch",
"merged_at": 1682424028000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22983
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22983/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22983/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22983/events
|
https://github.com/huggingface/transformers/issues/22983
| 1,682,677,832 |
I_kwDOCUB6oc5kS6BI
| 22,983 |
Using `auto_map` in `tokenizer_config.json` gives `TypeError: argument of type 'NoneType' is not iterable`
|
{
"login": "larrylawl",
"id": 40198156,
"node_id": "MDQ6VXNlcjQwMTk4MTU2",
"avatar_url": "https://avatars.githubusercontent.com/u/40198156?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/larrylawl",
"html_url": "https://github.com/larrylawl",
"followers_url": "https://api.github.com/users/larrylawl/followers",
"following_url": "https://api.github.com/users/larrylawl/following{/other_user}",
"gists_url": "https://api.github.com/users/larrylawl/gists{/gist_id}",
"starred_url": "https://api.github.com/users/larrylawl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/larrylawl/subscriptions",
"organizations_url": "https://api.github.com/users/larrylawl/orgs",
"repos_url": "https://api.github.com/users/larrylawl/repos",
"events_url": "https://api.github.com/users/larrylawl/events{/privacy}",
"received_events_url": "https://api.github.com/users/larrylawl/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @sgugger seems like #22814 added \r\n```python \r\n if \"auto_map\" in init_kwargs and not _is_local:\r\n # For backward compatibility with odl format.\r\n if isinstance(init_kwargs[\"auto_map\"], (tuple, list)):\r\n init_kwargs[\"auto_map\"] = {\"AutoTokenizer\": init_kwargs[\"auto_map\"]}\r\n init_kwargs[\"auto_map\"] = add_model_info_to_auto_map(\r\n init_kwargs[\"auto_map\"], pretrained_model_name_or_path\r\n )\r\n```\r\nI can take this on but you are more familiar with the changes\r\n",
"Thanks for flagging! The PR linked above should fix this."
] | 1,682 | 1,682 | 1,682 |
NONE
| null |
### System Info
certifi==2022.12.7
charset-normalizer==3.1.0
cmake==3.26.3
filelock==3.12.0
fsspec==2023.4.0
huggingface-hub==0.14.0
idna==3.4
Jinja2==3.1.2
lit==16.0.2
MarkupSafe==2.1.2
mpmath==1.3.0
networkx==3.1
numpy==1.24.3
nvidia-cublas-cu11==11.10.3.66
nvidia-cuda-cupti-cu11==11.7.101
nvidia-cuda-nvrtc-cu11==11.7.99
nvidia-cuda-runtime-cu11==11.7.99
nvidia-cudnn-cu11==8.5.0.96
nvidia-cufft-cu11==10.9.0.58
nvidia-curand-cu11==10.2.10.91
nvidia-cusolver-cu11==11.4.0.1
nvidia-cusparse-cu11==11.7.4.91
nvidia-nccl-cu11==2.14.3
nvidia-nvtx-cu11==11.7.91
packaging==23.1
PyYAML==6.0
regex==2023.3.23
requests==2.28.2
sentencepiece==0.1.98
sympy==1.11.1
tokenizers==0.13.3
torch==2.0.0
tqdm==4.65.0
-e git+https://github.com/huggingface/transformers.git@073baf7f2289dbbf99e29f375e40c3e270ba6e85#egg=transformers
triton==2.0.0
typing-extensions==4.5.0
urllib3==1.26.15
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Running the following...
```
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("THUDM/glm-10b-chinese", trust_remote_code=True)
```
Gave the error:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/jovyan/transformers/src/transformers/models/auto/tokenization_auto.py", line 692, in from_pretrained
return tokenizer_class.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
File "/home/jovyan/transformers/src/transformers/tokenization_utils_base.py", line 1812, in from_pretrained
return cls._from_pretrained(
File "/home/jovyan/transformers/src/transformers/tokenization_utils_base.py", line 1878, in _from_pretrained
init_kwargs["auto_map"] = add_model_info_to_auto_map(
File "/home/jovyan/transformers/src/transformers/utils/generic.py", line 563, in add_model_info_to_auto_map
auto_map[key] = [f"{repo_id}--{v}" if "--" not in v else v for v in value]
File "/home/jovyan/transformers/src/transformers/utils/generic.py", line 563, in <listcomp>
auto_map[key] = [f"{repo_id}--{v}" if "--" not in v else v for v in value]
TypeError: argument of type 'NoneType' is not iterable
```
### Expected behavior
Load tokenizer without errors.
## Analysis
- I suspect it has to do with `auto_map` in `tokenizer_config.json` [here](https://huggingface.co/THUDM/glm-10b-chinese/blob/main/tokenizer_config.json) (see the sketch below)
- The tokenizer loads fine with transformers version 4.27.0
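Not from the original report: a minimal standalone sketch of why a `null` entry in the `auto_map` list trips the list comprehension shown in the traceback above. The exact module name in the hub config is an assumption; what matters is the `None` slot.

```python
# Illustrative reproduction of the failure mode: a JSON null in the auto_map list
# breaks the `"--" not in v` membership test used in transformers/utils/generic.py.
repo_id = "THUDM/glm-10b-chinese"
auto_map = {"AutoTokenizer": ["tokenization_glm.GLMChineseTokenizer", None]}  # module name assumed

try:
    for key, value in auto_map.items():
        auto_map[key] = [f"{repo_id}--{v}" if "--" not in v else v for v in value]
except TypeError as err:
    print(err)  # argument of type 'NoneType' is not iterable

# A defensive variant that leaves None entries untouched:
patched = {
    key: [v if v is None or "--" in v else f"{repo_id}--{v}" for v in value]
    for key, value in {"AutoTokenizer": ["tokenization_glm.GLMChineseTokenizer", None]}.items()
}
print(patched)
```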
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22983/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22983/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/22982
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22982/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22982/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22982/events
|
https://github.com/huggingface/transformers/pull/22982
| 1,682,668,195 |
PR_kwDOCUB6oc5PE7-L
| 22,982 |
fixed small typo in code example
|
{
"login": "jvanmelckebeke",
"id": 13679272,
"node_id": "MDQ6VXNlcjEzNjc5Mjcy",
"avatar_url": "https://avatars.githubusercontent.com/u/13679272?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jvanmelckebeke",
"html_url": "https://github.com/jvanmelckebeke",
"followers_url": "https://api.github.com/users/jvanmelckebeke/followers",
"following_url": "https://api.github.com/users/jvanmelckebeke/following{/other_user}",
"gists_url": "https://api.github.com/users/jvanmelckebeke/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jvanmelckebeke/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jvanmelckebeke/subscriptions",
"organizations_url": "https://api.github.com/users/jvanmelckebeke/orgs",
"repos_url": "https://api.github.com/users/jvanmelckebeke/repos",
"events_url": "https://api.github.com/users/jvanmelckebeke/events{/privacy}",
"received_events_url": "https://api.github.com/users/jvanmelckebeke/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,682 | 1,682 | 1,682 |
CONTRIBUTOR
| null |
# What does this PR do?
Fixes a small typo in a code example in the single GPU inference docs. Renamed the `text` variable to `prompt`, as `prompt` is used on the line below as the parameter for the tokenizer.
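For context (the actual doc snippet is not reproduced here), a hypothetical before/after illustrating the class of fix:

```python
# Hypothetical illustration of the mismatch this PR fixes: the example defined `text`
# but passed `prompt` to the tokenizer, which would raise a NameError.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")

# before:
# text = "Hello, my dog is cute"
# inputs = tokenizer(prompt, return_tensors="pt")  # NameError: name 'prompt' is not defined

# after: the variable is named consistently
prompt = "Hello, my dog is cute"
inputs = tokenizer(prompt, return_tensors="pt")
```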
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
Documentation: @sgugger, @stevhliu and @MKhalusova
(sorry if tagging is too much for just this tiny repo)
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22982/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22982/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22982",
"html_url": "https://github.com/huggingface/transformers/pull/22982",
"diff_url": "https://github.com/huggingface/transformers/pull/22982.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22982.patch",
"merged_at": 1682427382000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22981
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22981/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22981/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22981/events
|
https://github.com/huggingface/transformers/issues/22981
| 1,682,656,913 |
I_kwDOCUB6oc5kS06R
| 22,981 |
Cannot train language-modeling using Luke model
|
{
"login": "doherty88",
"id": 4629146,
"node_id": "MDQ6VXNlcjQ2MjkxNDY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4629146?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/doherty88",
"html_url": "https://github.com/doherty88",
"followers_url": "https://api.github.com/users/doherty88/followers",
"following_url": "https://api.github.com/users/doherty88/following{/other_user}",
"gists_url": "https://api.github.com/users/doherty88/gists{/gist_id}",
"starred_url": "https://api.github.com/users/doherty88/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/doherty88/subscriptions",
"organizations_url": "https://api.github.com/users/doherty88/orgs",
"repos_url": "https://api.github.com/users/doherty88/repos",
"events_url": "https://api.github.com/users/doherty88/events{/privacy}",
"received_events_url": "https://api.github.com/users/doherty88/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"It looks like the Luke model is not compatible out of the box with those examples since the person who contributed it decided to use -1 as an index in the cross-entropy loss instead of -100 that we use everywhere else.\r\n\r\nMight be worth fixing though it's a breaking change @amyeroberts @ArthurZucker what do you think?\r\n\r\nIn the meantime, a workaround is to replace the -100 used for padding labels in the example by -1 to use it with Luke.",
"@sgugger Yes, I'd agree, I think it's better to update to be in line with the rest of the library. ",
"> ntime, a workaround is to replace the -10\r\n\r\nThanks @sgugger for the information, however, I am new to NLP. could you please tell me where should I change to use this workaround?"
] | 1,682 | 1,682 | 1,682 |
NONE
| null |
### System Info
- `transformers` version: 4.29.0.dev0
- Platform: Linux-5.15.0-52-generic-x86_64-with-glibc2.10
- Python version: 3.8.13
- Huggingface_hub version: 0.14.0
- Safetensors version: not installed
- PyTorch version (GPU?): 1.12.0a0+bd13bc6 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
@sgugger
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I want to try fine-tuning a Luke model via run_mlm.py from the examples folder. I use the standard script from examples, then I use the following command to start training:
```bash
pip install git+https://github.com/huggingface/transformers
python /gxtq-ner-ws/run_mlm.py \
--output_dir=/gxtq-ner-ws/luke_large_6_pretrained_v2/ \
--model_type=luke \
--model_name_or_path=studio-ousia/luke-large-lite \
--do_train \
--per_device_train_batch_size 16 \
--num_train_epochs 6 \
--train_file=/gxtq-ner-ws/lm_training_data_v2.txt \
--save_total_limit 1 \
--save_steps 10000 \
```
Then I got the following error:
```
[INFO|trainer.py:1776] 2023-04-25 06:57:12,367 >> Number of trainable parameters = 147,342,943
0%| | 0/5814 [00:00<?, ?it/s]Traceback (most recent call last):
File "./run_language_modeling_v4.py", line 657, in <module>
main()
File "./run_language_modeling_v4.py", line 606, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/opt/conda/lib/python3.8/site-packages/transformers/trainer.py", line 1662, in train
return inner_training_loop(
File "/opt/conda/lib/python3.8/site-packages/transformers/trainer.py", line 1930, in _inner_training_loop
tr_loss_step = self.training_step(model, inputs)
File "/opt/conda/lib/python3.8/site-packages/transformers/trainer.py", line 2718, in training_step
loss.backward()
File "/opt/conda/lib/python3.8/site-packages/torch/_tensor.py", line 399, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
File "/opt/conda/lib/python3.8/site-packages/torch/autograd/__init__.py", line 173, in backward
Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
RuntimeError: CUDA error: device-side assert triggered
CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/Loss.cu:257: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [0,0,0] Assertion `t >= 0 && t < n_classes` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/Loss.cu:257: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [1,0,0] Assertion `t >= 0 && t < n_classes` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/Loss.cu:257: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [2,0,0] Assertion `t >= 0 && t < n_classes` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/Loss.cu:257: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [3,0,0] Assertion `t >= 0 && t < n_classes` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/Loss.cu:257: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [4,0,0] Assertion `t >= 0 && t < n_classes` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/Loss.cu:257: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [6,0,0] Assertion `t >= 0 && t < n_classes` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/Loss.cu:257: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [7,0,0] Assertion `t >= 0 && t < n_classes` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/Loss.cu:257: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [8,0,0] Assertion `t >= 0 && t < n_classes` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/Loss.cu:257: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [9,0,0] Assertion `t >= 0 && t < n_classes` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/Loss.cu:257: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [10,0,0] Assertion `t >= 0 && t < n_classes` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/Loss.cu:257: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [11,0,0] Assertion `t >= 0 && t < n_classes` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/Loss.cu:257: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [12,0,0] Assertion `t >= 0 && t < n_classes` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/Loss.cu:257: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [13,0,0] Assertion `t >= 0 && t < n_classes` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/Loss.cu:257: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [14,0,0] Assertion `t >= 0 && t < n_classes` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/Loss.cu:257: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [15,0,0] Assertion `t >= 0 && t < n_classes` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/Loss.cu:257: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [16,0,0] Assertion `t >= 0 && t < n_classes` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/Loss.cu:257: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [17,0,0] Assertion `t >= 0 && t < n_classes` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/Loss.cu:257: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [18,0,0] Assertion `t >= 0 && t < n_classes` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/Loss.cu:257: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [19,0,0] Assertion `t >= 0 && t < n_classes` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/Loss.cu:257: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [21,0,0] Assertion `t >= 0 && t < n_classes` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/Loss.cu:257: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [22,0,0] Assertion `t >= 0 && t < n_classes` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/Loss.cu:257: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [23,0,0] Assertion `t >= 0 && t < n_classes` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/Loss.cu:257: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [24,0,0] Assertion `t >= 0 && t < n_classes` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/Loss.cu:257: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [26,0,0] Assertion `t >= 0 && t < n_classes` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/Loss.cu:257: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [28,0,0] Assertion `t >= 0 && t < n_classes` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/Loss.cu:257: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [29,0,0] Assertion `t >= 0 && t < n_classes` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/Loss.cu:257: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [30,0,0] Assertion `t >= 0 && t < n_classes` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/Loss.cu:257: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [31,0,0] Assertion `t >= 0 && t < n_classes` failed.
0%| | 0/5814 [00:00<?, ?it/s]
```
I also tried to run it in a CPU-only environment; here is the error:
```
Traceback (most recent call last):
File "./run_language_modeling_v4.py", line 657, in <module>
main()
File "./run_language_modeling_v4.py", line 606, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/opt/conda/lib/python3.8/site-packages/transformers/trainer.py", line 1662, in train
return inner_training_loop(
File "/opt/conda/lib/python3.8/site-packages/transformers/trainer.py", line 1930, in _inner_training_loop
tr_loss_step = self.training_step(model, inputs)
File "/opt/conda/lib/python3.8/site-packages/transformers/trainer.py", line 2700, in training_step
loss = self.compute_loss(model, inputs)
File "/opt/conda/lib/python3.8/site-packages/transformers/trainer.py", line 2732, in compute_loss
outputs = model(**inputs)
File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1111, in _call_impl
return forward_call(*input, **kwargs)
File "/opt/conda/lib/python3.8/site-packages/transformers/models/luke/modeling_luke.py", line 1375, in forward
mlm_loss = self.loss_fn(logits.view(-1, self.config.vocab_size), labels.view(-1))
File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1111, in _call_impl
return forward_call(*input, **kwargs)
File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/loss.py", line 1163, in forward
return F.cross_entropy(input, target, weight=self.weight,
File "/opt/conda/lib/python3.8/site-packages/torch/nn/functional.py", line 2961, in cross_entropy
return torch._C._nn.cross_entropy_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index, label_smoothing)
IndexError: Target -100 is out of bounds.
```
### Expected behavior
The model should train as expected.
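Not part of the original report: a minimal sketch of the workaround suggested in the comments above (Luke's masked-LM loss ignores index -1 rather than the -100 that the stock collator uses). The collator wrapper is illustrative, not the exact change to run_mlm.py.

```python
# Illustrative sketch: remap the -100 padding index produced by the stock MLM collator
# to the -1 index that Luke's cross-entropy loss ignores.
from transformers import AutoTokenizer, DataCollatorForLanguageModeling

tokenizer = AutoTokenizer.from_pretrained("studio-ousia/luke-large-lite")
base_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

def luke_mlm_collator(features):
    batch = base_collator(features)
    labels = batch["labels"]
    labels[labels == -100] = -1  # Luke uses ignore_index=-1 in its loss
    batch["labels"] = labels
    return batch

# then pass data_collator=luke_mlm_collator to the Trainer inside run_mlm.py
```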
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22981/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22981/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/22980
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22980/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22980/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22980/events
|
https://github.com/huggingface/transformers/issues/22980
| 1,682,586,586 |
I_kwDOCUB6oc5kSjva
| 22,980 |
Trainer failing during _save_checkpoint "cannot pickle '_thread.lock' object" with skip_memory_metrics=True
|
{
"login": "galenballew",
"id": 7023349,
"node_id": "MDQ6VXNlcjcwMjMzNDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/7023349?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/galenballew",
"html_url": "https://github.com/galenballew",
"followers_url": "https://api.github.com/users/galenballew/followers",
"following_url": "https://api.github.com/users/galenballew/following{/other_user}",
"gists_url": "https://api.github.com/users/galenballew/gists{/gist_id}",
"starred_url": "https://api.github.com/users/galenballew/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/galenballew/subscriptions",
"organizations_url": "https://api.github.com/users/galenballew/orgs",
"repos_url": "https://api.github.com/users/galenballew/repos",
"events_url": "https://api.github.com/users/galenballew/events{/privacy}",
"received_events_url": "https://api.github.com/users/galenballew/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"I also ran this with `no_cuda=True` and received the same error. ",
"Your code example doesn't define multiple objects, so I can't really tell what's wrong. Please give us a minimal reproducer we can execute.",
"Sorry about that--I've put everything into this repo if that is easier: https://github.com/galenballew/bert-multiclass\r\nI'll also repeat it here too: \r\n\r\n\r\n```python\r\n# Dependencies\r\nimport matplotlib.pyplot as plt\r\nfrom sklearn.metrics import accuracy_score\r\nfrom torch.utils.data import DataLoader\r\nfrom transformers import DistilBertTokenizerFast, DistilBertForSequenceClassification, Trainer, TrainingArguments, AdamW\r\nfrom tqdm import tqdm\r\nimport torch\r\nimport tools\r\n\r\nuse_cuda = torch.cuda.is_available()\r\ndevice = torch.device(\"cuda:0\" if use_cuda else \"cpu\")\r\n\r\ntrain_texts, train_labels = tools.read_data(\"train\")\r\nval_texts, val_labels = tools.read_data(\"val\")\r\ntest_texts, test_labels = tools.read_data(\"test\")\r\ntrain_texts = train_texts.tolist()\r\nval_texts = val_texts.tolist()\r\ntest_texts = test_texts.tolist()\r\n\r\n# Create integer class labels instead of strings\r\nclasses = tools.labels(train_labels).tolist()\r\ntrain_labels = tools.relabel(train_labels, classes)\r\nval_labels = tools.relabel(val_labels, classes)\r\ntest_labels = tools.relabel(test_labels, classes)\r\n\r\nclass IntentDataset(torch.utils.data.Dataset):\r\n def __init__(self, encodings, labels):\r\n self.encodings = encodings\r\n self.labels = labels\r\n\r\n\r\n def __getitem__(self, idx):\r\n \"\"\"\r\n To support the indexing such that dataset[i] can be used to get the i-th sample\r\n \"\"\"\r\n# item = {key: torch.tensor(val[idx]) for key, val in self.encodings.items()}\r\n item = {key: val[idx].clone().detach() for key, val in self.encodings.items()}\r\n item['label'] = torch.tensor(self.labels[idx])\r\n return item\r\n\r\n\r\n def __len__(self):\r\n \"\"\"\r\n Returns the size of the dataset.\r\n \"\"\"\r\n return len(self.labels)\r\n\r\ndef compute_metrics(eval_pred):\r\n accuracy = load(\"accuracy\")\r\n precision = load(\"precision\")\r\n f1 = load(\"f1\")\r\n recall = load(\"recall\")\r\n \r\n predictions, labels = eval_pred\r\n predictions = np.argmax(predictions, axis=1)\r\n \r\n accuracy.compute(predictions=predictions, references=labels)\r\n precision.compute(predictions=predictions, references=labels, average=\"micro\")\r\n f1.compute(predictions=predictions, references=labels, average=\"micro\")\r\n recall.compute(predictions=predictions, references=labels, average=\"micro\")\r\n \r\n return {\"accuracy\": accuracy, \"precision\": precision, \"f1\": f1, \"recall\": recall}\r\n\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(\"bert-base-uncased\")\r\n\r\ntrain_encodings = tokenizer(train_texts, padding=True, truncation=True, return_tensors=\"pt\")\r\nval_encodings = tokenizer(val_texts, padding=True, truncation=True, return_tensors=\"pt\")\r\ntest_encodings = tokenizer(test_texts, padding=True, truncation=True, return_tensors=\"pt\")\r\n\r\n# Turn the encodings and labels to a dataset object\r\ntrain_dataset = IntentDataset(train_encodings, train_labels)\r\nval_dataset = IntentDataset(val_encodings, val_labels)\r\ntest_dataset = IntentDataset(test_encodings, test_labels)\r\n\r\n\r\nmodel = AutoModelForSequenceClassification.from_pretrained(\"bert-base-uncased\", num_labels=len(classes)).to('cuda') \r\n\r\ntraining_args = TrainingArguments(\r\n output_dir=\"./results\",\r\n overwrite_output_dir=True,\r\n learning_rate=2e-5,\r\n per_device_train_batch_size=16,\r\n per_device_eval_batch_size=16,\r\n num_train_epochs=2,\r\n optim=\"adamw_torch\",\r\n weight_decay=0.01,\r\n evaluation_strategy=\"epoch\",\r\n save_strategy=\"epoch\",\r\n 
load_best_model_at_end=True,\r\n no_cuda=False,\r\n skip_memory_metrics=True\r\n)\r\n\r\ntrainer = Trainer(\r\n model=model,\r\n args=training_args,\r\n compute_metrics=compute_metrics,\r\n train_dataset=train_dataset,\r\n eval_dataset=val_dataset,\r\n)\r\n\r\ntrainer.train()\r\n```",
"Could you also print us `trainer.state`? The error comes from the fact it is not JSON-serializable so it would help to know which object in it is not serializable. Thanks!",
"`trainer.state` directly after instantiation: \r\n```\r\nTrainerState(epoch=None, global_step=0, max_steps=0, num_train_epochs=0, total_flos=0, log_history=[], best_metric=None, best_model_checkpoint=None, is_local_process_zero=True, is_world_process_zero=True, is_hyper_param_search=False, trial_name=None, trial_params=None)\r\n```\r\n\r\nAdded this and am including entire output, not just the state. Either the behavior changed or adding try/except is causing a slightly different output: \r\n```\r\ntry:\r\n trainer.train()\r\nexcept:\r\n print(\"\\n\\n\")\r\n print(\"********************\")\r\n print(\"\\n\\n\")\r\n print(trainer.state)\r\n print(\"\\n\\n\")\r\n print(\"********************\")\r\n print(\"\\n\\n\")\r\n```\r\n\r\n```\r\nTrainer is attempting to log a value of \"EvaluationModule(name: \"accuracy\", module_type: \"metric\", features: {'predictions': Value(dtype='int32', id=None), 'references': Value(dtype='int32', id=None)}, usage: \"\"\"\r\nArgs:\r\n predictions (`list` of `int`): Predicted labels.\r\n references (`list` of `int`): Ground truth labels.\r\n normalize (`boolean`): If set to False, returns the number of correctly classified samples. Otherwise, returns the fraction of correctly classified samples. Defaults to True.\r\n sample_weight (`list` of `float`): Sample weights Defaults to None.\r\n\r\nReturns:\r\n accuracy (`float` or `int`): Accuracy score. Minimum possible value is 0. Maximum possible value is 1.0, or the number of examples input, if `normalize` is set to `True`.. A higher score means higher accuracy.\r\n\r\nExamples:\r\n\r\n Example 1-A simple example\r\n >>> accuracy_metric = evaluate.load(\"accuracy\")\r\n >>> results = accuracy_metric.compute(references=[0, 1, 2, 0, 1, 2], predictions=[0, 1, 1, 2, 1, 0])\r\n >>> print(results)\r\n {'accuracy': 0.5}\r\n\r\n Example 2-The same as Example 1, except with `normalize` set to `False`.\r\n >>> accuracy_metric = evaluate.load(\"accuracy\")\r\n >>> results = accuracy_metric.compute(references=[0, 1, 2, 0, 1, 2], predictions=[0, 1, 1, 2, 1, 0], normalize=False)\r\n >>> print(results)\r\n {'accuracy': 3.0}\r\n\r\n Example 3-The same as Example 1, except with `sample_weight` set.\r\n >>> accuracy_metric = evaluate.load(\"accuracy\")\r\n >>> results = accuracy_metric.compute(references=[0, 1, 2, 0, 1, 2], predictions=[0, 1, 1, 2, 1, 0], sample_weight=[0.5, 2, 0.7, 0.5, 9, 0.4])\r\n >>> print(results)\r\n {'accuracy': 0.8778625954198473}\r\n\"\"\", stored examples: 0)\" of type <class 'evaluate_modules.metrics.evaluate-metric--accuracy.f887c0aab52c2d38e1f8a215681126379eca617f96c447638f751434e8e65b14.accuracy.Accuracy'> for key \"eval/accuracy\" as a scalar. This invocation of Tensorboard's writer.add_scalar() is incorrect so we dropped this attribute.\r\nTrainer is attempting to log a value of \"EvaluationModule(name: \"precision\", module_type: \"metric\", features: {'predictions': Value(dtype='int32', id=None), 'references': Value(dtype='int32', id=None)}, usage: \"\"\"\r\nArgs:\r\n predictions (`list` of `int`): Predicted class labels.\r\n references (`list` of `int`): Actual class labels.\r\n labels (`list` of `int`): The set of labels to include when `average` is not set to `'binary'`. If `average` is `None`, it should be the label order. Labels present in the data can be excluded, for example to calculate a multiclass average ignoring a majority negative class. Labels not present in the data will result in 0 components in a macro average. For multilabel targets, labels are column indices. 
By default, all labels in `predictions` and `references` are used in sorted order. Defaults to None.\r\n pos_label (`int`): The class to be considered the positive class, in the case where `average` is set to `binary`. Defaults to 1.\r\n average (`string`): This parameter is required for multiclass/multilabel targets. If set to `None`, the scores for each class are returned. Otherwise, this determines the type of averaging performed on the data. Defaults to `'binary'`.\r\n\r\n - 'binary': Only report results for the class specified by `pos_label`. This is applicable only if the classes found in `predictions` and `references` are binary.\r\n - 'micro': Calculate metrics globally by counting the total true positives, false negatives and false positives.\r\n - 'macro': Calculate metrics for each label, and find their unweighted mean. This does not take label imbalance into account.\r\n - 'weighted': Calculate metrics for each label, and find their average weighted by support (the number of true instances for each label). This alters `'macro'` to account for label imbalance. This option can result in an F-score that is not between precision and recall.\r\n - 'samples': Calculate metrics for each instance, and find their average (only meaningful for multilabel classification).\r\n sample_weight (`list` of `float`): Sample weights Defaults to None.\r\n zero_division (`int` or `string`): Sets the value to return when there is a zero division. Defaults to 'warn'.\r\n\r\n - 0: Returns 0 when there is a zero division.\r\n - 1: Returns 1 when there is a zero division.\r\n - 'warn': Raises warnings and then returns 0 when there is a zero division.\r\n\r\nReturns:\r\n precision (`float` or `array` of `float`): Precision score or list of precision scores, depending on the value passed to `average`. Minimum possible value is 0. Maximum possible value is 1. 
Higher values indicate that fewer negative examples were incorrectly labeled as positive, which means that, generally, higher scores are better.\r\n\r\nExamples:\r\n\r\n Example 1-A simple binary example\r\n >>> precision_metric = evaluate.load(\"precision\")\r\n >>> results = precision_metric.compute(references=[0, 1, 0, 1, 0], predictions=[0, 0, 1, 1, 0])\r\n >>> print(results)\r\n {'precision': 0.5}\r\n\r\n Example 2-The same simple binary example as in Example 1, but with `pos_label` set to `0`.\r\n >>> precision_metric = evaluate.load(\"precision\")\r\n >>> results = precision_metric.compute(references=[0, 1, 0, 1, 0], predictions=[0, 0, 1, 1, 0], pos_label=0)\r\n >>> print(round(results['precision'], 2))\r\n 0.67\r\n\r\n Example 3-The same simple binary example as in Example 1, but with `sample_weight` included.\r\n >>> precision_metric = evaluate.load(\"precision\")\r\n >>> results = precision_metric.compute(references=[0, 1, 0, 1, 0], predictions=[0, 0, 1, 1, 0], sample_weight=[0.9, 0.5, 3.9, 1.2, 0.3])\r\n >>> print(results)\r\n {'precision': 0.23529411764705882}\r\n\r\n Example 4-A multiclass example, with different values for the `average` input.\r\n >>> predictions = [0, 2, 1, 0, 0, 1]\r\n >>> references = [0, 1, 2, 0, 1, 2]\r\n >>> results = precision_metric.compute(predictions=predictions, references=references, average='macro')\r\n >>> print(results)\r\n {'precision': 0.2222222222222222}\r\n >>> results = precision_metric.compute(predictions=predictions, references=references, average='micro')\r\n >>> print(results)\r\n {'precision': 0.3333333333333333}\r\n >>> results = precision_metric.compute(predictions=predictions, references=references, average='weighted')\r\n >>> print(results)\r\n {'precision': 0.2222222222222222}\r\n >>> results = precision_metric.compute(predictions=predictions, references=references, average=None)\r\n >>> print([round(res, 2) for res in results['precision']])\r\n [0.67, 0.0, 0.0]\r\n\"\"\", stored examples: 0)\" of type <class 'evaluate_modules.metrics.evaluate-metric--precision.4e7f439a346715f68500ce6f2be82bf3272abd3f20bdafd203a2c4f85b61dd5f.precision.Precision'> for key \"eval/precision\" as a scalar. This invocation of Tensorboard's writer.add_scalar() is incorrect so we dropped this attribute.\r\nTrainer is attempting to log a value of \"EvaluationModule(name: \"f1\", module_type: \"metric\", features: {'predictions': Value(dtype='int32', id=None), 'references': Value(dtype='int32', id=None)}, usage: \"\"\"\r\nArgs:\r\n predictions (`list` of `int`): Predicted labels.\r\n references (`list` of `int`): Ground truth labels.\r\n labels (`list` of `int`): The set of labels to include when `average` is not set to `'binary'`, and the order of the labels if `average` is `None`. Labels present in the data can be excluded, for example to calculate a multiclass average ignoring a majority negative class. Labels not present in the data will result in 0 components in a macro average. For multilabel targets, labels are column indices. By default, all labels in `predictions` and `references` are used in sorted order. Defaults to None.\r\n pos_label (`int`): The class to be considered the positive class, in the case where `average` is set to `binary`. Defaults to 1.\r\n average (`string`): This parameter is required for multiclass/multilabel targets. If set to `None`, the scores for each class are returned. Otherwise, this determines the type of averaging performed on the data. 
Defaults to `'binary'`.\r\n\r\n - 'binary': Only report results for the class specified by `pos_label`. This is applicable only if the classes found in `predictions` and `references` are binary.\r\n - 'micro': Calculate metrics globally by counting the total true positives, false negatives and false positives.\r\n - 'macro': Calculate metrics for each label, and find their unweighted mean. This does not take label imbalance into account.\r\n - 'weighted': Calculate metrics for each label, and find their average weighted by support (the number of true instances for each label). This alters `'macro'` to account for label imbalance. This option can result in an F-score that is not between precision and recall.\r\n - 'samples': Calculate metrics for each instance, and find their average (only meaningful for multilabel classification).\r\n sample_weight (`list` of `float`): Sample weights Defaults to None.\r\n\r\nReturns:\r\n f1 (`float` or `array` of `float`): F1 score or list of f1 scores, depending on the value passed to `average`. Minimum possible value is 0. Maximum possible value is 1. Higher f1 scores are better.\r\n\r\nExamples:\r\n\r\n Example 1-A simple binary example\r\n >>> f1_metric = evaluate.load(\"f1\")\r\n >>> results = f1_metric.compute(references=[0, 1, 0, 1, 0], predictions=[0, 0, 1, 1, 0])\r\n >>> print(results)\r\n {'f1': 0.5}\r\n\r\n Example 2-The same simple binary example as in Example 1, but with `pos_label` set to `0`.\r\n >>> f1_metric = evaluate.load(\"f1\")\r\n >>> results = f1_metric.compute(references=[0, 1, 0, 1, 0], predictions=[0, 0, 1, 1, 0], pos_label=0)\r\n >>> print(round(results['f1'], 2))\r\n 0.67\r\n\r\n Example 3-The same simple binary example as in Example 1, but with `sample_weight` included.\r\n >>> f1_metric = evaluate.load(\"f1\")\r\n >>> results = f1_metric.compute(references=[0, 1, 0, 1, 0], predictions=[0, 0, 1, 1, 0], sample_weight=[0.9, 0.5, 3.9, 1.2, 0.3])\r\n >>> print(round(results['f1'], 2))\r\n 0.35\r\n\r\n Example 4-A multiclass example, with different values for the `average` input.\r\n >>> predictions = [0, 2, 1, 0, 0, 1]\r\n >>> references = [0, 1, 2, 0, 1, 2]\r\n >>> results = f1_metric.compute(predictions=predictions, references=references, average=\"macro\")\r\n >>> print(round(results['f1'], 2))\r\n 0.27\r\n >>> results = f1_metric.compute(predictions=predictions, references=references, average=\"micro\")\r\n >>> print(round(results['f1'], 2))\r\n 0.33\r\n >>> results = f1_metric.compute(predictions=predictions, references=references, average=\"weighted\")\r\n >>> print(round(results['f1'], 2))\r\n 0.27\r\n >>> results = f1_metric.compute(predictions=predictions, references=references, average=None)\r\n >>> print(results)\r\n {'f1': array([0.8, 0. , 0. ])}\r\n\r\n Example 5-A multi-label example\r\n >>> f1_metric = evaluate.load(\"f1\", \"multilabel\")\r\n >>> results = f1_metric.compute(predictions=[[0, 1, 1], [1, 1, 0]], references=[[0, 1, 1], [0, 1, 0]], average=\"macro\")\r\n >>> print(round(results['f1'], 2))\r\n 0.67\r\n\"\"\", stored examples: 0)\" of type <class 'evaluate_modules.metrics.evaluate-metric--f1.0ca73f6cf92ef5a268320c697f7b940d1030f8471714bffdb6856c641b818974.f1.F1'> for key \"eval/f1\" as a scalar. 
This invocation of Tensorboard's writer.add_scalar() is incorrect so we dropped this attribute.\r\nTrainer is attempting to log a value of \"EvaluationModule(name: \"recall\", module_type: \"metric\", features: {'predictions': Value(dtype='int32', id=None), 'references': Value(dtype='int32', id=None)}, usage: \"\"\"\r\nArgs:\r\n- **predictions** (`list` of `int`): The predicted labels.\r\n- **references** (`list` of `int`): The ground truth labels.\r\n- **labels** (`list` of `int`): The set of labels to include when `average` is not set to `binary`, and their order when average is `None`. Labels present in the data can be excluded in this input, for example to calculate a multiclass average ignoring a majority negative class, while labels not present in the data will result in 0 components in a macro average. For multilabel targets, labels are column indices. By default, all labels in y_true and y_pred are used in sorted order. Defaults to None.\r\n- **pos_label** (`int`): The class label to use as the 'positive class' when calculating the recall. Defaults to `1`.\r\n- **average** (`string`): This parameter is required for multiclass/multilabel targets. If None, the scores for each class are returned. Otherwise, this determines the type of averaging performed on the data. Defaults to `'binary'`.\r\n - `'binary'`: Only report results for the class specified by `pos_label`. This is applicable only if the target labels and predictions are binary.\r\n - `'micro'`: Calculate metrics globally by counting the total true positives, false negatives, and false positives.\r\n - `'macro'`: Calculate metrics for each label, and find their unweighted mean. This does not take label imbalance into account.\r\n - `'weighted'`: Calculate metrics for each label, and find their average weighted by support (the number of true instances for each label). This alters `'macro'` to account for label imbalance. Note that it can result in an F-score that is not between precision and recall.\r\n - `'samples'`: Calculate metrics for each instance, and find their average (only meaningful for multilabel classification).\r\n- **sample_weight** (`list` of `float`): Sample weights Defaults to `None`.\r\n- **zero_division** (): Sets the value to return when there is a zero division. Defaults to .\r\n - `'warn'`: If there is a zero division, the return value is `0`, but warnings are also raised.\r\n - `0`: If there is a zero division, the return value is `0`.\r\n - `1`: If there is a zero division, the return value is `1`.\r\n\r\nReturns:\r\n- **recall** (`float`, or `array` of `float`): Either the general recall score, or the recall scores for individual classes, depending on the values input to `labels` and `average`. Minimum possible value is 0. Maximum possible value is 1. A higher recall means that more of the positive examples have been labeled correctly. 
Therefore, a higher recall is generally considered better.\r\n\r\nExamples:\r\n\r\n Example 1-A simple example with some errors\r\n >>> recall_metric = evaluate.load('recall')\r\n >>> results = recall_metric.compute(references=[0, 0, 1, 1, 1], predictions=[0, 1, 0, 1, 1])\r\n >>> print(results)\r\n {'recall': 0.6666666666666666}\r\n\r\n Example 2-The same example as Example 1, but with `pos_label=0` instead of the default `pos_label=1`.\r\n >>> recall_metric = evaluate.load('recall')\r\n >>> results = recall_metric.compute(references=[0, 0, 1, 1, 1], predictions=[0, 1, 0, 1, 1], pos_label=0)\r\n >>> print(results)\r\n {'recall': 0.5}\r\n\r\n Example 3-The same example as Example 1, but with `sample_weight` included.\r\n >>> recall_metric = evaluate.load('recall')\r\n >>> sample_weight = [0.9, 0.2, 0.9, 0.3, 0.8]\r\n >>> results = recall_metric.compute(references=[0, 0, 1, 1, 1], predictions=[0, 1, 0, 1, 1], sample_weight=sample_weight)\r\n >>> print(results)\r\n {'recall': 0.55}\r\n\r\n Example 4-A multiclass example, using different averages.\r\n >>> recall_metric = evaluate.load('recall')\r\n >>> predictions = [0, 2, 1, 0, 0, 1]\r\n >>> references = [0, 1, 2, 0, 1, 2]\r\n >>> results = recall_metric.compute(predictions=predictions, references=references, average='macro')\r\n >>> print(results)\r\n {'recall': 0.3333333333333333}\r\n >>> results = recall_metric.compute(predictions=predictions, references=references, average='micro')\r\n >>> print(results)\r\n {'recall': 0.3333333333333333}\r\n >>> results = recall_metric.compute(predictions=predictions, references=references, average='weighted')\r\n >>> print(results)\r\n {'recall': 0.3333333333333333}\r\n >>> results = recall_metric.compute(predictions=predictions, references=references, average=None)\r\n >>> print(results)\r\n {'recall': array([1., 0., 0.])}\r\n\"\"\", stored examples: 0)\" of type <class 'evaluate_modules.metrics.evaluate-metric--recall.e40e6e98d18ff3f210f4d0b26fa721bfaa80704b1fdf890fa551cfabf94fc185.recall.Recall'> for key \"eval/recall\" as a scalar. This invocation of Tensorboard's writer.add_scalar() is incorrect so we dropped this attribute.\r\nException ignored in: <function BaseFileLock.__del__ at 0x7fb2db3b1160>\r\nTraceback (most recent call last):\r\n File \"/home/master/anaconda3/lib/python3.9/site-packages/datasets/utils/filelock.py\", line 328, in __del__\r\n self.release(force=True)\r\n File \"/home/master/anaconda3/lib/python3.9/site-packages/datasets/utils/filelock.py\", line 304, in release\r\n with self._thread_lock:\r\nAttributeError: 'UnixFileLock' object has no attribute '_thread_lock'\r\n\r\n\r\n\r\n********************\r\n\r\n\r\n\r\nTrainerState(epoch=1.0, global_step=944, max_steps=1888, num_train_epochs=2, total_flos=256413353347800.0, log_history=[{'loss': 0.084, 'learning_rate': 1.4703389830508477e-05, 'epoch': 0.53, 'step': 500}, {'eval_loss': 0.2768215239048004, 'eval_accuracy': EvaluationModule(name: \"accuracy\", module_type: \"metric\", features: {'predictions': Value(dtype='int32', id=None), 'references': Value(dtype='int32', id=None)}, usage: \"\"\"\r\nArgs:\r\n predictions (`list` of `int`): Predicted labels.\r\n references (`list` of `int`): Ground truth labels.\r\n normalize (`boolean`): If set to False, returns the number of correctly classified samples. Otherwise, returns the fraction of correctly classified samples. Defaults to True.\r\n sample_weight (`list` of `float`): Sample weights Defaults to None.\r\n\r\nReturns:\r\n accuracy (`float` or `int`): Accuracy score. 
Minimum possible value is 0. Maximum possible value is 1.0, or the number of examples input, if `normalize` is set to `True`.. A higher score means higher accuracy.\r\n\r\nExamples:\r\n\r\n Example 1-A simple example\r\n >>> accuracy_metric = evaluate.load(\"accuracy\")\r\n >>> results = accuracy_metric.compute(references=[0, 1, 2, 0, 1, 2], predictions=[0, 1, 1, 2, 1, 0])\r\n >>> print(results)\r\n {'accuracy': 0.5}\r\n\r\n Example 2-The same as Example 1, except with `normalize` set to `False`.\r\n >>> accuracy_metric = evaluate.load(\"accuracy\")\r\n >>> results = accuracy_metric.compute(references=[0, 1, 2, 0, 1, 2], predictions=[0, 1, 1, 2, 1, 0], normalize=False)\r\n >>> print(results)\r\n {'accuracy': 3.0}\r\n\r\n Example 3-The same as Example 1, except with `sample_weight` set.\r\n >>> accuracy_metric = evaluate.load(\"accuracy\")\r\n >>> results = accuracy_metric.compute(references=[0, 1, 2, 0, 1, 2], predictions=[0, 1, 1, 2, 1, 0], sample_weight=[0.5, 2, 0.7, 0.5, 9, 0.4])\r\n >>> print(results)\r\n {'accuracy': 0.8778625954198473}\r\n\"\"\", stored examples: 0), 'eval_precision': EvaluationModule(name: \"precision\", module_type: \"metric\", features: {'predictions': Value(dtype='int32', id=None), 'references': Value(dtype='int32', id=None)}, usage: \"\"\"\r\nArgs:\r\n predictions (`list` of `int`): Predicted class labels.\r\n references (`list` of `int`): Actual class labels.\r\n labels (`list` of `int`): The set of labels to include when `average` is not set to `'binary'`. If `average` is `None`, it should be the label order. Labels present in the data can be excluded, for example to calculate a multiclass average ignoring a majority negative class. Labels not present in the data will result in 0 components in a macro average. For multilabel targets, labels are column indices. By default, all labels in `predictions` and `references` are used in sorted order. Defaults to None.\r\n pos_label (`int`): The class to be considered the positive class, in the case where `average` is set to `binary`. Defaults to 1.\r\n average (`string`): This parameter is required for multiclass/multilabel targets. If set to `None`, the scores for each class are returned. Otherwise, this determines the type of averaging performed on the data. Defaults to `'binary'`.\r\n\r\n - 'binary': Only report results for the class specified by `pos_label`. This is applicable only if the classes found in `predictions` and `references` are binary.\r\n - 'micro': Calculate metrics globally by counting the total true positives, false negatives and false positives.\r\n - 'macro': Calculate metrics for each label, and find their unweighted mean. This does not take label imbalance into account.\r\n - 'weighted': Calculate metrics for each label, and find their average weighted by support (the number of true instances for each label). This alters `'macro'` to account for label imbalance. This option can result in an F-score that is not between precision and recall.\r\n - 'samples': Calculate metrics for each instance, and find their average (only meaningful for multilabel classification).\r\n sample_weight (`list` of `float`): Sample weights Defaults to None.\r\n zero_division (`int` or `string`): Sets the value to return when there is a zero division. 
Defaults to 'warn'.\r\n\r\n - 0: Returns 0 when there is a zero division.\r\n - 1: Returns 1 when there is a zero division.\r\n - 'warn': Raises warnings and then returns 0 when there is a zero division.\r\n\r\nReturns:\r\n precision (`float` or `array` of `float`): Precision score or list of precision scores, depending on the value passed to `average`. Minimum possible value is 0. Maximum possible value is 1. Higher values indicate that fewer negative examples were incorrectly labeled as positive, which means that, generally, higher scores are better.\r\n\r\nExamples:\r\n\r\n Example 1-A simple binary example\r\n >>> precision_metric = evaluate.load(\"precision\")\r\n >>> results = precision_metric.compute(references=[0, 1, 0, 1, 0], predictions=[0, 0, 1, 1, 0])\r\n >>> print(results)\r\n {'precision': 0.5}\r\n\r\n Example 2-The same simple binary example as in Example 1, but with `pos_label` set to `0`.\r\n >>> precision_metric = evaluate.load(\"precision\")\r\n >>> results = precision_metric.compute(references=[0, 1, 0, 1, 0], predictions=[0, 0, 1, 1, 0], pos_label=0)\r\n >>> print(round(results['precision'], 2))\r\n 0.67\r\n\r\n Example 3-The same simple binary example as in Example 1, but with `sample_weight` included.\r\n >>> precision_metric = evaluate.load(\"precision\")\r\n >>> results = precision_metric.compute(references=[0, 1, 0, 1, 0], predictions=[0, 0, 1, 1, 0], sample_weight=[0.9, 0.5, 3.9, 1.2, 0.3])\r\n >>> print(results)\r\n {'precision': 0.23529411764705882}\r\n\r\n Example 4-A multiclass example, with different values for the `average` input.\r\n >>> predictions = [0, 2, 1, 0, 0, 1]\r\n >>> references = [0, 1, 2, 0, 1, 2]\r\n >>> results = precision_metric.compute(predictions=predictions, references=references, average='macro')\r\n >>> print(results)\r\n {'precision': 0.2222222222222222}\r\n >>> results = precision_metric.compute(predictions=predictions, references=references, average='micro')\r\n >>> print(results)\r\n {'precision': 0.3333333333333333}\r\n >>> results = precision_metric.compute(predictions=predictions, references=references, average='weighted')\r\n >>> print(results)\r\n {'precision': 0.2222222222222222}\r\n >>> results = precision_metric.compute(predictions=predictions, references=references, average=None)\r\n >>> print([round(res, 2) for res in results['precision']])\r\n [0.67, 0.0, 0.0]\r\n\"\"\", stored examples: 0), 'eval_f1': EvaluationModule(name: \"f1\", module_type: \"metric\", features: {'predictions': Value(dtype='int32', id=None), 'references': Value(dtype='int32', id=None)}, usage: \"\"\"\r\nArgs:\r\n predictions (`list` of `int`): Predicted labels.\r\n references (`list` of `int`): Ground truth labels.\r\n labels (`list` of `int`): The set of labels to include when `average` is not set to `'binary'`, and the order of the labels if `average` is `None`. Labels present in the data can be excluded, for example to calculate a multiclass average ignoring a majority negative class. Labels not present in the data will result in 0 components in a macro average. For multilabel targets, labels are column indices. By default, all labels in `predictions` and `references` are used in sorted order. Defaults to None.\r\n pos_label (`int`): The class to be considered the positive class, in the case where `average` is set to `binary`. Defaults to 1.\r\n average (`string`): This parameter is required for multiclass/multilabel targets. If set to `None`, the scores for each class are returned. 
Otherwise, this determines the type of averaging performed on the data. Defaults to `'binary'`.\r\n\r\n - 'binary': Only report results for the class specified by `pos_label`. This is applicable only if the classes found in `predictions` and `references` are binary.\r\n - 'micro': Calculate metrics globally by counting the total true positives, false negatives and false positives.\r\n - 'macro': Calculate metrics for each label, and find their unweighted mean. This does not take label imbalance into account.\r\n - 'weighted': Calculate metrics for each label, and find their average weighted by support (the number of true instances for each label). This alters `'macro'` to account for label imbalance. This option can result in an F-score that is not between precision and recall.\r\n - 'samples': Calculate metrics for each instance, and find their average (only meaningful for multilabel classification).\r\n sample_weight (`list` of `float`): Sample weights Defaults to None.\r\n\r\nReturns:\r\n f1 (`float` or `array` of `float`): F1 score or list of f1 scores, depending on the value passed to `average`. Minimum possible value is 0. Maximum possible value is 1. Higher f1 scores are better.\r\n\r\nExamples:\r\n\r\n Example 1-A simple binary example\r\n >>> f1_metric = evaluate.load(\"f1\")\r\n >>> results = f1_metric.compute(references=[0, 1, 0, 1, 0], predictions=[0, 0, 1, 1, 0])\r\n >>> print(results)\r\n {'f1': 0.5}\r\n\r\n Example 2-The same simple binary example as in Example 1, but with `pos_label` set to `0`.\r\n >>> f1_metric = evaluate.load(\"f1\")\r\n >>> results = f1_metric.compute(references=[0, 1, 0, 1, 0], predictions=[0, 0, 1, 1, 0], pos_label=0)\r\n >>> print(round(results['f1'], 2))\r\n 0.67\r\n\r\n Example 3-The same simple binary example as in Example 1, but with `sample_weight` included.\r\n >>> f1_metric = evaluate.load(\"f1\")\r\n >>> results = f1_metric.compute(references=[0, 1, 0, 1, 0], predictions=[0, 0, 1, 1, 0], sample_weight=[0.9, 0.5, 3.9, 1.2, 0.3])\r\n >>> print(round(results['f1'], 2))\r\n 0.35\r\n\r\n Example 4-A multiclass example, with different values for the `average` input.\r\n >>> predictions = [0, 2, 1, 0, 0, 1]\r\n >>> references = [0, 1, 2, 0, 1, 2]\r\n >>> results = f1_metric.compute(predictions=predictions, references=references, average=\"macro\")\r\n >>> print(round(results['f1'], 2))\r\n 0.27\r\n >>> results = f1_metric.compute(predictions=predictions, references=references, average=\"micro\")\r\n >>> print(round(results['f1'], 2))\r\n 0.33\r\n >>> results = f1_metric.compute(predictions=predictions, references=references, average=\"weighted\")\r\n >>> print(round(results['f1'], 2))\r\n 0.27\r\n >>> results = f1_metric.compute(predictions=predictions, references=references, average=None)\r\n >>> print(results)\r\n {'f1': array([0.8, 0. , 0. 
])}\r\n\r\n Example 5-A multi-label example\r\n >>> f1_metric = evaluate.load(\"f1\", \"multilabel\")\r\n >>> results = f1_metric.compute(predictions=[[0, 1, 1], [1, 1, 0]], references=[[0, 1, 1], [0, 1, 0]], average=\"macro\")\r\n >>> print(round(results['f1'], 2))\r\n 0.67\r\n\"\"\", stored examples: 0), 'eval_recall': EvaluationModule(name: \"recall\", module_type: \"metric\", features: {'predictions': Value(dtype='int32', id=None), 'references': Value(dtype='int32', id=None)}, usage: \"\"\"\r\nArgs:\r\n- **predictions** (`list` of `int`): The predicted labels.\r\n- **references** (`list` of `int`): The ground truth labels.\r\n- **labels** (`list` of `int`): The set of labels to include when `average` is not set to `binary`, and their order when average is `None`. Labels present in the data can be excluded in this input, for example to calculate a multiclass average ignoring a majority negative class, while labels not present in the data will result in 0 components in a macro average. For multilabel targets, labels are column indices. By default, all labels in y_true and y_pred are used in sorted order. Defaults to None.\r\n- **pos_label** (`int`): The class label to use as the 'positive class' when calculating the recall. Defaults to `1`.\r\n- **average** (`string`): This parameter is required for multiclass/multilabel targets. If None, the scores for each class are returned. Otherwise, this determines the type of averaging performed on the data. Defaults to `'binary'`.\r\n - `'binary'`: Only report results for the class specified by `pos_label`. This is applicable only if the target labels and predictions are binary.\r\n - `'micro'`: Calculate metrics globally by counting the total true positives, false negatives, and false positives.\r\n - `'macro'`: Calculate metrics for each label, and find their unweighted mean. This does not take label imbalance into account.\r\n - `'weighted'`: Calculate metrics for each label, and find their average weighted by support (the number of true instances for each label). This alters `'macro'` to account for label imbalance. Note that it can result in an F-score that is not between precision and recall.\r\n - `'samples'`: Calculate metrics for each instance, and find their average (only meaningful for multilabel classification).\r\n- **sample_weight** (`list` of `float`): Sample weights Defaults to `None`.\r\n- **zero_division** (): Sets the value to return when there is a zero division. Defaults to .\r\n - `'warn'`: If there is a zero division, the return value is `0`, but warnings are also raised.\r\n - `0`: If there is a zero division, the return value is `0`.\r\n - `1`: If there is a zero division, the return value is `1`.\r\n\r\nReturns:\r\n- **recall** (`float`, or `array` of `float`): Either the general recall score, or the recall scores for individual classes, depending on the values input to `labels` and `average`. Minimum possible value is 0. Maximum possible value is 1. A higher recall means that more of the positive examples have been labeled correctly. 
Therefore, a higher recall is generally considered better.\r\n\r\nExamples:\r\n\r\n Example 1-A simple example with some errors\r\n >>> recall_metric = evaluate.load('recall')\r\n >>> results = recall_metric.compute(references=[0, 0, 1, 1, 1], predictions=[0, 1, 0, 1, 1])\r\n >>> print(results)\r\n {'recall': 0.6666666666666666}\r\n\r\n Example 2-The same example as Example 1, but with `pos_label=0` instead of the default `pos_label=1`.\r\n >>> recall_metric = evaluate.load('recall')\r\n >>> results = recall_metric.compute(references=[0, 0, 1, 1, 1], predictions=[0, 1, 0, 1, 1], pos_label=0)\r\n >>> print(results)\r\n {'recall': 0.5}\r\n\r\n Example 3-The same example as Example 1, but with `sample_weight` included.\r\n >>> recall_metric = evaluate.load('recall')\r\n >>> sample_weight = [0.9, 0.2, 0.9, 0.3, 0.8]\r\n >>> results = recall_metric.compute(references=[0, 0, 1, 1, 1], predictions=[0, 1, 0, 1, 1], sample_weight=sample_weight)\r\n >>> print(results)\r\n {'recall': 0.55}\r\n\r\n Example 4-A multiclass example, using different averages.\r\n >>> recall_metric = evaluate.load('recall')\r\n >>> predictions = [0, 2, 1, 0, 0, 1]\r\n >>> references = [0, 1, 2, 0, 1, 2]\r\n >>> results = recall_metric.compute(predictions=predictions, references=references, average='macro')\r\n >>> print(results)\r\n {'recall': 0.3333333333333333}\r\n >>> results = recall_metric.compute(predictions=predictions, references=references, average='micro')\r\n >>> print(results)\r\n {'recall': 0.3333333333333333}\r\n >>> results = recall_metric.compute(predictions=predictions, references=references, average='weighted')\r\n >>> print(results)\r\n {'recall': 0.3333333333333333}\r\n >>> results = recall_metric.compute(predictions=predictions, references=references, average=None)\r\n >>> print(results)\r\n {'recall': array([1., 0., 0.])}\r\n\"\"\", stored examples: 0), 'eval_runtime': 4.3362, 'eval_samples_per_second': 714.904, 'eval_steps_per_second': 44.739, 'epoch': 1.0, 'step': 944}], best_metric=0.2768215239048004, best_model_checkpoint='./results/checkpoint-944', is_local_process_zero=True, is_world_process_zero=True, is_hyper_param_search=False, trial_name=None, trial_params=None)\r\n\r\n\r\n\r\n********************\r\n```",
"So your metrics are not floats, but one ends up being a whole scikit-learn module, this is why you have the issue. The code you pasted is actually super weird:\r\n\r\n```\r\ndef compute_metrics(eval_pred):\r\n accuracy = load(\"accuracy\")\r\n precision = load(\"precision\")\r\n f1 = load(\"f1\")\r\n recall = load(\"recall\")\r\n \r\n predictions, labels = eval_pred\r\n predictions = np.argmax(predictions, axis=1)\r\n \r\n accuracy.compute(predictions=predictions, references=labels)\r\n precision.compute(predictions=predictions, references=labels, average=\"micro\")\r\n f1.compute(predictions=predictions, references=labels, average=\"micro\")\r\n recall.compute(predictions=predictions, references=labels, average=\"micro\")\r\n \r\n return {\"accuracy\": accuracy, \"precision\": precision, \"f1\": f1, \"recall\": recall}\r\n```\r\nYou compute the results on predictions and labels but don't store it anywhere, instead you return the metric functions (from `evaluate` I guess?) and not the computed values.",
"Great catch! I modified `compute_metrics()` to run successfully without any warnings:\r\n```python\r\ndef compute_metrics(eval_pred):\r\n accuracy = load(\"accuracy\")\r\n precision = load(\"precision\")\r\n f1 = load(\"f1\")\r\n recall = load(\"recall\")\r\n \r\n predictions, labels = eval_pred\r\n predictions = np.argmax(predictions, axis=1)\r\n \r\n \r\n accuracy_ = accuracy.compute(predictions=predictions, references=labels)[\"accuracy\"]\r\n precision_ = precision.compute(predictions=predictions, references=labels, average=\"micro\")[\"precision\"]\r\n f1_ = f1.compute(predictions=predictions, references=labels, average=\"micro\")[\"f1\"]\r\n recall_ = recall.compute(predictions=predictions, references=labels, average=\"micro\")[\"recall\"]\r\n \r\n return {\"accuracy\": accuracy_, \"precision\": precision_, \"f1\": f1_, \"recall\": recall_}\r\n```\r\n\r\nHowever, it doesn't seem like the [results make sense](https://github.com/galenballew/bert-multiclass/blob/main/Screenshot%20from%202023-04-27%2009-35-44.png). That being said, the original issue is definitely no longer an issue. I really appreciate your help--thank you! "
] | 1,682 | 1,682 | 1,682 |
NONE
| null |
### System Info
- `transformers` version: 4.28.1
- Platform: Linux-5.19.0-40-generic-x86_64-with-glibc2.35
- Python version: 3.9.13
- Huggingface_hub version: 0.13.4
- Safetensors version: not installed
- PyTorch version (GPU?): 2.0.0+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: True
- Using distributed or parallel set-up in script?: False
### Who can help?
@sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=len(classes)).to('cuda')
training_args = TrainingArguments(
output_dir="./results",
overwrite_output_dir=True,
learning_rate=2e-5,
per_device_train_batch_size=16,
per_device_eval_batch_size=16,
num_train_epochs=2,
optim="adamw_torch",
weight_decay=0.01,
evaluation_strategy="epoch",
save_strategy="epoch",
load_best_model_at_end=True,
no_cuda=False,
skip_memory_metrics=True
)
trainer = Trainer(
model=model,
args=training_args,
compute_metrics=compute_metrics,
train_dataset=train_dataset,
eval_dataset=val_dataset,
)
trainer.train()
```
Produces the following error:
```
TypeError Traceback (most recent call last)
/tmp/ipykernel_54606/4032920361.py in <module>
----> 1 trainer.train()
~/anaconda3/lib/python3.9/site-packages/transformers/trainer.py in train(self, resume_from_checkpoint, trial, ignore_keys_for_eval, **kwargs)
1660 self._inner_training_loop, self._train_batch_size, args.auto_find_batch_size
1661 )
-> 1662 return inner_training_loop(
1663 args=args,
1664 resume_from_checkpoint=resume_from_checkpoint,
~/anaconda3/lib/python3.9/site-packages/transformers/trainer.py in _inner_training_loop(self, batch_size, args, resume_from_checkpoint, trial, ignore_keys_for_eval)
2019
2020 self.control = self.callback_handler.on_epoch_end(args, self.state, self.control)
-> 2021 self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)
2022
2023 if DebugOption.TPU_METRICS_DEBUG in self.args.debug:
~/anaconda3/lib/python3.9/site-packages/transformers/trainer.py in _maybe_log_save_evaluate(self, tr_loss, model, trial, epoch, ignore_keys_for_eval)
2289
2290 if self.control.should_save:
-> 2291 self._save_checkpoint(model, trial, metrics=metrics)
2292 self.control = self.callback_handler.on_save(self.args, self.state, self.control)
2293
~/anaconda3/lib/python3.9/site-packages/transformers/trainer.py in _save_checkpoint(self, model, trial, metrics)
2405 # Save the Trainer state
2406 if self.args.should_save:
-> 2407 self.state.save_to_json(os.path.join(output_dir, TRAINER_STATE_NAME))
2408
2409 # Save RNG state in non-distributed training
~/anaconda3/lib/python3.9/site-packages/transformers/trainer_callback.py in save_to_json(self, json_path)
95 def save_to_json(self, json_path: str):
96 """Save the content of this instance in JSON format inside `json_path`."""
---> 97 json_string = json.dumps(dataclasses.asdict(self), indent=2, sort_keys=True) + "\n"
98 with open(json_path, "w", encoding="utf-8") as f:
99 f.write(json_string)
~/anaconda3/lib/python3.9/dataclasses.py in asdict(obj, dict_factory)
1073 if not _is_dataclass_instance(obj):
1074 raise TypeError("asdict() should be called on dataclass instances")
-> 1075 return _asdict_inner(obj, dict_factory)
1076
1077
~/anaconda3/lib/python3.9/dataclasses.py in _asdict_inner(obj, dict_factory)
1080 result = []
1081 for f in fields(obj):
-> 1082 value = _asdict_inner(getattr(obj, f.name), dict_factory)
1083 result.append((f.name, value))
1084 return dict_factory(result)
~/anaconda3/lib/python3.9/dataclasses.py in _asdict_inner(obj, dict_factory)
1108 # generator (which is not true for namedtuples, handled
1109 # above).
-> 1110 return type(obj)(_asdict_inner(v, dict_factory) for v in obj)
1111 elif isinstance(obj, dict):
1112 return type(obj)((_asdict_inner(k, dict_factory),
~/anaconda3/lib/python3.9/dataclasses.py in <genexpr>(.0)
1108 # generator (which is not true for namedtuples, handled
1109 # above).
-> 1110 return type(obj)(_asdict_inner(v, dict_factory) for v in obj)
1111 elif isinstance(obj, dict):
1112 return type(obj)((_asdict_inner(k, dict_factory),
~/anaconda3/lib/python3.9/dataclasses.py in _asdict_inner(obj, dict_factory)
1110 return type(obj)(_asdict_inner(v, dict_factory) for v in obj)
1111 elif isinstance(obj, dict):
-> 1112 return type(obj)((_asdict_inner(k, dict_factory),
1113 _asdict_inner(v, dict_factory))
1114 for k, v in obj.items())
~/anaconda3/lib/python3.9/dataclasses.py in <genexpr>(.0)
1111 elif isinstance(obj, dict):
1112 return type(obj)((_asdict_inner(k, dict_factory),
-> 1113 _asdict_inner(v, dict_factory))
1114 for k, v in obj.items())
1115 else:
~/anaconda3/lib/python3.9/dataclasses.py in _asdict_inner(obj, dict_factory)
1114 for k, v in obj.items())
1115 else:
-> 1116 return copy.deepcopy(obj)
1117
1118
~/anaconda3/lib/python3.9/copy.py in deepcopy(x, memo, _nil)
170 y = x
171 else:
--> 172 y = _reconstruct(x, memo, *rv)
173
174 # If is its own copy, don't memoize.
~/anaconda3/lib/python3.9/copy.py in _reconstruct(x, memo, func, args, state, listiter, dictiter, deepcopy)
268 if state is not None:
269 if deep:
--> 270 state = deepcopy(state, memo)
271 if hasattr(y, '__setstate__'):
272 y.__setstate__(state)
~/anaconda3/lib/python3.9/copy.py in deepcopy(x, memo, _nil)
144 copier = _deepcopy_dispatch.get(cls)
145 if copier is not None:
--> 146 y = copier(x, memo)
147 else:
148 if issubclass(cls, type):
~/anaconda3/lib/python3.9/copy.py in _deepcopy_dict(x, memo, deepcopy)
228 memo[id(x)] = y
229 for key, value in x.items():
--> 230 y[deepcopy(key, memo)] = deepcopy(value, memo)
231 return y
232 d[dict] = _deepcopy_dict
~/anaconda3/lib/python3.9/copy.py in deepcopy(x, memo, _nil)
144 copier = _deepcopy_dispatch.get(cls)
145 if copier is not None:
--> 146 y = copier(x, memo)
147 else:
148 if issubclass(cls, type):
~/anaconda3/lib/python3.9/copy.py in _deepcopy_list(x, memo, deepcopy)
203 append = y.append
204 for a in x:
--> 205 append(deepcopy(a, memo))
206 return y
207 d[list] = _deepcopy_list
~/anaconda3/lib/python3.9/copy.py in deepcopy(x, memo, _nil)
170 y = x
171 else:
--> 172 y = _reconstruct(x, memo, *rv)
173
174 # If is its own copy, don't memoize.
~/anaconda3/lib/python3.9/copy.py in _reconstruct(x, memo, func, args, state, listiter, dictiter, deepcopy)
268 if state is not None:
269 if deep:
--> 270 state = deepcopy(state, memo)
271 if hasattr(y, '__setstate__'):
272 y.__setstate__(state)
~/anaconda3/lib/python3.9/copy.py in deepcopy(x, memo, _nil)
144 copier = _deepcopy_dispatch.get(cls)
145 if copier is not None:
--> 146 y = copier(x, memo)
147 else:
148 if issubclass(cls, type):
~/anaconda3/lib/python3.9/copy.py in _deepcopy_dict(x, memo, deepcopy)
228 memo[id(x)] = y
229 for key, value in x.items():
--> 230 y[deepcopy(key, memo)] = deepcopy(value, memo)
231 return y
232 d[dict] = _deepcopy_dict
~/anaconda3/lib/python3.9/copy.py in deepcopy(x, memo, _nil)
159 reductor = getattr(x, "__reduce_ex__", None)
160 if reductor is not None:
--> 161 rv = reductor(4)
162 else:
163 reductor = getattr(x, "__reduce__", None)
TypeError: cannot pickle '_thread.lock' object
```
### Expected behavior
Training and eval proceed smoothly. I think the Trainer is trying to save the checkpoint and failing at that point. I'd like to complete training/eval and be able to load from a non-corrupt checkpoint.
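For illustration (added note, not part of the original report), here is a minimal sketch of the suspected failure mode: if `compute_metrics` returns `evaluate` metric objects instead of floats, they end up in `TrainerState.log_history` (as the metric dump in the comments above suggests), and the `copy.deepcopy` performed inside `dataclasses.asdict` while writing `trainer_state.json` then hits a non-picklable thread lock.

```python
import copy
from evaluate import load

# Hypothetical reproduction outside of the Trainer: a metric *object* stored where a float belongs.
accuracy_metric = load("accuracy")
fake_log_history = [{"eval_accuracy": accuracy_metric, "epoch": 1.0, "step": 944}]

# Trainer._save_checkpoint -> TrainerState.save_to_json -> dataclasses.asdict -> copy.deepcopy
try:
    copy.deepcopy(fake_log_history)
except TypeError as err:
    print(err)  # expected to match the "cannot pickle '_thread.lock' object" error above
```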
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22980/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22980/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/22979
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22979/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22979/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22979/events
|
https://github.com/huggingface/transformers/issues/22979
| 1,682,551,376 |
I_kwDOCUB6oc5kSbJQ
| 22,979 |
transition scores can be negative infinity
|
{
"login": "myazdani",
"id": 3857492,
"node_id": "MDQ6VXNlcjM4NTc0OTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/3857492?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/myazdani",
"html_url": "https://github.com/myazdani",
"followers_url": "https://api.github.com/users/myazdani/followers",
"following_url": "https://api.github.com/users/myazdani/following{/other_user}",
"gists_url": "https://api.github.com/users/myazdani/gists{/gist_id}",
"starred_url": "https://api.github.com/users/myazdani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/myazdani/subscriptions",
"organizations_url": "https://api.github.com/users/myazdani/orgs",
"repos_url": "https://api.github.com/users/myazdani/repos",
"events_url": "https://api.github.com/users/myazdani/events{/privacy}",
"received_events_url": "https://api.github.com/users/myazdani/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @gante ",
"I have found that this problem also occurs when running on GPU, but torch.multinomial behaves as expected on GPU (erroneously sampling elements with prob 0 only happens on CPU with float data type). So I'm not sure why we are seeing -inf scores here. ",
"Hey @myazdani π Thank you for raising the issue!\r\n\r\nUhmmm this is a very annoying PyTorch problem. In practice, we have two options:\r\n1. Wait for PT to fix the issue;\r\n2. Add a workaround ourselves, e.g. sample N tokens at each step and select the first non-`-inf` token. Any workaround will add an execution time overhead, which is also undesirable.\r\n\r\nGiven that the number of problematic tokens is very high (~0.158%[CPU]/~0.000%[GPU] of the tokens in a test run π ), I'll add a workaround ASAP!\r\n\r\n________________________________\r\n\r\nscript used to get the error ratio:\r\n```py\r\nfrom transformers import GPT2Tokenizer, AutoModelForCausalLM\r\nimport torch\r\nfrom tqdm import tqdm\r\n\r\ntokenizer = GPT2Tokenizer.from_pretrained(\"distilgpt2\")\r\nmodel = AutoModelForCausalLM.from_pretrained(\"distilgpt2\").to(\"cuda\")\r\ntokenizer.pad_token_id = tokenizer.eos_token_id\r\n# batch size == 1, larger batch sizes have nuances that are not relevant here\r\ninputs = tokenizer([\"Today is\"], return_tensors=\"pt\").to(\"cuda\")\r\n\r\ntorch.manual_seed(10)\r\ninvalid_tokens = 0\r\nfor i in tqdm(range(10000)):\r\n outputs = model.generate(**inputs, max_new_tokens=15, return_dict_in_generate=True,\r\n output_scores=True,\r\n do_sample=True,\r\n temperature=0.9,\r\n top_k=40,\r\n pad_token_id=tokenizer.eos_token_id)\r\n transition_scores = model.compute_transition_scores(\r\n outputs.sequences, outputs.scores, normalize_logits=False\r\n )\r\n\r\n invalid_tokens += torch.isinf(transition_scores).sum().item()\r\n\r\nprint(f\"invalid token ratio: {(invalid_tokens / (10000 * 15))*100:.4f}%\", )\r\n```",
"We got in touch with the PT team, which should give a hand on the problem :)",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"It looks like the PyTorch multinomial issue has been fixed: https://github.com/pytorch/pytorch/pull/101720\r\n\r\nI verified with the nightly build that the torch.multinomial works as expected. \r\n\r\nHowever, I am still getting -inf values in the transition scores `compute_transition_scores`, cc: @gante @sgugger ",
"Hey @myazdani -- is the script to reproduce the issue with PT nightly still the same as the one at the top?",
"Yes @gante I'm running:\r\n```\r\nfrom transformers import GPT2Tokenizer, AutoModelForCausalLM\r\nimport torch\r\nfrom tqdm import tqdm\r\n\r\n\r\ntokenizer = GPT2Tokenizer.from_pretrained(\"gpt2\")\r\nmodel = AutoModelForCausalLM.from_pretrained(\"gpt2\")\r\ntokenizer.pad_token_id = tokenizer.eos_token_id\r\ninputs = tokenizer(5*[\"Today is\"], return_tensors=\"pt\")\r\n\r\ntorch.manual_seed(10)\r\nfor i in range(100):\r\n outputs = model.generate(**inputs, max_new_tokens=15, return_dict_in_generate=True, \r\n output_scores=True, \r\n do_sample=True, \r\n temperature=0.9, \r\n top_k=40, \r\n pad_token_id=tokenizer.eos_token_id)\r\n transition_scores = model.compute_transition_scores(\r\n outputs.sequences, outputs.scores, normalize_logits=False\r\n )\r\n\r\n if torch.isinf(transition_scores).any().item():\r\n print(i)\r\n break \r\n```\r\n\r\nLoop breaks at i=22. Below is my env:\r\n```\r\n- huggingface_hub version: 0.15.1\r\n- Platform: Linux-5.15.107+-x86_64-with-glibc2.31\r\n- Python version: 3.10.12\r\n- Running in iPython ?: Yes\r\n- iPython shell: Shell\r\n- Running in notebook ?: Yes\r\n- Running in Google Colab ?: Yes\r\n- Token path ?: /root/.cache/huggingface/token\r\n- Has saved token ?: False\r\n- Configured git credential helpers: \r\n- FastAI: 2.7.12\r\n- Tensorflow: 2.12.0\r\n- Torch: 2.1.0.dev20230612+cu118\r\n- Jinja2: 3.1.2\r\n- Graphviz: 0.20.1\r\n- Pydot: 1.4.2\r\n- Pillow: 8.4.0\r\n- hf_transfer: N/A\r\n- gradio: N/A\r\n- numpy: 1.25.0rc1\r\n- ENDPOINT: https://huggingface.co/\r\n- HUGGINGFACE_HUB_CACHE: /root/.cache/huggingface/hub\r\n- HUGGINGFACE_ASSETS_CACHE: /root/.cache/huggingface/assets\r\n- HF_TOKEN_PATH: /root/.cache/huggingface/token\r\n- HF_HUB_OFFLINE: False\r\n- HF_HUB_DISABLE_TELEMETRY: False\r\n- HF_HUB_DISABLE_PROGRESS_BARS: None\r\n- HF_HUB_DISABLE_SYMLINKS_WARNING: False\r\n- HF_HUB_DISABLE_EXPERIMENTAL_WARNING: False\r\n- HF_HUB_DISABLE_IMPLICIT_TOKEN: False\r\n- HF_HUB_ENABLE_HF_TRANSFER: False\r\n\r\n{'huggingface_hub version': '0.15.1',\r\n 'Platform': 'Linux-5.15.107+-x86_64-with-glibc2.31',\r\n 'Python version': '3.10.12',\r\n 'Running in iPython ?': 'Yes',\r\n 'iPython shell': 'Shell',\r\n 'Running in notebook ?': 'Yes',\r\n 'Running in Google Colab ?': 'Yes',\r\n 'Token path ?': PosixPath('/root/.cache/huggingface/token'),\r\n 'Has saved token ?': False,\r\n 'Configured git credential helpers': '',\r\n 'FastAI': '2.7.12',\r\n 'Tensorflow': '2.12.0',\r\n 'Torch': '2.1.0.dev20230612+cu118',\r\n 'Jinja2': '3.1.2',\r\n 'Graphviz': '0.20.1',\r\n 'Pydot': '1.4.2',\r\n 'Pillow': '8.4.0',\r\n 'hf_transfer': 'N/A',\r\n 'gradio': 'N/A',\r\n 'numpy': '1.25.0rc1',\r\n 'ENDPOINT': 'https://huggingface.co/',\r\n 'HUGGINGFACE_HUB_CACHE': '/root/.cache/huggingface/hub',\r\n 'HUGGINGFACE_ASSETS_CACHE': '/root/.cache/huggingface/assets',\r\n 'HF_TOKEN_PATH': '/root/.cache/huggingface/token',\r\n 'HF_HUB_OFFLINE': False,\r\n 'HF_HUB_DISABLE_TELEMETRY': False,\r\n 'HF_HUB_DISABLE_PROGRESS_BARS': None,\r\n 'HF_HUB_DISABLE_SYMLINKS_WARNING': False,\r\n 'HF_HUB_DISABLE_EXPERIMENTAL_WARNING': False,\r\n 'HF_HUB_DISABLE_IMPLICIT_TOKEN': False,\r\n 'HF_HUB_ENABLE_HF_TRANSFER': False}\r\n```\r\n\r\n",
"@myazdani In this case everything is fine :D \r\n\r\nIf you print the transition scores for the row with `-inf` you see\r\n```\r\n[-128.0551, -120.1372, -91.5507, -70.1001, -88.0797, -100.0098,\r\n -82.7555, -34.6997, -inf, -inf, -inf, -inf, -inf, -inf, -inf],\r\n```\r\nand, if you print the sequence, you see\r\n```\r\n[ 8888, 318, 257, 6507, 640, 329, 262, 1499, 526, 50256,\r\n 50256, 50256, 50256, 50256, 50256, 50256, 50256]\r\n```\r\n\r\n`50256` is the EOS token, so the `-inf` here only exists due to padding :)\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,682 | 1,689 | 1,689 |
NONE
| null |
### System Info
Running transformers 4.28.1 in google colab:
Collecting environment information...
PyTorch version: 2.0.0+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.5 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: 10.0.0-4ubuntu1
CMake version: version 3.25.2
Libc version: glibc-2.31
Python version: 3.9.16 (main, Dec 7 2022, 01:11:51) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.10.147+-x86_64-with-glibc2.31
Is CUDA available: False
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.7.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 2
On-line CPU(s) list: 0,1
Thread(s) per core: 2
Core(s) per socket: 1
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 79
Model name: Intel(R) Xeon(R) CPU @ 2.20GHz
Stepping: 0
CPU MHz: 2199.998
BogoMIPS: 4399.99
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 32 KiB
L1i cache: 32 KiB
L2 cache: 256 KiB
L3 cache: 55 MiB
NUMA node0 CPU(s): 0,1
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable; SMT Host state unknown
Vulnerability Meltdown: Vulnerable
Vulnerability Mmio stale data: Vulnerable
Vulnerability Retbleed: Vulnerable
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable, IBPB: disabled, STIBP: disabled, PBRSB-eIBRS: Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Vulnerable
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm rdseed adx smap xsaveopt arat md_clear arch_capabilities
Versions of relevant libraries:
[pip3] numpy==1.22.4
[pip3] torch==2.0.0+cu118
[pip3] torchaudio==2.0.1+cu118
[pip3] torchdata==0.6.0
[pip3] torchsummary==1.5.1
[pip3] torchtext==0.15.1
[pip3] torchvision==0.15.1+cu118
[pip3] triton==2.0.0
[conda] Could not collect
### Who can help?
@gante
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
The code example is this:
```
from transformers import GPT2Tokenizer, AutoModelForCausalLM
import torch
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer.pad_token_id = tokenizer.eos_token_id
inputs = tokenizer(5*["Today is"], return_tensors="pt")
torch.manual_seed(10)
for i in range(100):
outputs = model.generate(**inputs, max_new_tokens=15, return_dict_in_generate=True,
output_scores=True,
do_sample=True,
temperature=0.9,
top_k=40,
pad_token_id=tokenizer.eos_token_id)
transition_scores = model.compute_transition_scores(
outputs.sequences, outputs.scores, normalize_logits=False
)
if torch.isinf(transition_scores).any().item():
print(i)
break
```
Colab link: https://colab.research.google.com/drive/12KIOKGfZtoChC1ohTlesUWEL6AZT_luo?usp=sharing
### Expected behavior
I expect `transition_scores` to be finite; however, on my end I see torch.isinf(transition_scores) == True for i = 0.
I traced the issue and it is actually not in compute_transition_scores but originates in the original scores returned in outputs.scores.
The issue specifically happens when we use a non-greedy sampling approach (i.e., do_sample=True). I looked deeper and I think the issue is caused by torch.multinomial selecting tokens with probability 0 (when it shouldn't), but I'm not sure: https://github.com/pytorch/pytorch/issues/48841
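As a rough diagnostic sketch (added for illustration; the variable names come from the snippet above), one can check whether the `-inf` entries coincide with padded positions after an early EOS or with genuinely sampled zero-probability tokens:

```python
# Hypothetical check, reusing inputs/outputs/transition_scores/tokenizer from the snippet above.
input_length = inputs["input_ids"].shape[1]
generated_tokens = outputs.sequences[:, input_length:]

# eos_token_id doubles as the pad token in the snippet, so padded positions look like EOS.
non_padding = generated_tokens != tokenizer.eos_token_id
problematic = torch.isinf(transition_scores) & non_padding
print("Number of -inf scores outside padding:", problematic.sum().item())
```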
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22979/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22979/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/22978
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22978/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22978/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22978/events
|
https://github.com/huggingface/transformers/issues/22978
| 1,682,506,726 |
I_kwDOCUB6oc5kSQPm
| 22,978 |
Deadlock in Image Processor of ViT by using OpenMP and Kserve
|
{
"login": "harshyadav17",
"id": 21151610,
"node_id": "MDQ6VXNlcjIxMTUxNjEw",
"avatar_url": "https://avatars.githubusercontent.com/u/21151610?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/harshyadav17",
"html_url": "https://github.com/harshyadav17",
"followers_url": "https://api.github.com/users/harshyadav17/followers",
"following_url": "https://api.github.com/users/harshyadav17/following{/other_user}",
"gists_url": "https://api.github.com/users/harshyadav17/gists{/gist_id}",
"starred_url": "https://api.github.com/users/harshyadav17/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/harshyadav17/subscriptions",
"organizations_url": "https://api.github.com/users/harshyadav17/orgs",
"repos_url": "https://api.github.com/users/harshyadav17/repos",
"events_url": "https://api.github.com/users/harshyadav17/events{/privacy}",
"received_events_url": "https://api.github.com/users/harshyadav17/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"@amyeroberts can you please help on this ?\r\n",
"By searching \"OMP_NUM_THREADS\" \"deadlock\" on Google, it seems it's a general issue when `OMP_NUM_THREADS > 1`. Unfortunately, I am afraid there is no doable fix on `transformers` side.\r\n",
"Furthermore, this issue also involves the usage of `KServe`: it fits better on [HF Forum](https://discuss.huggingface.co/) to see if any user has the same issue and if some workaround is found, or maybe better, on `OpenMP` or `KServe` pages/forums.",
"Hi @ydshieh\r\nthanks for the prompt response.\r\n\r\nI face an interesting correlation with this issue. Whenever we initialise the gcp vision client in the predict method, the pipeline works perfectly.\r\n\r\nThis is how we can replicate the same.\r\n\r\n`from google.cloud import vision`\r\n`os.environ[\"GOOGLE_APPLICATION_CREDENTIALS\"] = \"<path_to_service_account_key>\"`\r\n`_ = vision.ImageAnnotatorClient()`\r\n\r\nIn order to get the inference, following is the code (to be run from another kernel):\r\n\r\n`import requests`\r\n`service_response = requests.post('http://localhost:8095/v1/models/test1234:predict', json={})`\r\n\r\nI hope we can make something out of it and contribute to the open source community. This issue happens with most of the Image Processors of Hugging Face.\r\n\r\nThanks!",
"@harshyadav17 Is it possible for you to remove the parts that involve `Kserve`, and just keep `OMP_NUM_THREADS > 1` to see if the issue is still there? If we can reproduce in this case, it might be much easier to dive into.\r\n\r\nAnd it's also nice to see if `Kserve workers = 1` will give the issue or not.",
"@ydshieh with kserve workers = 1, the script runs perfectly. We won't be able replicate the issue if Kserve is removed from the script. \r\n\r\nMoreover, if we add that gcp vision client, the script works with every setting possible. So can we please have a look at why this is happening. GCP client is completely unrelated over here but the Huggingface processor doesn't show us a deadlock. This can help us in implementing the solution to HF processors.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,682 | 1,685 | 1,685 |
NONE
| null |
### System Info
- `transformers` version: 4.28.0
- Platform: Linux-5.4.0-139-generic-x86_64-with-glibc2.2.5
- Python version: 3.8.13
- Huggingface_hub version: 0.13.4
- Safetensors version: not installed
- PyTorch version (GPU?): 1.13.0+cpu (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: NO
- Using distributed or parallel set-up in script?: Yes (using OpenMP)
- Kserve version: 0.9.0
- g++|gcc (Debian 10.2.1-6) 10.2.1 20210110
- cv2: 4.5.5
- numpy: 1.21.6
### Who can help?
- vision models: @amyeroberts
- tokenizer: @ArthurZucker
- PyTorch: @sgu
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
We face the deadlock at AutoImageProcessor (forward pass) with the following settings:
1. If we use OMP_NUM_THREADS > 1 (to activate intra-op parallelism in pytorch) and Kserve workers > 1
The code works in the following setting:
1. If OMP_NUM_THREADS == 1, then irrespective of the kserve workers the script works, but the inference time increases (by ~2x).
CODE SNIPPET: https://gist.github.com/harshyadav17/149f1c990c17111d8340fcf2e89a5b88
**In the above code, the deadlock is happening at line 67.**
### Expected behavior
Successful model prediction with OpenMP variables for optimised inference.
If we continue with the non-deadlock setting (OMP_NUM_THREADS = 1), inference time increases by ~2x. We set OMP_NUM_THREADS to decrease latency; the best latency is obtained with the optimal number of OMP_NUM_THREADS (set according to the machine, ideally num_physical_cores).
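For reference, a sketch of the non-deadlocking configuration described above (assumed setup, not a fix for the deadlock itself); the environment variable has to be set before the libraries create their thread pools:

```python
import os

# Must be set before importing torch/transformers for OpenMP to honour it.
os.environ["OMP_NUM_THREADS"] = "1"

import torch

# Pin intra-op parallelism to match; avoids the deadlock at the cost of roughly 2x latency.
torch.set_num_threads(int(os.environ["OMP_NUM_THREADS"]))
```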
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22978/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22978/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/22977
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22977/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22977/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22977/events
|
https://github.com/huggingface/transformers/pull/22977
| 1,682,340,805 |
PR_kwDOCUB6oc5PD2GA
| 22,977 |
updated with docker setup
|
{
"login": "soodrohit",
"id": 4641194,
"node_id": "MDQ6VXNlcjQ2NDExOTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/4641194?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/soodrohit",
"html_url": "https://github.com/soodrohit",
"followers_url": "https://api.github.com/users/soodrohit/followers",
"following_url": "https://api.github.com/users/soodrohit/following{/other_user}",
"gists_url": "https://api.github.com/users/soodrohit/gists{/gist_id}",
"starred_url": "https://api.github.com/users/soodrohit/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/soodrohit/subscriptions",
"organizations_url": "https://api.github.com/users/soodrohit/orgs",
"repos_url": "https://api.github.com/users/soodrohit/repos",
"events_url": "https://api.github.com/users/soodrohit/events{/privacy}",
"received_events_url": "https://api.github.com/users/soodrohit/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,682 | 1,682 | 1,682 |
NONE
| null |
added docker files for setting up summary runs as docker images
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22977/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22977/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22977",
"html_url": "https://github.com/huggingface/transformers/pull/22977",
"diff_url": "https://github.com/huggingface/transformers/pull/22977.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22977.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/22976
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22976/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22976/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22976/events
|
https://github.com/huggingface/transformers/pull/22976
| 1,682,333,576 |
PR_kwDOCUB6oc5PD0pr
| 22,976 |
Add dummy_inputs for pytorch_version of vision_models
|
{
"login": "kolonist-minjun",
"id": 130522722,
"node_id": "U_kgDOB8eeYg",
"avatar_url": "https://avatars.githubusercontent.com/u/130522722?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kolonist-minjun",
"html_url": "https://github.com/kolonist-minjun",
"followers_url": "https://api.github.com/users/kolonist-minjun/followers",
"following_url": "https://api.github.com/users/kolonist-minjun/following{/other_user}",
"gists_url": "https://api.github.com/users/kolonist-minjun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kolonist-minjun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kolonist-minjun/subscriptions",
"organizations_url": "https://api.github.com/users/kolonist-minjun/orgs",
"repos_url": "https://api.github.com/users/kolonist-minjun/repos",
"events_url": "https://api.github.com/users/kolonist-minjun/events{/privacy}",
"received_events_url": "https://api.github.com/users/kolonist-minjun/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @kolonist-minjun, thanks for opening this PR! \r\n\r\nThe `dummy_inputs` is a legacy property of the pretrained models and not one we're actively supporting. To use `symbolic_trace`, you can directly pass in the input names: \r\n\r\n```py\r\nfrom transformers import ViTForImageClassification\r\nfrom transformers.utils.fx import symbolic_trace\r\n\r\nmodel = ViTForImageClassification.from_pretrained(\"google/vit-base-patch16-224\")\r\ntraced = symbolic_trace(model, input_names=['pixel_values'])\r\n```",
"Hi @amyeroberts, thanks for the comment!\r\nThe TF version models have dummy_inputs, so I thought it would be good to have them in the PyTorch version models for unification.",
"@kolonist-minjun Yes, it's a bit confusing considering some PyTorch models also have `dummy_inputs` implemented - hopefully once fully deprecated and removed it'll be clearer. We have `dummy_inputs` for the TF models, because Keras models have to be built in order to load pretrained weights. ",
"@amyeroberts Thank you for your comment. I will close this PR!"
] | 1,682 | 1,690 | 1,682 |
NONE
| null |
# What does this PR do?
```
from transformers import ViTForImageClassification
from transformers.utils.fx import symbolic_trace
model = ViTForImageClassification.from_pretrained("google/vit-base-patch16-224")
traced = symbolic_trace(model)
```
```
Traceback (most recent call last):
File "bug_check.py", line 5, in <module>
traced = symbolic_trace(model)
File "/opt/conda/lib/python3.8/site-packages/transformers/utils/fx.py", line 1214, in symbolic_trace
concrete_args = get_concrete_args(model, input_names)
File "/opt/conda/lib/python3.8/site-packages/transformers/utils/fx.py", line 1167, in get_concrete_args
raise ValueError(
ValueError: The model does not have input(s) named: input_ids, expected a subset of the following: pixel_values, head_mask, labels, output_attentions, output_hidden_states, interpolate_pos_encoding, return_dict
```
When using transformers.utils.fx.symbolic_trace, the PyTorch version of the vision models throws an error. This is because the default setting of dummy_inputs is "input_ids", which is fine for text models, but vision models require "pixel_values" as their base input.
Added dummy_inputs to several PyTorch models by referring to the dummy_inputs of the TensorFlow versions. This change fixes the convnext, convnextv2, resnet, segformer, vit, and vit_hybrid models, as sketched below.
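For reference, a rough sketch of what such a property could look like (hypothetical names and shapes, modeled on the TF `dummy_inputs`; the actual implementation in this PR may differ):

```python
import torch
from transformers import ViTConfig


class PixelValuesDummyInputsSketch:
    """Hypothetical illustration of a pixel_values-based dummy_inputs property."""

    def __init__(self, config: ViTConfig):
        self.config = config

    @property
    def dummy_inputs(self):
        # Vision models trace over pixel_values rather than input_ids.
        size = self.config.image_size
        return {"pixel_values": torch.zeros(1, self.config.num_channels, size, size)}
```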
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@amyeroberts
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22976/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22976/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22976",
"html_url": "https://github.com/huggingface/transformers/pull/22976",
"diff_url": "https://github.com/huggingface/transformers/pull/22976.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22976.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/22975
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22975/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22975/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22975/events
|
https://github.com/huggingface/transformers/issues/22975
| 1,682,203,710 |
I_kwDOCUB6oc5kRGQ-
| 22,975 |
Publish instance types best suited to finetune/inference of a popular model
|
{
"login": "i-am-neo",
"id": 102043285,
"node_id": "U_kgDOBhUOlQ",
"avatar_url": "https://avatars.githubusercontent.com/u/102043285?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/i-am-neo",
"html_url": "https://github.com/i-am-neo",
"followers_url": "https://api.github.com/users/i-am-neo/followers",
"following_url": "https://api.github.com/users/i-am-neo/following{/other_user}",
"gists_url": "https://api.github.com/users/i-am-neo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/i-am-neo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/i-am-neo/subscriptions",
"organizations_url": "https://api.github.com/users/i-am-neo/orgs",
"repos_url": "https://api.github.com/users/i-am-neo/repos",
"events_url": "https://api.github.com/users/i-am-neo/events{/privacy}",
"received_events_url": "https://api.github.com/users/i-am-neo/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,682 | 1,685 | 1,685 |
NONE
| null |
### Feature request
It would be very helpful to see a chart of instance types on the large public clouds (AWS, GCP, Oracle) most suitable for popular public LLMs like the Google Flan-T5 family.
This generalizes a page from @philschmid, whose notebooks indicate how he finetuned certain models using certain instances. You could publish the instance types on the model card, broken down by inference and finetuning.
Would this be possible?
### Motivation
To help folks like me to not spin our wheels on trying to locate the most suitable vCPU-GPU combinations. It's a jungle on AWS for sure.
### Your contribution
Happy to help how I can.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22975/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22975/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/22974
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22974/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22974/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22974/events
|
https://github.com/huggingface/transformers/issues/22974
| 1,682,143,669 |
I_kwDOCUB6oc5kQ3m1
| 22,974 |
Error when running MegaForCausalLM example code in Docs
|
{
"login": "Tylersuard",
"id": 41713505,
"node_id": "MDQ6VXNlcjQxNzEzNTA1",
"avatar_url": "https://avatars.githubusercontent.com/u/41713505?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Tylersuard",
"html_url": "https://github.com/Tylersuard",
"followers_url": "https://api.github.com/users/Tylersuard/followers",
"following_url": "https://api.github.com/users/Tylersuard/following{/other_user}",
"gists_url": "https://api.github.com/users/Tylersuard/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Tylersuard/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Tylersuard/subscriptions",
"organizations_url": "https://api.github.com/users/Tylersuard/orgs",
"repos_url": "https://api.github.com/users/Tylersuard/repos",
"events_url": "https://api.github.com/users/Tylersuard/events{/privacy}",
"received_events_url": "https://api.github.com/users/Tylersuard/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false |
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
] |
[
"Hey! Thanks for reporting! This is because the default configuration argument of `bidirectional` is `True`. When setting it to False you reduce the size of the ema matrix. If you still want to use it, ` ignore_mismatched_sizes=True` will help you initialize the model. \r\n",
"Thank you for your response. When I set ignore_mismatched_sizes=True the code works. However, the example code in the docs is still incorrect.",
"@Tylersuard Yep, you're right! Would you like to open a PR to update the docs to get the git contribution for spotting? ",
"@amyeroberts Absolutely!",
"Ok! I just made the PR here. https://github.com/huggingface/transformers/pull/23382"
] | 1,682 | 1,684 | 1,682 |
CONTRIBUTOR
| null |
### System Info
Most recent version of Transformers from GitHub, on Google Colab
### Who can help?
@ArthurZucker @younesbelkada
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
This is the example code from the documentation for MegaForCausalLM (https://huggingface.co/docs/transformers/main/model_doc/mega):
```python
from transformers import AutoTokenizer, MegaForCausalLM, AutoConfig
import torch
tokenizer = AutoTokenizer.from_pretrained("mnaylor/mega-base-wikitext")
config = AutoConfig.from_pretrained("mnaylor/mega-base-wikitext")
config.is_decoder = True
config.bidirectional = False
model = MegaForCausalLM.from_pretrained("mnaylor/mega-base-wikitext", config=config)
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
prediction_logits = outputs.logits
```
After installing Transformers from source, when I run the above code snippet on Colab, I get this error:
RuntimeError: Error(s) in loading state_dict for MegaForCausalLM:
size mismatch for mega.layers.0.mega_layer.ema_gate.damping_factor: copying a param with shape torch.Size([256, 16, 1]) from checkpoint, the shape in current model is torch.Size([128, 16, 1]).
size mismatch for mega.layers.0.mega_layer.ema_gate.decay_factor: copying a param with shape torch.Size([256, 16, 1]) from checkpoint, the shape in current model is torch.Size([128, 16, 1]).
size mismatch for mega.layers.0.mega_layer.ema_gate.ema_expansion_matrix: copying a param with shape torch.Size([256, 16, 1]) from checkpoint, the shape in current model is torch.Size([128, 16, 1]).
size mismatch for mega.layers.0.mega_layer.ema_gate.kernel_projection_matrix: copying a param with shape torch.Size([256, 16]) from checkpoint, the shape in current model is torch.Size([128, 16]).
size mismatch for mega.layers.1.mega_layer.ema_gate.damping_factor: copying a param with shape torch.Size([256, 16, 1]) from checkpoint, the shape in current model is torch.Size([128, 16, 1]).
size mismatch for mega.layers.1.mega_layer.ema_gate.decay_factor: copying a param with shape torch.Size([256, 16, 1]) from checkpoint, the shape in current model is torch.Size([128, 16, 1]).
size mismatch for mega.layers.1.mega_layer.ema_gate.ema_expansion_matrix: copying a param with shape torch.Size([256, 16, 1]) from checkpoint, the shape in current model is torch.Size([128, 16, 1]).
size mismatch for mega.layers.1.mega_layer.ema_gate.kernel_projection_matrix: copying a param with shape torch.Size([256, 16]) from checkpoint, the shape in current model is torch.Size([128, 16]).
size mismatch for mega.layers.2.mega_layer.ema_gate.damping_factor: copying a param with shape torch.Size([256, 16, 1]) from checkpoint, the shape in current model is torch.Size([128, 16, 1]).
size mismatch for mega.layers.2.mega_layer.ema_gate.decay_factor: copying a param with shape torch.Size([256, 16, 1]) from checkpoint, the shape in current model is torch.Size([128, 16, 1]).
size mismatch for mega.layers.2.mega_layer.ema_gate.ema_expansion_matrix: copying a param with shape torch.Size([256, 16, 1]) from checkpoint, the shape in current model is torch.Size([128, 16, 1]).
size mismatch for mega.layers.2.mega_layer.ema_gate.kernel_projection_matrix: copying a param with shape torch.Size([256, 16]) from checkpoint, the shape in current model is torch.Size([128, 16]).
size mismatch for mega.layers.3.mega_layer.ema_gate.damping_factor: copying a param with shape torch.Size([256, 16, 1]) from checkpoint, the shape in current model is torch.Size([128, 16, 1]).
size mismatch for mega.layers.3.mega_layer.ema_gate.decay_factor: copying a param with shape torch.Size([256, 16, 1]) from checkpoint, the shape in current model is torch.Size([128, 16, 1]).
size mismatch for mega.layers.3.mega_layer.ema_gate.ema_expansion_matrix: copying a param with shape torch.Size([256, 16, 1]) from checkpoint, the shape in current model is torch.Size([128, 16, 1]).
size mismatch for mega.layers.3.mega_layer.ema_gate.kernel_projection_matrix: copying a param with shape torch.Size([256, 16]) from checkpoint, the shape in current model is torch.Size([128, 16]).
You may consider adding ignore_mismatched_sizes=True in the model from_pretrained method.
### Expected behavior
The pretrained model would load all weights without error
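For completeness (added note, not part of the original report): the workaround suggested by the error message itself unblocks loading, although the docs example should still be corrected, e.g.:

```python
# Workaround only: reuses `config` and `MegaForCausalLM` from the snippet above.
model = MegaForCausalLM.from_pretrained(
    "mnaylor/mega-base-wikitext", config=config, ignore_mismatched_sizes=True
)
```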
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22974/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22974/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/22973
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22973/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22973/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22973/events
|
https://github.com/huggingface/transformers/pull/22973
| 1,681,955,077 |
PR_kwDOCUB6oc5PCiNd
| 22,973 |
Add Mask R-CNN
|
{
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false |
{
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
}
] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22973). All of your documentation changes will be reflected on that endpoint.",
"@NielsRogge As @sgugger mentions, the PR is still in WIP state. Happy to review once transformers ready :) ",
"I've updated all docstrings and variable names, PR is ready for another review",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Hello @NielsRogge, what is the status of this feature? Thanks in advance",
"Hi, @NielsRogge looking forward to it. Could you, for now, recommend a robust text detector available here to combine with TrOCR. I would like to see how well the two work with the help of HFπ€. "
] | 1,682 | 1,693 | 1,687 |
CONTRIBUTOR
| null |
# What does this PR do?
This PR adds the classic Mask R-CNN framework for object detection and instance segmentation.
To do/to be discussed:
- [ ] where to place utilities like NMS, loss computation, samplers
- [ ] whether to create dummies for torchvision-backed models
- [ ] how to add support for the object detection pipeline - either add `**kwargs` to each `post_process_object_detection` method, or add specific logic for Mask R-CNN inside `object_detection_pipeline.py`
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22973/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22973/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22973",
"html_url": "https://github.com/huggingface/transformers/pull/22973",
"diff_url": "https://github.com/huggingface/transformers/pull/22973.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22973.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/22972
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22972/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22972/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22972/events
|
https://github.com/huggingface/transformers/issues/22972
| 1,681,916,567 |
I_kwDOCUB6oc5kQAKX
| 22,972 |
[i18n-PL] Translating docs to Polish
|
{
"login": "nikos-py",
"id": 115744875,
"node_id": "U_kgDOBuYgaw",
"avatar_url": "https://avatars.githubusercontent.com/u/115744875?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nikos-py",
"html_url": "https://github.com/nikos-py",
"followers_url": "https://api.github.com/users/nikos-py/followers",
"following_url": "https://api.github.com/users/nikos-py/following{/other_user}",
"gists_url": "https://api.github.com/users/nikos-py/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nikos-py/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nikos-py/subscriptions",
"organizations_url": "https://api.github.com/users/nikos-py/orgs",
"repos_url": "https://api.github.com/users/nikos-py/repos",
"events_url": "https://api.github.com/users/nikos-py/events{/privacy}",
"received_events_url": "https://api.github.com/users/nikos-py/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
}
] |
closed
| false | null |
[] |
[] | 1,682 | 1,683 | 1,683 |
NONE
| null |
<!--
Note: Please search to see if an issue already exists for the language you are trying to translate.
-->
Hi!
Let's bring the documentation to all the Polish-speaking community 🌐 (currently 0 out of 267 complete)
Who would want to translate? Please follow the 🤗 [TRANSLATING guide](https://github.com/huggingface/transformers/blob/main/docs/TRANSLATING.md). Here is a list of the files ready for translation. Let us know in this issue if you'd like to translate any, and we'll add your name to the list.
Some notes:
* Please translate using an informal tone (imagine you are talking with a friend about transformers 🤗).
* Please translate in a gender-neutral way.
* Add your translations to the folder called `PL` inside the [source folder](https://github.com/huggingface/transformers/tree/main/docs/source).
* Register your translation in `PL/_toctree.yml`; please follow the order of the [English version](https://github.com/huggingface/transformers/blob/main/docs/source/en/_toctree.yml).
* Once you're finished, open a pull request and tag this issue by including #issue-number in the description, where issue-number is the number of this issue. Please ping @ArthurZucker, @sgugger for review.
* 🙋 If you'd like others to help you with the translation, you can also post in the 🤗 [forums](https://discuss.huggingface.co/).
## Get Started section
- [ ] [index.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/index.mdx) https://github.com/huggingface/transformers/pull/20180
- [ ] [quicktour.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/quicktour.mdx) (waiting for initial PR to go through)
- [ ] [installation.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/installation.mdx).
## Tutorial section
- [ ] [pipeline_tutorial.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/pipeline_tutorial.mdx)
- [ ] [autoclass_tutorial.mdx](https://github.com/huggingface/transformers/blob/master/docs/source/autoclass_tutorial.mdx)
- [ ] [preprocessing.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/preprocessing.mdx)
- [ ] [training.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/training.mdx)
- [ ] [accelerate.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/accelerate.mdx)
- [ ] [model_sharing.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_sharing.mdx)
- [ ] [multilingual.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/multilingual.mdx)
<!--
Keep on adding more as you go 🔥
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22972/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22972/timeline
|
not_planned
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/22971
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22971/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22971/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22971/events
|
https://github.com/huggingface/transformers/issues/22971
| 1,681,794,765 |
I_kwDOCUB6oc5kPibN
| 22,971 |
RuntimeError: CUDA error: device-side assert triggered
|
{
"login": "sauravtii",
"id": 109907638,
"node_id": "U_kgDOBo0Otg",
"avatar_url": "https://avatars.githubusercontent.com/u/109907638?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sauravtii",
"html_url": "https://github.com/sauravtii",
"followers_url": "https://api.github.com/users/sauravtii/followers",
"following_url": "https://api.github.com/users/sauravtii/following{/other_user}",
"gists_url": "https://api.github.com/users/sauravtii/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sauravtii/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sauravtii/subscriptions",
"organizations_url": "https://api.github.com/users/sauravtii/orgs",
"repos_url": "https://api.github.com/users/sauravtii/repos",
"events_url": "https://api.github.com/users/sauravtii/events{/privacy}",
"received_events_url": "https://api.github.com/users/sauravtii/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"This usually means there is bad indexing somewhere in your code. You should really bring up the issue with the persons who wrote the tutorials as it looks like there is a bug in their code. You can try to run the code on the CPU to see where the error stems from, or post us a minimal reproducer that doesn't use third-party libraries.",
"But `client_1.py` works well and there isn't much difference in both of their code, only the dataset is different.",
"I ran it on a CPU rather than a GPU and have got some more information about the error:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"client_2.py\", line 140, in <module>\r\n main()\r\n File \"client_2.py\", line 136, in main\r\n fl.client.start_numpy_client(server_address=\"localhost:5040\", client=IMDBClient())\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/flwr/client/app.py\", line 208, in start_numpy_client\r\n start_client(\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/flwr/client/app.py\", line 142, in start_client\r\n client_message, sleep_duration, keep_going = handle(\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/flwr/client/grpc_client/message_handler.py\", line 70, in handle\r\n return _evaluate(client, server_msg.evaluate_ins), 0, True\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/flwr/client/grpc_client/message_handler.py\", line 182, in _evaluate\r\n evaluate_res = client.evaluate(evaluate_ins)\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/flwr/client/app.py\", line 276, in _evaluate\r\n results = self.numpy_client.evaluate(parameters, ins.config) # type: ignore\r\n File \"client_2.py\", line 131, in evaluate\r\n loss, accuracy = test(net, testloader)\r\n File \"client_2.py\", line 95, in test\r\n outputs = net(**batch)\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/torch/nn/modules/module.py\", line 1501, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/transformers/models/distilbert/modeling_distilbert.py\", line 763, in forward\r\n loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1))\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/torch/nn/modules/module.py\", line 1501, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/torch/nn/modules/loss.py\", line 1174, in forward\r\n return F.cross_entropy(input, target, weight=self.weight,\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/torch/nn/functional.py\", line 3029, in cross_entropy\r\n return torch._C._nn.cross_entropy_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index, label_smoothing)\r\nIndexError: Target -1 is out of bounds.\r\n```",
"This probably means some of your labels are `-1` which is not a valid label. If you were attempting to put a fake label to pad a batch, -100 is the value for this in PyTorch.",
"I am using this dataset - https://huggingface.co/datasets/sst2 and I noticed that the test set has -1 value. How do I remove it?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"use tokenized_datasets[\"validation\"] rather than tokenized_datasets[\"test\"] "
] | 1,682 | 1,702 | 1,685 |
NONE
| null |
### System Info
- `transformers` version: 4.11.3
- Platform: Linux-5.4.0-58-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyTorch version (GPU?): 2.0.0+cu117 (True)
- Tensorflow version (GPU?): 2.10.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@ArthurZucker @younesbelkada @sgugger
I am trying out this [tutorial](https://flower.dev/docs/quickstart-huggingface.html) with 2 clients. The first client uses the same dataset as the tutorial (IMDB), while for the 2nd client I am using [this](https://huggingface.co/datasets/sst2) dataset. When I run `server.py`, `client_1.py`, and `client_2.py` in separate terminals, the following error occurs while running `client_2.py`. I have also attached all the files below.
Error:
```
Traceback (most recent call last):
File "client_2.py", line 140, in <module>
main()
File "client_2.py", line 136, in main
fl.client.start_numpy_client(server_address="localhost:5040", client=IMDBClient())
File "/home/saurav/.local/lib/python3.8/site-packages/flwr/client/app.py", line 208, in start_numpy_client
start_client(
File "/home/saurav/.local/lib/python3.8/site-packages/flwr/client/app.py", line 142, in start_client
client_message, sleep_duration, keep_going = handle(
File "/home/saurav/.local/lib/python3.8/site-packages/flwr/client/grpc_client/message_handler.py", line 70, in handle
return _evaluate(client, server_msg.evaluate_ins), 0, True
File "/home/saurav/.local/lib/python3.8/site-packages/flwr/client/grpc_client/message_handler.py", line 182, in _evaluate
evaluate_res = client.evaluate(evaluate_ins)
File "/home/saurav/.local/lib/python3.8/site-packages/flwr/client/app.py", line 276, in _evaluate
results = self.numpy_client.evaluate(parameters, ins.config) # type: ignore
File "client_2.py", line 131, in evaluate
loss, accuracy = test(net, testloader)
File "client_2.py", line 97, in test
loss += outputs.loss.item()
RuntimeError: CUDA error: device-side assert triggered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
```
`client_1.py` file:
```
from collections import OrderedDict
import warnings
import flwr as fl
import torch
import numpy as np
import random
from torch.utils.data import DataLoader
from datasets import load_dataset, load_metric
from transformers import AutoTokenizer, DataCollatorWithPadding
from transformers import AutoModelForSequenceClassification
from transformers import AdamW
# import os
# os.environ['CUDA_VISIBLE_DEVICES'] = '0,1'
warnings.filterwarnings("ignore", category=UserWarning)
# DEVICE = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
DEVICE = "cuda:0"
CHECKPOINT = "distilbert-base-uncased" # transformer model checkpoint
def load_data():
"""Load IMDB data (training and eval)"""
raw_datasets = load_dataset("imdb")
raw_datasets = raw_datasets.shuffle(seed=42)
# remove unnecessary data split
del raw_datasets["unsupervised"]
tokenizer = AutoTokenizer.from_pretrained(CHECKPOINT)
def tokenize_function(examples):
return tokenizer(examples["text"], truncation=True)
# random 100 samples
# population = random.sample(range(len(raw_datasets["train"])), 100)
tokenized_datasets = raw_datasets.map(tokenize_function, batched=True)
# tokenized_datasets["train"] = tokenized_datasets["train"].select(population)
# tokenized_datasets["test"] = tokenized_datasets["test"].select(population)
tokenized_datasets = tokenized_datasets.remove_columns("text")
tokenized_datasets = tokenized_datasets.rename_column("label", "labels")
data_collator = DataCollatorWithPadding(tokenizer=tokenizer)
trainloader = DataLoader(
tokenized_datasets["train"],
shuffle=True,
batch_size=32,
collate_fn=data_collator,
)
testloader = DataLoader(
tokenized_datasets["test"], batch_size=32, collate_fn=data_collator
)
return trainloader, testloader
def train(net, trainloader, epochs):
optimizer = AdamW(net.parameters(), lr=5e-5)
net.train()
for i in range(epochs):
print("Epoch: ", i+1)
j = 1
for batch in trainloader:
print("####################### The batch number is: ", j)
batch = {k: v.to(DEVICE) for k, v in batch.items()}
outputs = net(**batch)
loss = outputs.loss
loss.backward()
optimizer.step()
optimizer.zero_grad()
j += 1
def test(net, testloader):
metric = load_metric("accuracy")
loss = 0
net.eval()
for batch in testloader:
batch = {k: v.to(DEVICE) for k, v in batch.items()}
with torch.no_grad():
outputs = net(**batch)
logits = outputs.logits
loss += outputs.loss.item()
predictions = torch.argmax(logits, dim=-1)
metric.add_batch(predictions=predictions, references=batch["labels"])
loss /= len(testloader.dataset)
accuracy = metric.compute()["accuracy"]
return loss, accuracy
def main():
net = AutoModelForSequenceClassification.from_pretrained(
CHECKPOINT, num_labels=2
).to(DEVICE)
trainloader, testloader = load_data()
# Flower client
class IMDBClient(fl.client.NumPyClient):
def get_parameters(self, config):
return [val.cpu().numpy() for _, val in net.state_dict().items()]
def set_parameters(self, parameters):
params_dict = zip(net.state_dict().keys(), parameters)
state_dict = OrderedDict({k: torch.Tensor(v) for k, v in params_dict})
net.load_state_dict(state_dict, strict=True)
def fit(self, parameters, config):
self.set_parameters(parameters)
print("Training Started...")
train(net, trainloader, epochs=1)
print("Training Finished.")
return self.get_parameters(config={}), len(trainloader), {}
def evaluate(self, parameters, config):
self.set_parameters(parameters)
loss, accuracy = test(net, testloader)
print({"loss": float(loss), "accuracy": float(accuracy)})
return float(loss), len(testloader), {"loss": float(loss), "accuracy": float(accuracy)}
# Start client
fl.client.start_numpy_client(server_address="localhost:5040", client=IMDBClient())
if __name__ == "__main__":
main()
```
`client_2.py` file:
```
from collections import OrderedDict
import warnings
import flwr as fl
import torch
import numpy as np
import random
from torch.utils.data import DataLoader
from datasets import load_dataset, load_metric
from transformers import AutoTokenizer, DataCollatorWithPadding
from transformers import AutoModelForSequenceClassification
from transformers import AdamW
#from transformers import tokenized_datasets
# import os
# os.environ['CUDA_VISIBLE_DEVICES'] = '2,3'
warnings.filterwarnings("ignore", category=UserWarning)
# DEVICE = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
DEVICE = "cuda:1"
CHECKPOINT = "distilbert-base-uncased" # transformer model checkpoint
def load_data():
"""Load IMDB data (training and eval)"""
raw_datasets = load_dataset("sst2")
# raw_datasets = load_dataset("yhavinga/imdb_dutch")
raw_datasets = raw_datasets.shuffle(seed=42)
# remove unnecessary data split
del raw_datasets["validation"]
# del raw_datasets["unsupervised"]
tokenizer = AutoTokenizer.from_pretrained(CHECKPOINT)
def tokenize_function(examples):
return tokenizer(examples["sentence"], truncation=True)
# random 100 samples
# population = random.sample(range(len(raw_datasets["train"])), 100)
tokenized_datasets = raw_datasets.map(tokenize_function, batched=True)
# tokenized_datasets["train"] = tokenized_datasets["train"].select(population)
# tokenized_datasets["test"] = tokenized_datasets["test"].select(population)
tokenized_datasets = tokenized_datasets.rename_column("label", "labels")
tokenized_datasets = tokenized_datasets.remove_columns(["idx", "sentence"])
data_collator = DataCollatorWithPadding(tokenizer=tokenizer)
trainloader = DataLoader(
tokenized_datasets["train"],
shuffle=True,
batch_size=32,
collate_fn=data_collator,
)
testloader = DataLoader(
tokenized_datasets["test"], batch_size=32, collate_fn=data_collator
)
return trainloader, testloader
def train(net, trainloader, epochs):
optimizer = AdamW(net.parameters(), lr=5e-4)
net.train()
for i in range(epochs):
print("Epoch: ", i+1)
j = 1
# print("####################### The length of the trainloader is: ", len(trainloader))
for batch in trainloader:
print("####################### The batch number is: ", j)
batch = {k: v.to(DEVICE) for k, v in batch.items()}
outputs = net(**batch)
loss = outputs.loss
loss.backward()
optimizer.step()
optimizer.zero_grad()
j += 1
def test(net, testloader):
metric = load_metric("accuracy")
loss = 0
net.eval()
for batch in testloader:
batch = {k: v.to(DEVICE) for k, v in batch.items()}
with torch.no_grad():
outputs = net(**batch)
logits = outputs.logits
loss += outputs.loss.item()
predictions = torch.argmax(logits, dim=-1)
metric.add_batch(predictions=predictions, references=batch["labels"])
loss /= len(testloader.dataset)
accuracy = metric.compute()["accuracy"]
return loss, accuracy
def main():
net = AutoModelForSequenceClassification.from_pretrained(
CHECKPOINT, num_labels=2
).to(DEVICE)
trainloader, testloader = load_data()
# Flower client
class IMDBClient(fl.client.NumPyClient):
def get_parameters(self, config):
return [val.cpu().numpy() for _, val in net.state_dict().items()]
def set_parameters(self, parameters):
params_dict = zip(net.state_dict().keys(), parameters)
state_dict = OrderedDict({k: torch.Tensor(v) for k, v in params_dict})
net.load_state_dict(state_dict, strict=True)
def fit(self, parameters, config):
self.set_parameters(parameters)
print("Training Started...")
train(net, trainloader, epochs=1)
print("Training Finished.")
return self.get_parameters(config={}), len(trainloader), {}
def evaluate(self, parameters, config):
self.set_parameters(parameters)
loss, accuracy = test(net, testloader)
print({"loss": float(loss), "accuracy": float(accuracy)})
return float(loss), len(testloader), {"loss": float(loss), "accuracy": float(accuracy)}
# Start client
fl.client.start_numpy_client(server_address="localhost:5040", client=IMDBClient())
if __name__ == "__main__":
main()
```
`server.py` file:
```
import flwr as fl
import torch
from collections import OrderedDict
from logging import WARNING
from flwr.common import (
ndarrays_to_parameters,
parameters_to_ndarrays,
)
from transformers import AutoModelForSequenceClassification
from transformers import AutoTokenizer
from flwr.server.strategy.aggregate import aggregate
from flwr.common.logger import log
from transformers import pipeline
CHECKPOINT = "distilbert-base-uncased" # transformer model checkpoint
DEVICE = torch.device("cpu")
# DEVICE = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
class Strategy(fl.server.strategy.FedAvg):
def aggregate_fit(
self,
server_round,
results,
failures,
):
if not results:
return None, {}
# Do not aggregate if there are failures and failures are not accepted
if not self.accept_failures and failures:
return None, {}
# Convert results
weights_results = [
(parameters_to_ndarrays(fit_res.parameters), fit_res.num_examples)
for _, fit_res in results
]
self.aggr_weights = aggregate(weights_results)
parameters_aggregated = ndarrays_to_parameters(self.aggr_weights)
# Aggregate custom metrics if aggregation fn was provided
metrics_aggregated = {}
if self.fit_metrics_aggregation_fn:
fit_metrics = [(res.num_examples, res.metrics) for _, res in results]
metrics_aggregated = self.fit_metrics_aggregation_fn(fit_metrics)
elif server_round == 1: # Only log this warning once
log(WARNING, "No fit_metrics_aggregation_fn provided")
return parameters_aggregated, metrics_aggregated
if __name__ == "__main__":
def weighted_average(metrics):
accuracies = [num_examples * m["accuracy"] for num_examples, m in metrics]
losses = [num_examples * m["loss"] for num_examples, m in metrics]
examples = [num_examples for num_examples, _ in metrics]
accuracy = sum(accuracies) / sum(examples)
loss = sum(losses) / sum(examples)
print("Accuracy: ", accuracy)
print("Loss: ", loss)
return {"accuracy": accuracy, "loss": loss}
net = AutoModelForSequenceClassification.from_pretrained(
CHECKPOINT, num_labels=2
).to(DEVICE)
# Define strategy
strategy = Strategy(
min_fit_clients=1,
min_evaluate_clients=1,
min_available_clients=1,
fraction_fit=1.0,
fraction_evaluate=1.0,
evaluate_metrics_aggregation_fn=weighted_average,
)
# Start server
fl.server.start_server(
server_address="localhost:5040",
config=fl.server.ServerConfig(num_rounds=2),
strategy=strategy,
)
params_dict = zip(net.state_dict().keys(), strategy.aggr_weights)
state_dict = OrderedDict({k: torch.Tensor(v) for k, v in params_dict})
net.load_state_dict(state_dict)
classifier = pipeline("sentiment-analysis", model=net, tokenizer=AutoTokenizer.from_pretrained(CHECKPOINT))
positive_1 = "That was amazing!!!"
negative_1 = "I feel so sad about this..."
positive_2 = "I liked it!!"
negative_2 = "I hated it!!"
print(positive_1, classifier(positive_1))
print(negative_1, classifier(negative_1))
print(positive_2, classifier(positive_2))
print(negative_2, classifier(negative_2))
# Dutch inference
dutch_pos_1 = "ik vond de film leuk"
dutch_neg_1 = "Ik haatte de film"
print(dutch_pos_1, classifier(dutch_pos_1))
print(dutch_neg_1, classifier(dutch_neg_1))
torch.save(net.state_dict(), "/home/saurav/quickstart_huggingface/server_model.pt")
```
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Run the above provided files in 3 separate terminal windows.
### Expected behavior
The training should happen without causing an error.
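Judging from the comments on this issue, the sst2 `test` split is unlabeled (every label is -1), which is what trips the loss computation; a quick sketch to confirm this, assuming that diagnosis is correct:
```
from datasets import load_dataset

raw_datasets = load_dataset("sst2")
print(set(raw_datasets["test"]["label"]))        # {-1}  -> unlabeled split, breaks the loss
print(set(raw_datasets["validation"]["label"]))  # {0, 1} -> labeled split, usable for evaluation
```
In `load_data()` of `client_2.py`, this would mean dropping the `test` split instead of `validation` and building `testloader` from `tokenized_datasets["validation"]`.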
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22971/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22971/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/22970
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22970/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22970/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22970/events
|
https://github.com/huggingface/transformers/pull/22970
| 1,681,739,264 |
PR_kwDOCUB6oc5PBzVT
| 22,970 |
TF port of the Segment Anything Model (SAM)
|
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"This is now almost ready to go and the code should be ready for review! Remaining issues:\r\n\r\n- I added a `tol` parameter to the TF-PT equivalence test - 1e-5 is too low for SAM (errors are more like 1e-4, but I used 5e-4 in the test to avoid flakiness). This will require a couple of minor tweaks in other models that are calling that test.\r\n- Cleanup/refactor in the processor, there's probably some code duplication that I can remove.",
"Thanks for the review - about half of the comments relate to the processor code, which is definitely in need of a refactor, yes. Working on that now!",
"@amyeroberts @sgugger I refactored all the changes to the common tests, and just overrode `check_pt_tf_outputs` to change the `tol` in the tests instead - this is much cleaner and resolves most of the issues there. I also refactored the processor, removing the duplicated files and merging methods where appropriate. I think all comments have now been addressed!",
"@gante I think all comments are now addressed, and I added `training` wherever it touched a layer that had training-specific behaviour (which is literally one dropout call)\r\n\r\nAll comments from @amyeroberts and @sgugger should be addressed too - are you okay with going ahead and merging now once tests pass?",
"I think comments are addressed now - are we okay to merge?",
"I'm treating silence as agreement, merging!"
] | 1,682 | 1,684 | 1,684 |
MEMBER
| null |
This is a first draft of the SAM port - will update this PR as I port tests and make sure everything is working okay. It's also a first proof-of-concept for full GPT-4 auto-translation from PyTorch: The entire `modeling_tf_sam.py` file was converted from PyTorch by GPT-4 with the exception of the imports at the top, because I haven't written a prompt for those yet.
Update: I checked over all of the code and fixed the issues in the GPT port. Equivalence tests all look good! This is almost ready to merge, but there are a few small issues left:
- [x] Get saved model creation working and re-enable tests (problem with the serving signature)
- [x] Check for duplication in the processor files - I can probably refactor and simplify things a bit
- [x] Refactor convolutions - `channels_first` doesn't actually work on CPU in TF
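On the last point, a rough sketch of the usual workaround (not the actual refactor in this PR): transpose to `channels_last` around the convolution so it also runs on CPU.
```
import tensorflow as tf

conv = tf.keras.layers.Conv2D(filters=256, kernel_size=1, data_format="channels_last")

def conv_nchw_cpu_safe(x_nchw):
    x_nhwc = tf.transpose(x_nchw, perm=[0, 2, 3, 1])  # NCHW -> NHWC
    y_nhwc = conv(x_nhwc)                             # channels_last conv works on CPU
    return tf.transpose(y_nhwc, perm=[0, 3, 1, 2])    # NHWC -> NCHW
```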
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22970/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22970/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22970",
"html_url": "https://github.com/huggingface/transformers/pull/22970",
"diff_url": "https://github.com/huggingface/transformers/pull/22970.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22970.patch",
"merged_at": 1684502054000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22969
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22969/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22969/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22969/events
|
https://github.com/huggingface/transformers/issues/22969
| 1,681,709,432 |
I_kwDOCUB6oc5kPNl4
| 22,969 |
Many places are type-annotated as 1-tuple when should be arbitrary length tuple
|
{
"login": "anentropic",
"id": 147840,
"node_id": "MDQ6VXNlcjE0Nzg0MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/147840?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/anentropic",
"html_url": "https://github.com/anentropic",
"followers_url": "https://api.github.com/users/anentropic/followers",
"following_url": "https://api.github.com/users/anentropic/following{/other_user}",
"gists_url": "https://api.github.com/users/anentropic/gists{/gist_id}",
"starred_url": "https://api.github.com/users/anentropic/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/anentropic/subscriptions",
"organizations_url": "https://api.github.com/users/anentropic/orgs",
"repos_url": "https://api.github.com/users/anentropic/repos",
"events_url": "https://api.github.com/users/anentropic/events{/privacy}",
"received_events_url": "https://api.github.com/users/anentropic/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The type annotations should only be read as a doc helper. They are not exact, and will never be checked by a type checker as Python is not a statically typed language anyway. When we have to decide between complexity of the type annotation and ease of the user/readability, we always pick the later.",
"I mentioned because it confused me, since they currently describe something different from what is returned",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,682 | 1,685 | 1,685 |
NONE
| null |
### System Info
n/a
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I've seen this in a couple of disparate places so I guess the problem is endemic.
Two examples I can point to are:
- https://github.com/huggingface/transformers/blob/main/src/transformers/modeling_outputs.py#L46
the docstring says:
> _"Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer)"_
but it is annotated as: `hidden_states: Optional[Tuple[torch.FloatTensor]]`
- https://github.com/huggingface/transformers/blob/main/src/transformers/models/bert/modeling_bert.py#L283
annotated as returning `-> Tuple[torch.Tensor]` but the tuple has a varying number of elements:
```python
outputs = (context_layer, attention_probs) if output_attentions else (context_layer,)
if self.is_decoder:
outputs = outputs + (past_key_value,)
return outputs
```
In both cases there are many examples throughout those files.
### Expected behavior
Unlike `list` and `dict` etc, typed tuples have a specific size.
https://mypy.readthedocs.io/en/stable/cheat_sheet_py3.html#useful-built-in-types
the annotation `Tuple[torch.Tensor]` means a tuple with a _single element_ of that type.
for a tuple of varying size it should be annotated `Tuple[torch.Tensor, ...]`
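A small standalone illustration of the difference (not taken from the transformers codebase):
```
from typing import Tuple

import torch

single: Tuple[torch.Tensor] = (torch.zeros(2),)  # exactly one tensor

# arbitrary number of tensors, which is what the docstrings actually describe
variadic: Tuple[torch.Tensor, ...] = (torch.zeros(2), torch.zeros(3), torch.zeros(4))
```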
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22969/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22969/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/22968
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22968/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22968/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22968/events
|
https://github.com/huggingface/transformers/issues/22968
| 1,681,616,791 |
I_kwDOCUB6oc5kO2-X
| 22,968 |
Change the probability of generation of certain tokens
|
{
"login": "Oxi84",
"id": 25420033,
"node_id": "MDQ6VXNlcjI1NDIwMDMz",
"avatar_url": "https://avatars.githubusercontent.com/u/25420033?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Oxi84",
"html_url": "https://github.com/Oxi84",
"followers_url": "https://api.github.com/users/Oxi84/followers",
"following_url": "https://api.github.com/users/Oxi84/following{/other_user}",
"gists_url": "https://api.github.com/users/Oxi84/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Oxi84/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Oxi84/subscriptions",
"organizations_url": "https://api.github.com/users/Oxi84/orgs",
"repos_url": "https://api.github.com/users/Oxi84/repos",
"events_url": "https://api.github.com/users/Oxi84/events{/privacy}",
"received_events_url": "https://api.github.com/users/Oxi84/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @gante",
"Hi, @Oxi84! I agree with you on the increased flexibility and the possible implementation path. I did something like that for the feature request [here](https://github.com/huggingface/transformers/issues/22168#issue-1624460056). Feel free to react to @gante's [comment](https://github.com/huggingface/transformers/issues/22168#issuecomment-1477998997) if you find it useful",
"Hi @Oxi84 π @iiglesias-asapp said it all, see the links shared :) \r\n\r\n(by the looks of it, it seems the feature will have enough supporters soon!)",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,682 | 1,685 | 1,685 |
NONE
| null |
### Feature request
OpenAI has a feature that allows you to provide a list of token ids together with how much you want to increase or decrease the probability of generating those tokens.
For example, if we have id = 112 with logit = 23.72, we make logit = 23.72 + custom_value. The custom value can be positive or negative.
This would be a very useful feature to have; I read the generation documentation and I am pretty sure it is not implemented yet.
### Motivation
It is much more flexible than, for example, the option to simply remove a list of words from generation entirely. It is also useful for increasing diversity, so that input tokens differ from output tokens, even though this can be partially done with encoder_repetition_penalty if I am not wrong. But here you can choose exact words, and sometimes you need to lower or increase the probability of generation for just certain words.
### Your contribution
It should not be too difficult to implement (even though there are always some problems): in the function that removes certain words from generation entirely, instead of assigning a logit of -inf, you replace the current logit with the_current_logit + the_custom_value, where the custom value is specified by the user.
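For reference, something close to this can already be sketched with a custom `LogitsProcessor`; the class name and `token_bias` parameter below are made up for illustration, not an existing transformers API:
```
from transformers import AutoModelForCausalLM, AutoTokenizer, LogitsProcessor, LogitsProcessorList


class TokenBiasLogitsProcessor(LogitsProcessor):
    """Adds a user-specified value to the logits of chosen token ids."""

    def __init__(self, token_bias):
        self.token_bias = token_bias  # e.g. {112: 5.0, 2013: -3.0}

    def __call__(self, input_ids, scores):
        for token_id, bias in self.token_bias.items():
            scores[:, token_id] = scores[:, token_id] + bias
        return scores


tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
inputs = tokenizer("The weather today is", return_tensors="pt")
outputs = model.generate(
    **inputs,
    logits_processor=LogitsProcessorList([TokenBiasLogitsProcessor({tokenizer.eos_token_id: -10.0})]),
    max_new_tokens=20,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```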
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22968/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22968/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/22967
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22967/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22967/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22967/events
|
https://github.com/huggingface/transformers/pull/22967
| 1,681,560,307 |
PR_kwDOCUB6oc5PBMy0
| 22,967 |
Fix `DeepSpeed` CI job link in Past CI
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,682 | 1,682 | 1,682 |
COLLABORATOR
| null |
# What does this PR do?
Fix missing `DeepSpeed` CI job link in Past CI. See comment in the change.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22967/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22967/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22967",
"html_url": "https://github.com/huggingface/transformers/pull/22967",
"diff_url": "https://github.com/huggingface/transformers/pull/22967.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22967.patch",
"merged_at": 1682409139000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22966
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22966/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22966/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22966/events
|
https://github.com/huggingface/transformers/pull/22966
| 1,681,527,859 |
PR_kwDOCUB6oc5PBFug
| 22,966 |
fix ValueError message in LlamaAttention
|
{
"login": "othertea",
"id": 124535597,
"node_id": "U_kgDOB2xDLQ",
"avatar_url": "https://avatars.githubusercontent.com/u/124535597?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/othertea",
"html_url": "https://github.com/othertea",
"followers_url": "https://api.github.com/users/othertea/followers",
"following_url": "https://api.github.com/users/othertea/following{/other_user}",
"gists_url": "https://api.github.com/users/othertea/gists{/gist_id}",
"starred_url": "https://api.github.com/users/othertea/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/othertea/subscriptions",
"organizations_url": "https://api.github.com/users/othertea/orgs",
"repos_url": "https://api.github.com/users/othertea/repos",
"events_url": "https://api.github.com/users/othertea/events{/privacy}",
"received_events_url": "https://api.github.com/users/othertea/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,682 | 1,682 | 1,682 |
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #22941
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22966/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22966/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22966",
"html_url": "https://github.com/huggingface/transformers/pull/22966",
"diff_url": "https://github.com/huggingface/transformers/pull/22966.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22966.patch",
"merged_at": 1682352125000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22965
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22965/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22965/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22965/events
|
https://github.com/huggingface/transformers/pull/22965
| 1,681,523,289 |
PR_kwDOCUB6oc5PBEvO
| 22,965 |
🌐 [i18n-KO] Fixed `tasks/masked_language_modeling.mdx`
|
{
"login": "HanNayeoniee",
"id": 33839093,
"node_id": "MDQ6VXNlcjMzODM5MDkz",
"avatar_url": "https://avatars.githubusercontent.com/u/33839093?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HanNayeoniee",
"html_url": "https://github.com/HanNayeoniee",
"followers_url": "https://api.github.com/users/HanNayeoniee/followers",
"following_url": "https://api.github.com/users/HanNayeoniee/following{/other_user}",
"gists_url": "https://api.github.com/users/HanNayeoniee/gists{/gist_id}",
"starred_url": "https://api.github.com/users/HanNayeoniee/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HanNayeoniee/subscriptions",
"organizations_url": "https://api.github.com/users/HanNayeoniee/orgs",
"repos_url": "https://api.github.com/users/HanNayeoniee/repos",
"events_url": "https://api.github.com/users/HanNayeoniee/events{/privacy}",
"received_events_url": "https://api.github.com/users/HanNayeoniee/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"LGTM. If you may, please:\n\n- **amend your commit message to be more descriptive than just `fixed`.**\n (e.g. `fix: docs: missing newline before code block`)\n- (Optional) provide relevant screenshots of your fixes",
"> LGTM. If you may, please:\r\n> \r\n> * **amend your commit message to be more descriptive than just `fixed`.**\r\n> (e.g. `fix: docs: missing newline before code block`)\r\n> * (Optional) provide relevant screenshots of your fixes\r\n\r\nThanks for your review!\r\nI amended the last commit message and added a screenshot in my first comment!"
] | 1,682 | 1,682 | 1,682 |
CONTRIBUTOR
| null |
# What does this PR do?
Fixes #22838

## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Team PseudoLab, may you please review this PR?
@0525hhgus, @KIHOON71, @sim-so, @gabrielwithappy, @HanNayeoniee, @wonhyeongseo, @jungnerd
May you please review this PR?
@sgugger, @ArthurZucker, @eunseojo
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22965/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22965/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22965",
"html_url": "https://github.com/huggingface/transformers/pull/22965",
"diff_url": "https://github.com/huggingface/transformers/pull/22965.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22965.patch",
"merged_at": 1682409557000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22964
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22964/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22964/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22964/events
|
https://github.com/huggingface/transformers/pull/22964
| 1,681,509,913 |
PR_kwDOCUB6oc5PBB0n
| 22,964 |
[WIP] Testing safetensors==0.3.1rc1
|
{
"login": "Narsil",
"id": 204321,
"node_id": "MDQ6VXNlcjIwNDMyMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Narsil",
"html_url": "https://github.com/Narsil",
"followers_url": "https://api.github.com/users/Narsil/followers",
"following_url": "https://api.github.com/users/Narsil/following{/other_user}",
"gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Narsil/subscriptions",
"organizations_url": "https://api.github.com/users/Narsil/orgs",
"repos_url": "https://api.github.com/users/Narsil/repos",
"events_url": "https://api.github.com/users/Narsil/events{/privacy}",
"received_events_url": "https://api.github.com/users/Narsil/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Done."
] | 1,682 | 1,682 | 1,682 |
CONTRIBUTOR
| null |
# What does this PR do?
Testing
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22964/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22964/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22964",
"html_url": "https://github.com/huggingface/transformers/pull/22964",
"diff_url": "https://github.com/huggingface/transformers/pull/22964.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22964.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/22963
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22963/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22963/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22963/events
|
https://github.com/huggingface/transformers/pull/22963
| 1,681,507,870 |
PR_kwDOCUB6oc5PBBXy
| 22,963 |
Install `accelerate@main` in PyTorch Past CI jobs.
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,682 | 1,682 | 1,682 |
COLLABORATOR
| null |
# What does this PR do?
Install `accelerate@main` in PyTorch Past CI jobs.
### Context
In #22393, we added back `deepspeed` to the Past CI docker image. Later in #22859, we decided to use `accelerate@main`, but I forgot to apply the same change to the Past CI docker file, as I mistakenly thought Past CI doesn't use `accelerate`.
- However, `[deepspeed-testing]` (installed in the Past CI docker image) includes `accelerate`, and we want it to be `accelerate@main`.
- We can't include `accelerate` in the docker image, as it will break something for the TF Past CI
- there was a remark: `accelerate requires torch, and this causes import issues for TF-only testing`
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22963/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22963/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22963",
"html_url": "https://github.com/huggingface/transformers/pull/22963",
"diff_url": "https://github.com/huggingface/transformers/pull/22963.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22963.patch",
"merged_at": 1682363946000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22962
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22962/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22962/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22962/events
|
https://github.com/huggingface/transformers/issues/22962
| 1,681,388,438 |
I_kwDOCUB6oc5kN_OW
| 22,962 |
Failed to convert 65B llama to hf weights
|
{
"login": "lijiazheng99",
"id": 44396506,
"node_id": "MDQ6VXNlcjQ0Mzk2NTA2",
"avatar_url": "https://avatars.githubusercontent.com/u/44396506?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lijiazheng99",
"html_url": "https://github.com/lijiazheng99",
"followers_url": "https://api.github.com/users/lijiazheng99/followers",
"following_url": "https://api.github.com/users/lijiazheng99/following{/other_user}",
"gists_url": "https://api.github.com/users/lijiazheng99/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lijiazheng99/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lijiazheng99/subscriptions",
"organizations_url": "https://api.github.com/users/lijiazheng99/orgs",
"repos_url": "https://api.github.com/users/lijiazheng99/repos",
"events_url": "https://api.github.com/users/lijiazheng99/events{/privacy}",
"received_events_url": "https://api.github.com/users/lijiazheng99/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The error comes directly from `torch.save`, so we can't really help on our side. I have never seen it either :-/",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,682 | 1,685 | 1,685 |
NONE
| null |
### System Info
- `transformers` version: 4.28.0.dev0
- Platform: Linux-5.15.0-69-generic-x86_64-with-glibc2.31
- Python version: 3.9.16
- Huggingface_hub version: 0.13.3
- Safetensors version: not installed
- PyTorch version (GPU?): 2.0.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Tried to execute this command to convert the 65B llama weights to hf version.
```bash
python src/transformers/models/llama/convert_llama_weights_to_hf.py --input_dir /directory_contains_a_65B_weights_folder/
--model_size 65B --output_dir /target_directory/65B/
```
I got a RuntimeError during the execution. The weights were loaded successfully, but saving failed. I found a similar error message [here](https://discuss.huggingface.co/t/torch-save-with-hugging-face-models-fails/25034), but there's no answer there. I have checked my disk, and it should have enough space to save the model (223 GB available).
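For reference, a minimal, hedged sketch of how the free-space claim above can be double-checked before re-running the conversion (the target path is a placeholder, not the actual directory used):
```python
import shutil

# Check free space on the drive that will hold the converted checkpoint.
total, used, free = shutil.disk_usage("/target_directory/65B/")
print(f"free: {free / 1e9:.0f} GB")  # the 65B fp16 shards alone need well over 120 GB
```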
```
Fetching all parameters from the checkpoint at /scratch/users/xxxxx/65B.
Loading the checkpoint in a Llama model.
Loading checkpoint shards: 100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 81/81 [03:52<00:00, 2.88s/it]
Saving in the Transformers format.
Traceback (most recent call last):
File "/users/xxxxx/anaconda3/envs/llama/lib/python3.9/site-packages/torch/serialization.py", line 441, in save
_save(obj, opened_zipfile, pickle_module, pickle_protocol)
File "/users/xxxxx/anaconda3/envs/llama/lib/python3.9/site-packages/torch/serialization.py", line 668, in _save
zip_file.write_record(name, storage.data_ptr(), num_bytes)
RuntimeError: [enforce fail at inline_container.cc:471] . PytorchStreamWriter failed writing file data/59: file write failed
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/users/xxxxx/transformers/src/transformers/models/llama/convert_llama_weights_to_hf.py", line 279, in <module>
main()
File "/users/xxxxx/transformers/src/transformers/models/llama/convert_llama_weights_to_hf.py", line 267, in main
write_model(
File "/users/xxxxx/transformers/src/transformers/models/llama/convert_llama_weights_to_hf.py", line 230, in write_model
model.save_pretrained(model_path)
File "/users/xxxxx/anaconda3/envs/llama/lib/python3.9/site-packages/transformers/modeling_utils.py", line 1755, in save_pretrained
save_function(shard, os.path.join(save_directory, shard_file))
File "/users/xxxxx/anaconda3/envs/llama/lib/python3.9/site-packages/torch/serialization.py", line 442, in save
return
File "/users/xxxxx/anaconda3/envs/llama/lib/python3.9/site-packages/torch/serialization.py", line 291, in __exit__
self.file_like.write_end_of_file()
RuntimeError: [enforce fail at inline_container.cc:337] . unexpected pos 8497872128 vs 8497872024
```
### Expected behavior
I had no issue converting the 7B and 13B models with the same process.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22962/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22962/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/22961
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22961/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22961/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22961/events
|
https://github.com/huggingface/transformers/issues/22961
| 1,681,240,326 |
I_kwDOCUB6oc5kNbEG
| 22,961 |
Contrastive Search does not work at all for Llama 7B
|
{
"login": "Oxi84",
"id": 25420033,
"node_id": "MDQ6VXNlcjI1NDIwMDMz",
"avatar_url": "https://avatars.githubusercontent.com/u/25420033?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Oxi84",
"html_url": "https://github.com/Oxi84",
"followers_url": "https://api.github.com/users/Oxi84/followers",
"following_url": "https://api.github.com/users/Oxi84/following{/other_user}",
"gists_url": "https://api.github.com/users/Oxi84/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Oxi84/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Oxi84/subscriptions",
"organizations_url": "https://api.github.com/users/Oxi84/orgs",
"repos_url": "https://api.github.com/users/Oxi84/repos",
"events_url": "https://api.github.com/users/Oxi84/events{/privacy}",
"received_events_url": "https://api.github.com/users/Oxi84/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Actually it does work very well for text generation. It just does not generate anything with this prompt for some reason.",
"Thanks for investigating! ",
"As always thanks for hard work on implementing such cool thing as this one. I will share how much it helped, seem like it increases accuracy, but I will need to check much more examples to tell."
] | 1,682 | 1,682 | 1,682 |
NONE
| null |
### System Info
latest transformers and everything, RTX 3090
### Who can help?
@ArthurZucker and @younesbelkada
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
@yxuansu
I tried it with Llama 7B and the results are very bad; it does not generate anything. I also tried it with T5 and the results are very bad, but it is probably not intended to work with T5.
```python
import transformers
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, LlamaTokenizer

tokenizer = AutoTokenizer.from_pretrained("huggyllama/llama-7b")
model = AutoModelForCausalLM.from_pretrained(
    "huggyllama/llama-7b",
    load_in_8bit=False,
    torch_dtype=torch.float16,
    device_map="auto",
)

input_ids = tokenizer.encode('I enjoy walking with my cute dog', return_tensors='pt').to("cuda")

beam_output = model.generate(
    input_ids,
    penalty_alpha=0.6,
    top_k=4,
    max_length=100,
)

print(tokenizer.decode(beam_output[0], skip_special_tokens=True))
```
This generates just a dot:
`I enjoy walking with my cute dog.`
### Expected behavior
It should generate text actually
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22961/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22961/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/22960
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22960/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22960/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22960/events
|
https://github.com/huggingface/transformers/pull/22960
| 1,681,214,838 |
PR_kwDOCUB6oc5PAA9N
| 22,960 |
Fix TF example in quicktour
|
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Done!"
] | 1,682 | 1,682 | 1,682 |
MEMBER
| null |
The quicktour example for `prepare_tf_dataset` was passing the `DatasetDict` of all the dataset splits, instead of a single dataset, which threw an error. This PR fixes it!
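As an illustration of the corrected pattern, here is a minimal, hedged sketch (the checkpoint and dataset names are placeholders, not necessarily the ones used in the quicktour):
```python
from datasets import load_dataset
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = TFAutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased")

dataset = load_dataset("rotten_tomatoes")  # a DatasetDict with train/validation/test splits
dataset = dataset.map(lambda batch: tokenizer(batch["text"], truncation=True), batched=True)

# Pass a single split to prepare_tf_dataset, not the whole DatasetDict.
tf_train_set = model.prepare_tf_dataset(
    dataset["train"], shuffle=True, batch_size=16, tokenizer=tokenizer
)

model.compile(optimizer="adam")  # HF TF models can compute their own loss
model.fit(tf_train_set, epochs=1)
```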
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22960/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22960/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22960",
"html_url": "https://github.com/huggingface/transformers/pull/22960",
"diff_url": "https://github.com/huggingface/transformers/pull/22960.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22960.patch",
"merged_at": 1682353514000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22959
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22959/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22959/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22959/events
|
https://github.com/huggingface/transformers/pull/22959
| 1,681,134,041 |
PR_kwDOCUB6oc5O_vZa
| 22,959 |
[Llama Tokenizer] Fast llama template
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"The test `tests/models/llama/test_tokenization_llama.py::LlamaIntegrationTest::test_conversion` is failing since April 27th, which is likely due to this PR. See issue page #23400"
] | 1,682 | 1,684 | 1,682 |
COLLABORATOR
| null |
# What does this PR do?
Addresses #22794 and #22877
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22959/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22959/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22959",
"html_url": "https://github.com/huggingface/transformers/pull/22959",
"diff_url": "https://github.com/huggingface/transformers/pull/22959.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22959.patch",
"merged_at": 1682529201000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22958
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22958/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22958/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22958/events
|
https://github.com/huggingface/transformers/pull/22958
| 1,681,130,014 |
PR_kwDOCUB6oc5O_uhK
| 22,958 |
Prepare tests for hfh 0.14
|
{
"login": "Wauplin",
"id": 11801849,
"node_id": "MDQ6VXNlcjExODAxODQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/11801849?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Wauplin",
"html_url": "https://github.com/Wauplin",
"followers_url": "https://api.github.com/users/Wauplin/followers",
"following_url": "https://api.github.com/users/Wauplin/following{/other_user}",
"gists_url": "https://api.github.com/users/Wauplin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Wauplin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Wauplin/subscriptions",
"organizations_url": "https://api.github.com/users/Wauplin/orgs",
"repos_url": "https://api.github.com/users/Wauplin/repos",
"events_url": "https://api.github.com/users/Wauplin/events{/privacy}",
"received_events_url": "https://api.github.com/users/Wauplin/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,682 | 1,682 | 1,682 |
CONTRIBUTOR
| null |
Related to the coming release of `huggingface_hub==0.14.0`. It will break some internal tests. The PR fixes these tests. The plan is to merge this PR right after the new release is made. This will not impact `transformers`'s end users. However, PR contributors will have to rebase their branch once this one is merged.
See related [discussion](https://huggingface.slack.com/archives/C02V5EA0A95/p1682337463368609?thread_ts=1681994202.635609&cid=C02V5EA0A95) (private slack).
cc @sgugger @ydshieh
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22958/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22958/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22958",
"html_url": "https://github.com/huggingface/transformers/pull/22958",
"diff_url": "https://github.com/huggingface/transformers/pull/22958.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22958.patch",
"merged_at": 1682343111000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22957
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22957/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22957/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22957/events
|
https://github.com/huggingface/transformers/pull/22957
| 1,681,067,054 |
PR_kwDOCUB6oc5O_ggM
| 22,957 |
[CLAP] Doc nits
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,682 | 1,682 | 1,682 |
COLLABORATOR
| null |
# What does this PR do?
The documentation had a few problems (e.g. "Constrastive Laungaue" -> "Contrastive Language").
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22957/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22957/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22957",
"html_url": "https://github.com/huggingface/transformers/pull/22957",
"diff_url": "https://github.com/huggingface/transformers/pull/22957.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22957.patch",
"merged_at": 1682337630000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22956
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22956/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22956/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22956/events
|
https://github.com/huggingface/transformers/pull/22956
| 1,681,047,841 |
PR_kwDOCUB6oc5O_cJz
| 22,956 |
🌐 [i18n-KO] Translated `fast_tokenizers.mdx` to Korean
|
{
"login": "kihoon71",
"id": 75935546,
"node_id": "MDQ6VXNlcjc1OTM1NTQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/75935546?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kihoon71",
"html_url": "https://github.com/kihoon71",
"followers_url": "https://api.github.com/users/kihoon71/followers",
"following_url": "https://api.github.com/users/kihoon71/following{/other_user}",
"gists_url": "https://api.github.com/users/kihoon71/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kihoon71/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kihoon71/subscriptions",
"organizations_url": "https://api.github.com/users/kihoon71/orgs",
"repos_url": "https://api.github.com/users/kihoon71/repos",
"events_url": "https://api.github.com/users/kihoon71/events{/privacy}",
"received_events_url": "https://api.github.com/users/kihoon71/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"@sgugger, @ArthurZucker, @eunseojo May you please review this PR?"
] | 1,682 | 1,685 | 1,685 |
CONTRIBUTOR
| null |
<!-- Please title your PR as "🌐 [i18n-KO] Translated `<your_file>.mdx` to Korean" -->
# What does this PR do?
Translated the `fast_tokenizers.mdx` file of the documentation to Korean.
Thank you in advance for your review.
Part of https://github.com/huggingface/transformers/issues/20179
<!-- A record remains in the main issue! If you practiced on the PseudoLab repo, please remove this. Thank you! :smile: -->
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
<!-- This is a checklist before submitting; it might be better to wrap the PseudoLab-only checklist in <details>. -->
## Who can review?
<!-- Please expose the comment below, which requests a review from Hugging Face staff, only after the review with the PseudoLab team members is finished! -->
<!-- @sgugger, @ArthurZucker, @eunseojo May you please review this PR? -->
Team PseudoLab, may you please review this PR?
@0525hhgus, @wonhyeongseo , @sim-so, @gabrielwithappy, @HanNayeoniee, @wonhyeongseo, @jungnerd
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22956/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22956/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22956",
"html_url": "https://github.com/huggingface/transformers/pull/22956",
"diff_url": "https://github.com/huggingface/transformers/pull/22956.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22956.patch",
"merged_at": 1685453260000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22955
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22955/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22955/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22955/events
|
https://github.com/huggingface/transformers/pull/22955
| 1,681,020,562 |
PR_kwDOCUB6oc5O_WTd
| 22,955 |
Generate: Add exception path for Donut
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,682 | 1,682 | 1,682 |
MEMBER
| null |
# What does this PR do?
The multimodal generalization added in #22748 introduced a regression for Donut -- Donut never expects a BOS token, having a task-specific token in its place.
This PR adds an exception code path to handle it. All related slow tests are now passing.
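For context, a minimal, hedged sketch of the Donut convention being accommodated here (the checkpoint name and task prompt follow the public DocVQA fine-tune and are illustrative, not taken from this PR):
```python
import torch
from transformers import DonutProcessor, VisionEncoderDecoderModel

processor = DonutProcessor.from_pretrained("naver-clova-ix/donut-base-finetuned-docvqa")
model = VisionEncoderDecoderModel.from_pretrained("naver-clova-ix/donut-base-finetuned-docvqa")

# Generation starts from a task-specific prompt instead of a BOS token.
task_prompt = "<s_docvqa><s_question>What is the total?</s_question><s_answer>"
decoder_input_ids = processor.tokenizer(
    task_prompt, add_special_tokens=False, return_tensors="pt"
).input_ids

pixel_values = torch.zeros(1, 3, 2560, 1920)  # stand-in for processor(image, return_tensors="pt").pixel_values
outputs = model.generate(pixel_values, decoder_input_ids=decoder_input_ids, max_new_tokens=32)
print(processor.batch_decode(outputs))
```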
cc @NielsRogge
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22955/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22955/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22955",
"html_url": "https://github.com/huggingface/transformers/pull/22955",
"diff_url": "https://github.com/huggingface/transformers/pull/22955.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22955.patch",
"merged_at": 1682337956000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22954
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22954/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22954/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22954/events
|
https://github.com/huggingface/transformers/pull/22954
| 1,680,964,664 |
PR_kwDOCUB6oc5O_J7X
| 22,954 |
Add gradient checkpointing to Whisper Flax
|
{
"login": "versae",
"id": 173537,
"node_id": "MDQ6VXNlcjE3MzUzNw==",
"avatar_url": "https://avatars.githubusercontent.com/u/173537?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/versae",
"html_url": "https://github.com/versae",
"followers_url": "https://api.github.com/users/versae/followers",
"following_url": "https://api.github.com/users/versae/following{/other_user}",
"gists_url": "https://api.github.com/users/versae/gists{/gist_id}",
"starred_url": "https://api.github.com/users/versae/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/versae/subscriptions",
"organizations_url": "https://api.github.com/users/versae/orgs",
"repos_url": "https://api.github.com/users/versae/repos",
"events_url": "https://api.github.com/users/versae/events{/privacy}",
"received_events_url": "https://api.github.com/users/versae/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks for the review, @sanchit-gandhi! Should be all good now π.",
"Amazing @versae! Requesting final review before we can get this merged π€",
"Thank you! I learnt a lot π€ "
] | 1,682 | 1,682 | 1,682 |
CONTRIBUTOR
| null |
It uses `flax.linen.remat` and follows up on PRs #13657 and #17994.
# What does this PR do?
Adds gradient_checkpointing to Flax Whisper models.
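To make the mechanism concrete, here is a minimal, hedged sketch of the `flax.linen.remat` pattern (module and field names are illustrative, not the actual Whisper classes touched by this PR):
```python
import jax
import jax.numpy as jnp
import flax.linen as nn


class EncoderLayer(nn.Module):
    hidden_size: int = 256

    @nn.compact
    def __call__(self, hidden_states):
        return nn.gelu(nn.Dense(self.hidden_size)(hidden_states))


class Encoder(nn.Module):
    num_layers: int = 4
    gradient_checkpointing: bool = False

    @nn.compact
    def __call__(self, hidden_states):
        # With checkpointing enabled, wrap the layer class in nn.remat so its
        # activations are recomputed in the backward pass instead of stored.
        layer_cls = nn.remat(EncoderLayer) if self.gradient_checkpointing else EncoderLayer
        for _ in range(self.num_layers):
            hidden_states = layer_cls()(hidden_states)
        return hidden_states


x = jnp.ones((2, 16, 256))
params = Encoder(gradient_checkpointing=True).init(jax.random.PRNGKey(0), x)
```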
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sanchit-gandhi @peregilk
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22954/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22954/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22954",
"html_url": "https://github.com/huggingface/transformers/pull/22954",
"diff_url": "https://github.com/huggingface/transformers/pull/22954.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22954.patch",
"merged_at": 1682525957000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22953
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22953/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22953/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22953/events
|
https://github.com/huggingface/transformers/pull/22953
| 1,680,752,395 |
PR_kwDOCUB6oc5O-bzf
| 22,953 |
Decorate `test_codegen_sample_max_time` as flaky
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"I agree! Probably we can use something like `@cached_property`: I see this but never use it myself so far.",
"OK. But can I keep `cache_proeprty` as in the current version (it at least avoids loading the checkpoint despite it is downloaded).\r\nI see there are few modeling test files doing this.",
"If you really want to!"
] | 1,682 | 1,682 | 1,682 |
COLLABORATOR
| null |
# What does this PR do?
Decorate `test_codegen_sample_max_time` as flaky: it fails 0-5 times per month.
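For illustration, a minimal, hedged sketch of how such a test can be marked flaky with the helper from `transformers.testing_utils` (the class and test body below are placeholders, not the real test):
```python
import unittest

from transformers.testing_utils import is_flaky


class CodeGenSampleTest(unittest.TestCase):
    @is_flaky()  # re-run the test a few times before reporting a failure
    def test_codegen_sample_max_time(self):
        # placeholder for the timing-sensitive generation check
        self.assertTrue(True)
```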
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22953/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22953/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22953",
"html_url": "https://github.com/huggingface/transformers/pull/22953",
"diff_url": "https://github.com/huggingface/transformers/pull/22953.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22953.patch",
"merged_at": 1682342851000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22952
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22952/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22952/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22952/events
|
https://github.com/huggingface/transformers/pull/22952
| 1,680,738,027 |
PR_kwDOCUB6oc5O-Yzt
| 22,952 |
DFFT
|
{
"login": "soma2000-lang",
"id": 56045049,
"node_id": "MDQ6VXNlcjU2MDQ1MDQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/56045049?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/soma2000-lang",
"html_url": "https://github.com/soma2000-lang",
"followers_url": "https://api.github.com/users/soma2000-lang/followers",
"following_url": "https://api.github.com/users/soma2000-lang/following{/other_user}",
"gists_url": "https://api.github.com/users/soma2000-lang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/soma2000-lang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/soma2000-lang/subscriptions",
"organizations_url": "https://api.github.com/users/soma2000-lang/orgs",
"repos_url": "https://api.github.com/users/soma2000-lang/repos",
"events_url": "https://api.github.com/users/soma2000-lang/events{/privacy}",
"received_events_url": "https://api.github.com/users/soma2000-lang/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22952). All of your documentation changes will be reflected on that endpoint.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Working",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Working\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,682 | 1,690 | 1,690 |
CONTRIBUTOR
| null |
#18004
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22952/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22952/timeline
| null | true |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22952",
"html_url": "https://github.com/huggingface/transformers/pull/22952",
"diff_url": "https://github.com/huggingface/transformers/pull/22952.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22952.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/22951
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22951/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22951/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22951/events
|
https://github.com/huggingface/transformers/issues/22951
| 1,680,670,062 |
I_kwDOCUB6oc5kLP1u
| 22,951 |
Fine-tune T5 model for Causal Language Modeling (CLM)
|
{
"login": "nanbeitk",
"id": 73277438,
"node_id": "MDQ6VXNlcjczMjc3NDM4",
"avatar_url": "https://avatars.githubusercontent.com/u/73277438?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nanbeitk",
"html_url": "https://github.com/nanbeitk",
"followers_url": "https://api.github.com/users/nanbeitk/followers",
"following_url": "https://api.github.com/users/nanbeitk/following{/other_user}",
"gists_url": "https://api.github.com/users/nanbeitk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nanbeitk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nanbeitk/subscriptions",
"organizations_url": "https://api.github.com/users/nanbeitk/orgs",
"repos_url": "https://api.github.com/users/nanbeitk/repos",
"events_url": "https://api.github.com/users/nanbeitk/events{/privacy}",
"received_events_url": "https://api.github.com/users/nanbeitk/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi, @nanbeitk thanks for raising an issue! \r\n\r\nThis is a question best placed in our [forums](https://discuss.huggingface.co/), as we try to reserve the github issues for feature requests and bug reports.\r\n",
"> Hi, @nanbeitk thanks for raising an issue!\r\n> \r\n> This is a question best placed in our [forums](https://discuss.huggingface.co/), as we try to reserve the github issues for feature requests and bug reports.\r\n\r\nThanks for your remind and i will post it to forums soon.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,682 | 1,685 | 1,685 |
NONE
| null |
Dear all,
I am new to NLP and have some strange questions; I will try to explain them clearly.
My goal is to use a specific corpus to fine-tune the t5-base model with a causal language modeling objective. I found this [document](https://huggingface.co/docs/transformers/main/en/tasks/language_modeling#causal-language-modeling) and it uses `AutoModelForCausalLM`, but this class just does not cover the T5 series of models.
So my question is:
1. What should I do to fine-tune a T5 model with the CLM objective? In my understanding, CLM is a process of predicting `token_2` from `token_1`, `token_3` from `token_1, token_2`, and so on until the end of the input sequence, so I am confused about how to implement this process myself.
2. I tried to split one of my training examples into something like this (ti == token_i, 1 == eos_token):
| input_ids | labels |
| --- | --- |
| `[t1, 1, 1, 1, 1, 1, ...]` | `[t1, t2, 1, 1, 1, 1, ...]` |
| `[t1, t2, 1, 1, 1, 1, ...]` | `[t1, t2, t3, 1, 1, 1, ...]` |
| `[t1, t2, t3, 1, 1, 1, ...]` | `[t1, t2, t3, t4, 1, 1, ...]` |
| `[t1, t2, t3, t4, 1, 1, ...]` | `[t1, t2, t3, t4, t5, 1, ...]` |
The first problem is obvious: the expanded dataset is too large and requires more time to fine-tune. The second problem is that this seems strange, and I don't know if it fulfills the CLM objective's requirements. This is the only idea I could come up with to solve this problem; does it work?
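To make the layout above concrete, here is a minimal, hedged sketch of the expansion described in the table (the padding scheme, `max_len`, and the function name are assumptions made purely for illustration, not a recommendation):
```python
def expand_for_clm(token_ids, eos_id=1, max_len=8):
    """Build (input_ids, labels) pairs where each label adds one more token."""
    pairs = []
    for i in range(1, len(token_ids)):
        inputs = token_ids[:i] + [eos_id] * (max_len - i)
        labels = token_ids[: i + 1] + [eos_id] * (max_len - i - 1)
        pairs.append((inputs, labels))
    return pairs


print(expand_for_clm([11, 12, 13, 14, 15]))  # t1..t5 stand-ins
```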
Thanks!!
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22951/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22951/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/22950
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22950/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22950/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22950/events
|
https://github.com/huggingface/transformers/pull/22950
| 1,680,093,240 |
PR_kwDOCUB6oc5O8Sfo
| 22,950 |
GPTNeoX Flax support
|
{
"login": "OhadRubin",
"id": 4252994,
"node_id": "MDQ6VXNlcjQyNTI5OTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/4252994?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/OhadRubin",
"html_url": "https://github.com/OhadRubin",
"followers_url": "https://api.github.com/users/OhadRubin/followers",
"following_url": "https://api.github.com/users/OhadRubin/following{/other_user}",
"gists_url": "https://api.github.com/users/OhadRubin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/OhadRubin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/OhadRubin/subscriptions",
"organizations_url": "https://api.github.com/users/OhadRubin/orgs",
"repos_url": "https://api.github.com/users/OhadRubin/repos",
"events_url": "https://api.github.com/users/OhadRubin/events{/privacy}",
"received_events_url": "https://api.github.com/users/OhadRubin/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22950). All of your documentation changes will be reflected on that endpoint.",
"Hey @OhadRubin - sorry for the late reply here! How are you getting on with this PR? I see that a lot of the modelling code has already been implemented - happy to do a first pass of this code if you want a preliminary review? We can also look to adding a test file and also make sure all the imports are properly defined (see https://huggingface.co/docs/transformers/add_new_model#stepbystep-recipe-to-add-a-model-to-transformers)",
"Offer for a review still stands if you'd like me to take a look @OhadRubin!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Leaving this one open to the community to complete! Feel free to take up the PR if you come across this and are interested in a Flax model addition. @OhadRubin has made a nice start on porting the model, you can use the Flax GPT Neo code as reference for the fast attention mechanism we use in Transformers Flax: https://github.com/huggingface/transformers/blob/7d150d68ff6eaecc75b446aa06160b6bc8466e38/src/transformers/models/gpt_neo/modeling_flax_gpt_neo.py#L108",
"So I suggested to change `__call__` method of FlaxGPTNeoXAttention to below\r\n\r\n```python\r\n\r\n def __call__(\r\n self,\r\n hidden_states,\r\n attention_mask,\r\n position_ids,\r\n deterministic: bool = True,\r\n init_cache: bool = False,\r\n output_attentions: bool = False,\r\n ): \r\n # Compute QKV\r\n # Attention heads [batch, seq_len, hidden_size]\r\n # --> [batch, seq_len, (num_heads * 3 * head_size)]\r\n qkv = self.query_key_value(hidden_states)\r\n batch, seq_len, _ = qkv.shape\r\n # [batch, seq_len, (num_heads * 3 * head_size)]\r\n # --> [batch, seq_len, num_heads, 3, head_size]\r\n qkv = qkv.reshape([batch, seq_len,self.num_attention_heads,3,self.head_size])\r\n # [batch, seq_len, num_heads, 3, head_size]\r\n # --> [3,batch, seq_len, num_heads, head_size]\r\n qkv = jnp.moveaxis(qkv, source=-2, destination=0)\r\n # [3, batch, seq_len, num_heads, head_size]\r\n # --> [3,batch, num_heads, seq_len, head_size]\r\n qkv = jnp.swapaxes(qkv, 3, 2)\r\n # [3,batch, num_heads, seq_len, head_size]\r\n # --> 3 [batch, num_heads, seq_len, head_size]\r\n query, key, value = qkv\r\n\r\n query_rot = query[..., : self.rotary_ndims]\r\n query_pass = query[..., self.rotary_ndims :]\r\n key_rot = key[..., : self.rotary_ndims]\r\n key_pass = key[..., self.rotary_ndims :]\r\n\r\n cos, sin = self.rotary_emb(value, seq_len=seq_len)\r\n query, key = apply_rotary_pos_embNP(query_rot, key_rot, cos, sin, position_ids)\r\n query = jnp.concatenate((query, query_pass), axis=-1)\r\n key = jnp.concatenate((key, key_pass), axis=-1)\r\n\r\n # revert swap\r\n query, key, value = jnp.swapaxes(query, 1, 2), jnp.swapaxes(key, 1, 2), jnp.swapaxes(value, 1, 2)\r\n query_length, key_length = query.shape[1], key.shape[1]\r\n\r\n if self.has_variable(\"cache\", \"cached_key\"):\r\n mask_shift = self.variables[\"cache\"][\"cache_index\"]\r\n max_decoder_length = self.variables[\"cache\"][\"cached_key\"].shape[1]\r\n causal_mask = lax.dynamic_slice(\r\n self.causal_mask, (0, 0, mask_shift, 0), (1, 1, query_length, max_decoder_length)\r\n )\r\n else:\r\n causal_mask = self.causal_mask[:, :, :query_length, :key_length]\r\n\r\n batch_size = hidden_states.shape[0]\r\n causal_mask = jnp.broadcast_to(causal_mask, (batch_size,) + causal_mask.shape[1:])\r\n\r\n attention_mask = jnp.broadcast_to(jnp.expand_dims(attention_mask, axis=(-3, -2)), causal_mask.shape)\r\n attention_mask = combine_masks(attention_mask, causal_mask)\r\n\r\n # During fast autoregressive decoding, we feed one position at a time,\r\n # and cache the keys and values step by step.\r\n if self.has_variable(\"cache\", \"cached_key\") or init_cache:\r\n key, value, attention_mask = self._concatenate_to_cache(key, value, query, attention_mask)\r\n \r\n # transform boolean mask into float mask\r\n attention_bias = lax.select(\r\n attention_mask > 0,\r\n jnp.full(attention_mask.shape, 0.0).astype(self.dtype),\r\n jnp.full(attention_mask.shape, jnp.finfo(self.dtype).min).astype(self.dtype),\r\n )\r\n attn_weights = dot_product_attention_weights(\r\n query, #jnp.moveaxis(query, source=-3, destination=-2),\r\n key, #jnp.moveaxis(key, source=-3, destination=-2),\r\n bias=attention_bias,\r\n dropout_rng=None,\r\n # dropout_rate=self.config.attn_pdrop,\r\n deterministic=deterministic,\r\n dtype=jnp.promote_types(self.dtype, jnp.float32),\r\n precision=None,\r\n )\r\n attn_output = jnp.einsum(\"bhqk,bkhd->bqhd\", attn_weights, value)\r\n attn_output = self._merge_heads(attn_output)\r\n attn_output = self.dense(attn_output)\r\n\r\n outputs = (attn_output, attn_weights) if 
output_attentions else (attn_output,)\r\n return outputs\r\n\r\n```",
"This code doesn't differ much from FlaxGPTNeoSelfAttention. Which part is the fast attention mechanism?\r\n@sanchit-gandhi ",
"The logic for constructing a static k/v cache and computing the attention weights efficiently is quite nicely summarised in the Flax GPT Neo attention layer: https://github.com/huggingface/transformers/blob/7d150d68ff6eaecc75b446aa06160b6bc8466e38/src/transformers/models/gpt_neo/modeling_flax_gpt_neo.py#L108\r\n\r\nWe should strive to match this implementation as closely as possible (rather than optimising it again ourselves). It's largely inspired by the Flax attention implementation from T5x: https://github.com/google-research/t5x/blob/eb08ffbdec78e231aab1c747720ffb076f83bf18/t5x/examples/scalable_t5/layers.py#L196\r\n\r\nThis logic can be quite different from PyTorch attention layers, but is much better suited to the static nature of Flax and leverages the Flax dot product attention call. It's great if the current code is by-and-large the same as the reference Flax GPT Neo code, that's a big green tick as far as I'm concerned!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,682 | 1,699 | 1,699 |
NONE
| null |
@sanchit-gandhi
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22950/reactions",
"total_count": 3,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
}
|
https://api.github.com/repos/huggingface/transformers/issues/22950/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22950",
"html_url": "https://github.com/huggingface/transformers/pull/22950",
"diff_url": "https://github.com/huggingface/transformers/pull/22950.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22950.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/22949
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22949/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22949/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22949/events
|
https://github.com/huggingface/transformers/pull/22949
| 1,680,039,545 |
PR_kwDOCUB6oc5O8IbB
| 22,949 |
Generate: assisted generation with sample (take 2)
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,682 | 1,682 | 1,682 |
MEMBER
| null |
# What does this PR do?
I was writing the blog post about assisted generation and realized that there is a much better way to solve the `do_sample=True` case. Apologies for the repeated review request, but I believe this is a significant upgrade.
In a nutshell, the existing `temperature` argument provides a natural control mechanism for assisted generation with `do_sample=True`, by controlling how flat the distribution is at the sampling step. Lower temperature = high-probability tokens become more likely to be sampled = more predictable = more likely to match the candidate tokens from the assistant model = assisted generation works faster.
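For illustration, a minimal, hedged sketch of what this looks like from the user side (the checkpoints mirror the benchmark pair below; the prompt and generation settings are placeholders):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/opt-6.7b")
model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-6.7b", torch_dtype=torch.float16, device_map="auto"
)
assistant = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-125m", torch_dtype=torch.float16, device_map="auto"
)

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    assistant_model=assistant,  # candidate tokens come from the small model
    do_sample=True,
    temperature=0.7,            # lower temperature -> more candidate matches -> faster
    max_new_tokens=64,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```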
Compared to the [other PR](https://github.com/huggingface/transformers/pull/22862), which was closed, this approach has the following pros and cons:
- Pros:
- No new argument;
- Behaves exactly like `.sample`, which users already understand well. No new heuristics;
- As fast as the other method for similar randomness levels (see numbers below).
- Cons:
- Internally, more than one sampling step will occur per output token. If we set a seed, the output will be different than `.sample`'s for the same seed. Not a deal breaker per se, but it means subtle bugs may be tough to catch.
## Performance numbers
I've run the [benchmark](https://github.com/gante/huggingface-demos/tree/main/experiments/faster_generation) I've been running for assisted generation, but now for several `temperature` values. The values below are for `facebook/opt-6.7b` as the main model, `facebook/opt-125m` as the assistant model, running `.generate` starting from inputs taken from the C4 test set (i.e. quite random, the dataset I tested where assisted generation struggles the most), on a RTX3090. Note that most LLMs nowadays use temperature between 0.7 and 0.9.
TL;DR -- it's slower than greedy assisted generation, as it is expected, but it will still secure solid speedups e.g. with INT8.
<img width="418" alt="Screenshot 2023-04-23 at 14 58 10" src="https://user-images.githubusercontent.com/12240844/233844046-953999b7-3f9b-4f97-bab6-b4e4f50943ef.png">
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22949/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22949/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22949",
"html_url": "https://github.com/huggingface/transformers/pull/22949",
"diff_url": "https://github.com/huggingface/transformers/pull/22949.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22949.patch",
"merged_at": 1682362496000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22948
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22948/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22948/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22948/events
|
https://github.com/huggingface/transformers/issues/22948
| 1,679,951,996 |
I_kwDOCUB6oc5kIgh8
| 22,948 |
MaskFormerSwin shows as unsupported on the index
|
{
"login": "joaocmd",
"id": 5345834,
"node_id": "MDQ6VXNlcjUzNDU4MzQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/5345834?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joaocmd",
"html_url": "https://github.com/joaocmd",
"followers_url": "https://api.github.com/users/joaocmd/followers",
"following_url": "https://api.github.com/users/joaocmd/following{/other_user}",
"gists_url": "https://api.github.com/users/joaocmd/gists{/gist_id}",
"starred_url": "https://api.github.com/users/joaocmd/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joaocmd/subscriptions",
"organizations_url": "https://api.github.com/users/joaocmd/orgs",
"repos_url": "https://api.github.com/users/joaocmd/repos",
"events_url": "https://api.github.com/users/joaocmd/events{/privacy}",
"received_events_url": "https://api.github.com/users/joaocmd/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"@joaocmd Huh, that's odd. Thanks for reporting. Following the shared link, on `main` I see that MaskFormer is shown as a supported model. Perhaps you caught it in a weird moment before a patch was applied? \r\n\r\n<img width=\"1025\" alt=\"image\" src=\"https://user-images.githubusercontent.com/22614925/233962005-0e4714bc-3b3f-4548-a964-c0fcfe774b2a.png\">\r\n\r\n",
"^sorry, I just realised that the link went to maskformer but it's `MaskFormerSwin` you're referring to. \r\n",
"MaskFormersSwin is listed as a \"private model\" [here](https://github.com/huggingface/transformers/blob/3d3204c025b6b5de013e07dd364208e28b4d9589/utils/check_repo.py#L50). \r\n\r\nI suspect this is because MaskFormerSwin was added in order to be used as a backbone. @NielsRogge - is this correct? ",
"Yes ideally it shouldn't be in that public list, it's just there to be used for MaskFormer for backwards compatibility purposes.\r\n\r\nUsers can just use our regular Swin in case they want to use it as backbone for MaskFormer.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Hi @amyeroberts and @NielsRogge, should anything be changed so that the model doesn't appear on that public list or should we close this issue?",
"@joaocmd If you want to open a PR to fix, I'd be very happy to review :) I don't think it's critical to address however.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,682 | 1,689 | 1,689 |
NONE
| null |
Hello, is there any reason why the MaskFormerSwin shows as unsupported on the `index` page?
https://github.com/huggingface/transformers/blob/main/docs/source/en/index.mdx?plain=1#L364
```
| MaskFormer | ❌ | ❌ | ✅ | ❌ | ❌ |
| MaskFormerSwin | ❌ | ❌ | ❌ | ❌ | ❌ |
| mBART | ✅ | ✅ | ✅ | ✅ | ✅ |
```
I think it is implemented in [this file](https://github.com/huggingface/transformers/blob/main/src/transformers/models/maskformer/modeling_maskformer_swin.py).
I also found this PR https://github.com/huggingface/transformers/pull/20344 which seemed like it added the model. The model is also missing from the "Supported models" subsection, but I didn't find its paper so is that part of the reason?
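As noted in the comments above, the regular Swin model can be used directly as the backbone for MaskFormer; a minimal sketch of that, where the checkpoint name is an illustrative assumption rather than something taken from this issue:
```python
from transformers import MaskFormerConfig, MaskFormerForInstanceSegmentation, SwinConfig

# Any public Swin config should work as the backbone configuration.
backbone_config = SwinConfig.from_pretrained("microsoft/swin-base-patch4-window12-384")
config = MaskFormerConfig(backbone_config=backbone_config)

# The decoder and segmentation head are randomly initialised with this constructor.
model = MaskFormerForInstanceSegmentation(config)
```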
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22948/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22948/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/22947
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22947/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22947/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22947/events
|
https://github.com/huggingface/transformers/pull/22947
| 1,679,895,709 |
PR_kwDOCUB6oc5O7tOn
| 22,947 |
[Fix Bugs] Fix keys in `_load_pretrained_model`
|
{
"login": "hanrui1sensetime",
"id": 83800577,
"node_id": "MDQ6VXNlcjgzODAwNTc3",
"avatar_url": "https://avatars.githubusercontent.com/u/83800577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hanrui1sensetime",
"html_url": "https://github.com/hanrui1sensetime",
"followers_url": "https://api.github.com/users/hanrui1sensetime/followers",
"following_url": "https://api.github.com/users/hanrui1sensetime/following{/other_user}",
"gists_url": "https://api.github.com/users/hanrui1sensetime/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hanrui1sensetime/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hanrui1sensetime/subscriptions",
"organizations_url": "https://api.github.com/users/hanrui1sensetime/orgs",
"repos_url": "https://api.github.com/users/hanrui1sensetime/repos",
"events_url": "https://api.github.com/users/hanrui1sensetime/events{/privacy}",
"received_events_url": "https://api.github.com/users/hanrui1sensetime/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"cc @sgugger "
] | 1,682 | 1,682 | 1,682 |
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes a bug in `_load_pretrained_model`.
`f'{prefix}.key'` is wrong because the variable `key` is not interpolated in this branch; the literal string "key" is used instead.
This bug causes loading of some models, such as BLOOM-176B, to fail.
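For illustration, a minimal sketch of this class of f-string mistake (names simplified; not the exact code from `modeling_utils.py`):
```python
prefix = "transformer"
checkpoint_keys = ["wte.weight", "ln_f.weight"]

# Buggy: the braces around `key` are missing, so every entry becomes the
# literal string "transformer.key" instead of using the loop variable.
buggy = [f"{prefix}.key" for key in checkpoint_keys]

# Fixed: interpolate the variable itself.
fixed = [f"{prefix}.{key}" for key in checkpoint_keys]

print(buggy)  # ['transformer.key', 'transformer.key']
print(fixed)  # ['transformer.wte.weight', 'transformer.ln_f.weight']
```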
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22947/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22947/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22947",
"html_url": "https://github.com/huggingface/transformers/pull/22947",
"diff_url": "https://github.com/huggingface/transformers/pull/22947.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22947.patch",
"merged_at": 1682342932000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22946
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22946/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22946/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22946/events
|
https://github.com/huggingface/transformers/issues/22946
| 1,679,890,150 |
I_kwDOCUB6oc5kIRbm
| 22,946 |
One Question about BlipForConditionalGeneration
|
{
"login": "Yingshu97",
"id": 62641802,
"node_id": "MDQ6VXNlcjYyNjQxODAy",
"avatar_url": "https://avatars.githubusercontent.com/u/62641802?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Yingshu97",
"html_url": "https://github.com/Yingshu97",
"followers_url": "https://api.github.com/users/Yingshu97/followers",
"following_url": "https://api.github.com/users/Yingshu97/following{/other_user}",
"gists_url": "https://api.github.com/users/Yingshu97/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Yingshu97/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Yingshu97/subscriptions",
"organizations_url": "https://api.github.com/users/Yingshu97/orgs",
"repos_url": "https://api.github.com/users/Yingshu97/repos",
"events_url": "https://api.github.com/users/Yingshu97/events{/privacy}",
"received_events_url": "https://api.github.com/users/Yingshu97/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @Yingshu97 \r\nThanks for the issue! \r\nthere are few things to keep in mind here:\r\n\r\n1- If you want to use `BlipForConditionalGeneration` as a standalone model to retrieve the hidden states and loss value, you need to also pass `input_ids` values, as Blip uses cross attention between textual and visual input. In the provided snippet you did not pass any `input_ids`. If I correctly pass pixel values with a batch size of 2 together with random input ids that have a batch size of 2 it works as expected. The only way to generate captions without having to pass `input_ids` is to call `.generate` method that will initialize the `input_ids` with `decoder_input_ids` and `eos_token_id`.\r\n2- Make sure to use at least the latest release of `transformers`. `pip install --upgrade transformers`\r\n\r\nThe snippet I used is:\r\n```python\r\nimport torch\r\nfrom transformers import BlipForConditionalGeneration\r\n\r\nipt = torch.randn((2, 3, 384, 384))\r\ninput_ids = torch.LongTensor([[0, 1, 2, 3, 4, 5, 6, 7, 8, 9], [10, 11, 12, 13, 14, 15, 16, 17, 18, 19]])\r\nmodel = BlipForConditionalGeneration.from_pretrained(\"Salesforce/blip-image-captioning-base\")\r\n\r\nout = model(pixel_values=ipt, input_ids=input_ids)\r\n```",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,682 | 1,685 | 1,685 |
NONE
| null |
### System Info
I create a Blip model by:
`BlipModel = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")`
I want to get the hidden states of the model output, but I ran into this problem:
I just used `BlipModel(pixel_values=ipt)` for the inference part.
I create a dummy input with `ipt = torch.randn((1, 3, 384, 384))`.
When the input batch size is 1, everything works fine.
However, when I change the input's batch size to another number, like 2 (`ipt = torch.randn((2, 3, 384, 384))`), I get this error:
**ValueError: Expected input batch_size (2) to match target batch_size (1).**
### Who can help?
@younesbelkada, @ArthurZucker
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
1. create a model by `BlipModel = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")`
2. create a dummy input by `ipt = torch.randn((2,3,384,384))`
3. get the output of the model by `BlipModel(ipt)`
It will get the error.
### Expected behavior
I want to get the hidden states of the model output. Below is the correct output when I set the batch size to 1. When I set the batch size to any other number, I get the error above.
```
BlipForConditionalGenerationModelOutput(loss=tensor(nan, grad_fn=<AddBackward0>), decoder_logits=tensor([[[-2.4062, -2.4062, -2.4062, ..., -2.4061, -2.4062, -2.4062]]],
grad_fn=<ViewBackward0>), image_embeds=tensor([[[-0.7771, -0.0999, 0.0320, ..., -0.6212, 0.8770, -0.1978],
[-0.9081, -0.1407, 0.1390, ..., -0.4231, 0.5914, -0.1464],
[-1.0026, 0.0212, 0.4119, ..., -0.5520, 0.5102, -0.1100],
...,
[-1.2060, -0.0290, 0.0165, ..., -0.5280, 0.3483, -0.0130],
[-1.0668, 0.4398, 0.3717, ..., -0.7589, 0.0796, 0.1294],
[-1.0077, -0.2549, -0.1857, ..., -0.5054, 0.6910, -0.2062]]],
grad_fn=<NativeLayerNormBackward0>), last_hidden_state=tensor([[[-0.7771, -0.0999, 0.0320, ..., -0.6212, 0.8770, -0.1978],
[-0.9081, -0.1407, 0.1390, ..., -0.4231, 0.5914, -0.1464],
[-1.0026, 0.0212, 0.4119, ..., -0.5520, 0.5102, -0.1100],
...,
[-1.2060, -0.0290, 0.0165, ..., -0.5280, 0.3483, -0.0130],
[-1.0668, 0.4398, 0.3717, ..., -0.7589, 0.0796, 0.1294],
[-1.0077, -0.2549, -0.1857, ..., -0.5054, 0.6910, -0.2062]]],
grad_fn=<NativeLayerNormBackward0>), hidden_states=None, attentions=None)
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22946/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22946/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/22945
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22945/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22945/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22945/events
|
https://github.com/huggingface/transformers/pull/22945
| 1,679,834,378 |
PR_kwDOCUB6oc5O7hT2
| 22,945 |
π [i18n-KO] Translated `token_classification.mdx` to Korean
|
{
"login": "0525hhgus",
"id": 47289574,
"node_id": "MDQ6VXNlcjQ3Mjg5NTc0",
"avatar_url": "https://avatars.githubusercontent.com/u/47289574?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/0525hhgus",
"html_url": "https://github.com/0525hhgus",
"followers_url": "https://api.github.com/users/0525hhgus/followers",
"following_url": "https://api.github.com/users/0525hhgus/following{/other_user}",
"gists_url": "https://api.github.com/users/0525hhgus/gists{/gist_id}",
"starred_url": "https://api.github.com/users/0525hhgus/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/0525hhgus/subscriptions",
"organizations_url": "https://api.github.com/users/0525hhgus/orgs",
"repos_url": "https://api.github.com/users/0525hhgus/repos",
"events_url": "https://api.github.com/users/0525hhgus/events{/privacy}",
"received_events_url": "https://api.github.com/users/0525hhgus/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"> μ λ§ λ§μ λ΄μ©μ΄ λ€μ΄ μλ λ¬Έμμμ§λ§ λλΆμ κΈλ°© μ½μ μ μμμ΅λλ€! π `Named Entity Recognition` -> `κ°μ²΄λͺ
μΈμ`, `dataset`->`λ°μ΄ν°μ
` λ κ°μ§λ₯Ό ν¬ν¨νμ¬ λͺ κ°μ§ μμ μ¬νμ μ μ λ립λλ€. μ λΆν λλ¦¬κ² μ΅λλ€!\r\n\r\nμΈμ¬ν 리뷰 κ°μ¬ν©λλ€ π€\r\nκ°μ¬ μ½λ©νΈλ₯Ό μ λΆ λ¬κ³ μΆμλ°, μλ¦Όμ΄ λ무 λ§μ΄ κ° κ² κ°μμ νλλ§ λ¬κ² μ΅λλ€ π’\r\n\r\n- `entity`λ₯Ό `κ°μ²΄`λ‘ glossaryλ₯Ό ν¬ν¨νμ¬ μμ νμ΅λλ€. ν¨μ¬ μ΄ν΄κ° μ½κ³ μ΅μν΄μ μ’μ΅λλ€!\r\n- `λ°μ΄ν° μΈνΈ`, `μ½λ μ΄ν°`, `νκ° μ§ν` λ¨μ΄λ λ°μνμ΅λλ€! κΌΌκΌΌνκ² λ΄μ£Όμ
μ κ°μ¬ν©λλ€ :)\r\n- μ€ν, λ§μΆ€λ², μμ°μ€λ¬μ΄ λ¬Έμ₯λ λͺ¨λ λ°μνμ΅λλ€. λ₯λνλ‘ λ°κΎΈλκΉ μ μ½νλλ€ π",
"May you please review this PR? π \r\n@sgugger, @ArthurZucker, @eunseojo"
] | 1,682 | 1,682 | 1,682 |
CONTRIBUTOR
| null |
# What does this PR do?
Translated the `tasks/token_classification.mdx` file of the documentation to Korean.
Thank you in advance for your review! π
Part of https://github.com/huggingface/transformers/issues/20179
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
<!-- This is the pre-submission checklist; it might be even better to wrap PseudoLab's own checklist in <details>. -->
## Who can review?
<!-- Please reveal the comment below requesting a review from the Hugging Face staff only after the review with the PseudoLab team members is finished! -->
Team PseudoLab, may you please review this PR?
@0525hhgus, @KIHOON71, @sim-so, @gabrielwithappy, @HanNayeoniee, @wonhyeongseo, @jungnerd
May you please review this PR?
@sgugger, @ArthurZucker, @eunseojo
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22945/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22945/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22945",
"html_url": "https://github.com/huggingface/transformers/pull/22945",
"diff_url": "https://github.com/huggingface/transformers/pull/22945.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22945.patch",
"merged_at": 1682510175000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22944
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22944/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22944/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22944/events
|
https://github.com/huggingface/transformers/issues/22944
| 1,679,804,833 |
I_kwDOCUB6oc5kH8mh
| 22,944 |
Auto-download is a security hole.
|
{
"login": "freckletonj",
"id": 8399149,
"node_id": "MDQ6VXNlcjgzOTkxNDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/8399149?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/freckletonj",
"html_url": "https://github.com/freckletonj",
"followers_url": "https://api.github.com/users/freckletonj/followers",
"following_url": "https://api.github.com/users/freckletonj/following{/other_user}",
"gists_url": "https://api.github.com/users/freckletonj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/freckletonj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/freckletonj/subscriptions",
"organizations_url": "https://api.github.com/users/freckletonj/orgs",
"repos_url": "https://api.github.com/users/freckletonj/repos",
"events_url": "https://api.github.com/users/freckletonj/events{/privacy}",
"received_events_url": "https://api.github.com/users/freckletonj/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @freckletonj, thanks for raising this issue. \r\n\r\nWithout knowing which code you're running, it's hard to know what specifically triggered the dataset download (or how unrelated it is). Typically, a dataset would be downloaded if it's requested through the `load_dataset` functionality. However, I see that allenai/c4 dataset [needs to be downloaded through `git clone`](https://huggingface.co/datasets/allenai/c4#how-do-i-download-this). In general, if you've spotted malicious content within a dataset, I'd recommend flagging on the repo (there's already an [open discussion here](https://huggingface.co/datasets/allenai/c4/discussions/2))\r\n\r\nYou can run transformers in a firewalled or offline mode setting `TRANSFORMERS_OFFLINE=1` in your environment. For datasets, this is `HF_DATASETS_OFFLINE=1`. See: https://huggingface.co/docs/transformers/v4.28.1/en/installation#offline-mode. ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,682 | 1,685 | 1,685 |
NONE
| null |
### System Info
I just ran a project and it decided to download a completely unrelated dataset, which I didn't want or need. The extraneous download was https://huggingface.co/datasets/allenai/c4, which upon inspection contains 800+ trojan viruses. Are these false positives? I shouldn't have to care unless I'm interested in this specific dataset.
I think any network calls should be strictly opt-in, e.g. perhaps `HF_NETWORK_ALLOWED=True python whatever.py`
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Steps to reproduce:
1. Run any HF model for the first time. It will make network calls, and download datasets and weights.
### Expected behavior
0 network calls are made, unless opted in to.
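For what it's worth, a minimal sketch of the existing opt-out mechanism mentioned in the maintainers' reply above (the checkpoint name is just an example):
```python
import os

# Must be set before importing transformers/datasets
# (or exported in the shell before launching the script).
os.environ["TRANSFORMERS_OFFLINE"] = "1"
os.environ["HF_DATASETS_OFFLINE"] = "1"

from transformers import AutoModel

# Now only the local cache is used; a missing file raises instead of downloading.
model = AutoModel.from_pretrained("bert-base-uncased")
```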
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22944/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22944/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/22943
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22943/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22943/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22943/events
|
https://github.com/huggingface/transformers/pull/22943
| 1,679,797,781 |
PR_kwDOCUB6oc5O7aE0
| 22,943 |
π [i18n-KO] Translated `tasks/image_captioning.mdx` to Korean
|
{
"login": "sim-so",
"id": 96299403,
"node_id": "U_kgDOBb1piw",
"avatar_url": "https://avatars.githubusercontent.com/u/96299403?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sim-so",
"html_url": "https://github.com/sim-so",
"followers_url": "https://api.github.com/users/sim-so/followers",
"following_url": "https://api.github.com/users/sim-so/following{/other_user}",
"gists_url": "https://api.github.com/users/sim-so/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sim-so/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sim-so/subscriptions",
"organizations_url": "https://api.github.com/users/sim-so/orgs",
"repos_url": "https://api.github.com/users/sim-so/repos",
"events_url": "https://api.github.com/users/sim-so/events{/privacy}",
"received_events_url": "https://api.github.com/users/sim-so/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"I quickly done it! π
\r\nWould you review this PR?",
"LGTM! π€ ",
"Happy Wednesday!\r\nCould you review this PR? π\r\n@sgugger, @ArthurZucker, @eunseojo"
] | 1,682 | 1,682 | 1,682 |
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Translated the `tasks/image_captioning.mdx` file of the documentation to Korean.
Thank you in advance for your review!
- [x] Image captioning
- [x] Load the Pokémon BLIP captions dataset
- [x] Preprocess the dataset
- [x] Load a base model
- [x] Evaluate
- [x] Train!
- [x] Inference
Part of https://github.com/huggingface/transformers/issues/20179
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
This is a work in progress.
Could you review this PR when I finish this work?
@0525hhgus, @KIHOON71, @sim-so, @gabrielwithappy, @HanNayeoniee, @wonhyeongseo, @jungnerd
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22943/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22943/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22943",
"html_url": "https://github.com/huggingface/transformers/pull/22943",
"diff_url": "https://github.com/huggingface/transformers/pull/22943.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22943.patch",
"merged_at": 1682510099000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22942
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22942/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22942/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22942/events
|
https://github.com/huggingface/transformers/pull/22942
| 1,679,770,243 |
PR_kwDOCUB6oc5O7UU7
| 22,942 |
Raise error if `stride` is too high in `TokenClassificationPipeline`
|
{
"login": "boyleconnor",
"id": 6520892,
"node_id": "MDQ6VXNlcjY1MjA4OTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/6520892?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/boyleconnor",
"html_url": "https://github.com/boyleconnor",
"followers_url": "https://api.github.com/users/boyleconnor/followers",
"following_url": "https://api.github.com/users/boyleconnor/following{/other_user}",
"gists_url": "https://api.github.com/users/boyleconnor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/boyleconnor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/boyleconnor/subscriptions",
"organizations_url": "https://api.github.com/users/boyleconnor/orgs",
"repos_url": "https://api.github.com/users/boyleconnor/repos",
"events_url": "https://api.github.com/users/boyleconnor/events{/privacy}",
"received_events_url": "https://api.github.com/users/boyleconnor/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,682 | 1,682 | 1,682 |
CONTRIBUTOR
| null |
# What does this PR do?
Users were previously not given a warning if they initialized a `TokenClassificationPipeline` with too high a value for `stride` (`stride` is the value that determines how many tokens overlap between chunks if the user chooses to split text into chunks).
Unfortunately, it's also possible for a `stride` to be too high if the tokenizer happens to introduce special tokens (e.g. `bert-base-cased` has a maximum length of `512`, but each window gets `2` special tokens, so the highest valid `stride` is `509`) , but there's apparently no easy way to check this in advance (i.e. before the tokenizer is run as part of the pipeline). I think it might be worth fixing the error message ("`pyo3_runtime.PanicException: assertion failed: stride < max_len`") when a tokenizer is called with too high a value of `stride`, to clarify to users that added special tokens subtract from the effective window size.
I also thought it was worth slightly clarifying the function of the `stride` parameter. The way `stride` works in the context of Hugging Face tokenizers is almost the opposite of the way it works in many [other contexts](https://www.kaggle.com/code/ryanholbrook/the-sliding-window/tutorial).
Mostly fixes #22789.
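For context, a small usage sketch of where `stride` enters the picture; the checkpoint and the stride value are illustrative assumptions, and chunking like this needs a fast tokenizer plus an aggregation strategy:
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="dslim/bert-base-NER",   # assumed example checkpoint
    aggregation_strategy="simple",
    stride=128,  # tokens shared between consecutive chunks; must stay below the
                 # effective window (model_max_length minus added special tokens)
)
print(ner("Hugging Face is based in New York City and Paris."))
```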
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who should review?
@Narsil
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22942/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22942/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22942",
"html_url": "https://github.com/huggingface/transformers/pull/22942",
"diff_url": "https://github.com/huggingface/transformers/pull/22942.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22942.patch",
"merged_at": 1682342870000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22941
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22941/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22941/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22941/events
|
https://github.com/huggingface/transformers/issues/22941
| 1,679,649,778 |
I_kwDOCUB6oc5kHWvy
| 22,941 |
Typo in error message in LlamaAttention
|
{
"login": "othertea",
"id": 124535597,
"node_id": "U_kgDOB2xDLQ",
"avatar_url": "https://avatars.githubusercontent.com/u/124535597?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/othertea",
"html_url": "https://github.com/othertea",
"followers_url": "https://api.github.com/users/othertea/followers",
"following_url": "https://api.github.com/users/othertea/following{/other_user}",
"gists_url": "https://api.github.com/users/othertea/gists{/gist_id}",
"starred_url": "https://api.github.com/users/othertea/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/othertea/subscriptions",
"organizations_url": "https://api.github.com/users/othertea/orgs",
"repos_url": "https://api.github.com/users/othertea/repos",
"events_url": "https://api.github.com/users/othertea/events{/privacy}",
"received_events_url": "https://api.github.com/users/othertea/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"@othertea, good spot! Would you like to open a PR to fix this? \r\n\r\ncc @ArthurZucker ",
"Yes, I've made a PR!"
] | 1,682 | 1,682 | 1,682 |
CONTRIBUTOR
| null |
There's a typo in the `ValueError`'s message on line 219: https://github.com/huggingface/transformers/blob/d04ec99bec8a0b432fc03ed60cea9a1a20ebaf3c/src/transformers/models/llama/modeling_llama.py#L217-L221
It should be `(bsz, self.num_heads, q_len, kv_seq_len)` as it is in line 217.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22941/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22941/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/22940
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22940/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22940/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22940/events
|
https://github.com/huggingface/transformers/pull/22940
| 1,679,593,159 |
PR_kwDOCUB6oc5O6vyv
| 22,940 |
Add UDOP
|
{
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
}
|
[] |
open
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22940). All of your documentation changes will be reflected on that endpoint.",
"hi @NielsRogge thank you for pushing this PR. I haven't had the chance to try yet, but I'm curious if you have an example or have tried to perform a `torch.jit.trace` or `onnx` conversion on UDOP yet? I know with the previous PR that was where I got stuck.",
"@plamb-viso My impression was always that tracing Encoder-Decoder models (e.g. BART) works fine but exporting to ONNX is challenging using jit.trace. There's a research example for BART on how to do that: [Bart + Beam Search to ONNX](https://github.com/huggingface/transformers/tree/main/examples/research_projects/onnx/summarization)\n\nI think this part of the reason the ONNX export is now offloaded into optimum: https://github.com/huggingface/transformers/issues/14222#issuecomment-1432960827",
"Just want to make sure with the UdopProcessor that we need to manually add the task to each input string. For e.g. if I'm doing document classification, I need to add `document classification.` and `[0,0,0,0]` to my words and bboxes before they go through the processor\r\n\r\nFor e.g.:\r\n```python\r\n prompt_text = ['document', 'classification.']\r\n prompt_boxes = [[0,0,0,0],[0,0,0,0]]\r\n processor.tokenizer(text=prompt_text, boxes=prompt_boxes)\r\n```\r\nAnd prepend these input_ids/boxes to the input_ids/boxes that come out of the `processor`\r\n\r\n(Note that i am using apply_ocr=False)",
"Also curious how we should encode the label of a training example. Is it a part of the inputs to `UdopProcessor`?\r\n\r\nThe I-Code example appears to do it [like this](https://github.com/microsoft/i-Code/blob/main/i-Code-Doc/core/datasets/collate_supervised.py#L33)",
"thanks @dtiarks looks like a key component of that script is the [BartBeamSearchGenerator](https://github.com/huggingface/transformers/blob/main/examples/research_projects/onnx/summarization/run_onnx_exporter.py#L108) which allows you to convert it to torchscript. Will UDOP have something like this?\r\n\r\nI tried some of the naive steps I tried in [this comment](https://github.com/huggingface/transformers/pull/21239#discussion_r1129957024) for tracing this new UDOP PR. Looks like the same issues remain. Curious if we'll get a test/example of tracing/compiling/onnx exporting the model either here or in optimum?\r\n\r\n**EDIT** just a naive try at onnx export in optimum:\r\n```KeyError: \"udop is not supported yet.```\r\n\r\nAnd just for completeness, a `torch.onnx.export` gives:\r\n\r\n```shell\r\nRuntimeError: 0 INTERNAL ASSERT FAILED at \"/Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/jit/ir/alias_analysis.cpp\":621, please report a bug to PyTorch. We don't have an op for aten::full_like but it isn't a special case. Argument types: Tensor, bool, int, int, Device, bool, NoneType,\r\n\r\nCandidates:\r\n\taten::full_like(Tensor self, Scalar fill_value, *, ScalarType? dtype=None, Layout? layout=None, Device? device=None, bool? pin_memory=None, MemoryFormat? memory_format=None) -> Tensor\r\n\taten::full_like.out(Tensor self, Scalar fill_value, *, MemoryFormat? memory_format=None, Tensor(a!) out) -> Tensor(a!)\r\n```",
"@plamb-viso Here is the guide to add ONNX export support for a new architecture in Optimum: https://huggingface.co/docs/optimum/exporters/onnx/usage_guides/contribute\r\nFeel free to open a PR there and we'll help you if you encounter any issue :slightly_smiling_face: ",
"Highly anticipating this release! :) Keep up the great work",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"> This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\r\n> \r\n> Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.\r\n\r\nDefinitely still highly interested in this work",
"@ArthurZucker does https://github.com/huggingface/transformers/pull/24565 fix the remaining issues of this PR?",
"not sure it does no! The added tokens was the issue if I remember correctly ",
"Ok. The question is how we can move this PR forward? @plamb-viso, @Jordy-VL, I (and probably others) are still definitely interested in this.\r\n\r\n@NielsRogge are you aware of other issues blocking this PR or do you have other priorities at the moment?",
"My current priority is #24629, then it will be the tokenizer PR which seems to be the last blocking factor. In the mean time I think that it should be good to get all the tests green and ask for a review to make it ready for a final one! The tokenizer can be updated after wards π€ sorry for the wait π \r\n",
"No worries @ArthurZucker βΊοΈ. My comment was not meant to push anyone. I was just interested if I could contribute to speed up the process.",
"@ArthurZucker the tokenizer is the only thing left to make all tests green. The PR is ready other than that. The only issue that is remaining are the sentinel tokens that the UDOP author defined (T5 has 100 of them, UDOP a lot more). Those are actually only relevant during pre-training, not during fine-tuning. Hence the model is already perfectly usable.\r\n\r\nI can only assign core maintainers for review when the CI is more or less green, so will do that once the tokenizer issue is fixed.",
"Hi @NielsRogge, are you planning to do one of your wonderful notebook tutorials once this PR is closed? I'm rather curios on how can we approach a token-classification task with a encoder-decoder architecture such as UDOP :)",
"> Hi @NielsRogge, are you planning to do one of your wonderful notebook tutorials once this PR is closed? I'm rather curios on how can we approach a token-classification task with a encoder-decoder architecture such as UDOP :)\r\n\r\nYou can already check pix2struct ;) ",
"Ok! Let me have a second look at the tokenizer then! There are quite a few issues currently with `spm` and `AddedToken` being taken care of! ",
"You have to manually add the tokens, and that can't be done in the init with the current API, but this allows us to remove the crazy regex in encoding.\r\n",
"Eagerly anticipating this PR being merged. Is there any information on priority of this work and rough timelines? Thank you @ArthurZucker and @NielsRogge for your great work. ",
"Regarding the priority, not really sure. I won't really have time to dive deep in this before a few weeks. If a contributor wants to work on this feel free to take over! ",
"Update: we're down to 2 failing tests:\r\n```\r\nFAILED tests/models/udop/test_processor_udop.py::UdopProcessorTest::test_save_load_pretrained_default - AssertionError: {'βbacking': 16057, 'βBrunswick': 29980, 'S[629176 chars]7501} != {'<pad>': 0, '</s>': 1, '<unk>': 2, 'β': 3,[624686 chars]4401}\r\nFAILED tests/models/udop/test_tokenization_udop.py::UdopTokenizationTest::test_save_slow_from_fast_and_reload_fast - ValueError: Non-consecutive added token '('<extra_id_99>', 0.0)' found. Should have index 34602 but has index 33201 in saved vocabulary.\r\n```\r\n@ArthurZucker can you clarify how you pushed https://huggingface.co/ArthurZ/udop?",
"Eagerly anticipating this PR being merged. Hope there will be a pretrain demo too. ",
"Will have a look and try to re-upload a working tokenizer! ",
"Eagerly anticipating this PR being merged. Thanks very much for the great work!",
"How I added the tokenizer: (removed the convert token to id logic of regexes)\r\n```python \r\n>>> from transformers import UdopTokenizer\r\n>>> tokenizer = UdopTokenizer(\"ArthurZ/udop/spiece.model\")\r\n>>> tokenizer.add_tokens(tokenizer.additional_special_tokens)\r\n```\r\nthis currently gives `wrong index` issues so trying to fix now!\r\n\r\nNothing really works as expected, if we can just wait for #23909 (end of week max ETA) this will be easy! \r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Definitely still interested!",
"+1"
] | 1,682 | 1,708 | null |
CONTRIBUTOR
| null |
# What does this PR do?
This PR adds UDOP as described in [Unifying Vision, Text, and Layout for Universal Document Processing](https://arxiv.org/abs/2212.02623).
The model can be seen as an encoder-decoder Transformer with LayoutLMv3 as encoder and a T5 text decoder.
Fixes #20650
To do:
- [x] fix `tests/models/udop/test_processor_udop.py::UdopProcessorTest::test_save_load_pretrained_default`
- [x] include pytesseract decodings in processor test
- [ ] check forward signature of the model as we can't change this afterwards
- [ ] update organization to `microsoft`, replace `ArthurZ/udop` everywhere by an official UDOP checkpoint
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22940/reactions",
"total_count": 13,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 8,
"rocket": 5,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22940/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22940",
"html_url": "https://github.com/huggingface/transformers/pull/22940",
"diff_url": "https://github.com/huggingface/transformers/pull/22940.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22940.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/22939
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22939/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22939/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22939/events
|
https://github.com/huggingface/transformers/issues/22939
| 1,679,565,168 |
I_kwDOCUB6oc5kHCFw
| 22,939 |
AttributeError: 'MarianMTModel' object has no attribute 'generation_config'
|
{
"login": "kigenchesire",
"id": 121757977,
"node_id": "U_kgDOB0HhGQ",
"avatar_url": "https://avatars.githubusercontent.com/u/121757977?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kigenchesire",
"html_url": "https://github.com/kigenchesire",
"followers_url": "https://api.github.com/users/kigenchesire/followers",
"following_url": "https://api.github.com/users/kigenchesire/following{/other_user}",
"gists_url": "https://api.github.com/users/kigenchesire/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kigenchesire/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kigenchesire/subscriptions",
"organizations_url": "https://api.github.com/users/kigenchesire/orgs",
"repos_url": "https://api.github.com/users/kigenchesire/repos",
"events_url": "https://api.github.com/users/kigenchesire/events{/privacy}",
"received_events_url": "https://api.github.com/users/kigenchesire/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @kigenchesire, thanks for raising an issue! \r\n\r\nSo that we can best help you, can you make sure to follow the issue template and share: \r\n* A full traceback of the error\r\n* A code example that we can run to reproduce the error\r\n* The running environment: run `transformers-cli env` in the terminal and copy-paste the output",
"I am finetuning a translation model using pytorch \r\nafter fine-tuning and saving it using torch.save(model , 'model24.pt')\r\nwhen i try to deploy the model on streamlit using this code \r\ntokenizer = AutoTokenizer.from_pretrained(\"Helsinki-NLP/opus-mt-en-swc\")\r\ndef translate(text):\r\ntranslated = model22.generate(**tokenizer(text, return_tensors=\"pt\", padding=True).to(\"cpu\"))\r\nreturn [tokenizer.decode(t, skip_special_tokens=True) for t in translated][0\r\nI run into [MarianMTModel' object has no attribute 'generation_config']\r\nThe model does well on colab notebook",
"@kigenchesire When saving a transformers model, it's recommended to use `model.save_pretrained(checkpoint_name)`. This ensures everything, including any necessary files such as the model config are saved alongside the weights. ",
"\r\nTL;DR; `pip install transformers<4.28`\r\n\r\n\r\n---\r\n\r\nWith different project, had same error message.\r\n\r\n\r\n(Maybe unexpectedly?) It seems to break backward compatibility, so that makes issue on several project that was built on old `TrainingArguments`.\r\n\r\nIn my case, with https://github.com/alexa/massive, error log was like\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"massive/scripts/train.py\", line 102, in <module>\r\n main()\r\n File \"massive/scripts/train.py\", line 89, in main\r\n trainer = trainer_cls(\r\n File \"massive/src/massive/utils/trainer.py\", line 264, in __init__\r\n super().__init__(*args, **kwargs)\r\n File \"transformers/trainer_seq2seq.py\", line 72, in __init__\r\n if self.args.generation_config is not None:\r\nAttributeError: 'MASSIVETrainingArguments' object has no attribute 'generation_config'\r\n```\r\n\r\nI found `generation_config` concept(?) created with https://github.com/huggingface/transformers/commit/5506d0496957cde19318eee3d34ee682b654abe8, which is I think the cause of this issue.\r\n\r\nSo one who suffers from with this issue, use `transformers<4.28`, which means before the first version applied above commit.\r\n\r\nAlso I suggest `transformers` project to check whether `generation_config` attribute even exists before checking it is `None` or not, if maintainers think it is required change to keep backward compatibility or to be a bit more safer code :)\r\n\r\nhttps://github.com/huggingface/transformers/blob/46d2468695a85dfcc2be0524caa912edefcf2391/src/transformers/trainer_seq2seq.py#L72\r\n\r\nOf course user may use `Seq2SeqTrainingArguments` to use with `Seq2SeqTrainer`, not `TrainingArguments`.\r\n",
"@cgbahk It don't believe it's necessary to downgrade the transformers version. If there's a model which was created before generation configs were introduced, then you can load and resave and the generation config will be created e.g. for MarianMT \r\n\r\n```python\r\nfrom transformers import MarianMTModel\r\n\r\nmodel = MarianMTModel.from_pretrained(my_checkpoint)\r\nmodel.save_pretrained(my_checkpoint)\r\n```\r\n\r\n@gante Is this correct? Are there any other considerations for the generation config? ",
"Oh, I didn't know `MarianMTModel` is of transformer builtin. In my case, `MASSIVETrainingArguments` is custom built class which don't know about `generation_config`.\r\n\r\nI don't fully understand, transformers internal, but https://github.com/huggingface/transformers/issues/22939#issuecomment-1550749034 resolved my case. Hopefully that resonates with any other who encounters same error :)",
"@cgbahk No worries - it's a big library! And commenting on what resolves issues is useful for everyone :) \r\n\r\nTo your first comment, yes, we certainly want to make sure things are backwards compatible. In this case, it seems that the docs aren't clear. It is recommended to use `Seq2SeqTrainingArguments` for `Seq2SeqTrainer`, however the `args` input type is listed as `TrainingArguments` [here](https://huggingface.co/docs/transformers/v4.29.1/en/main_classes/trainer#transformers.Seq2SeqTrainer). For `MASSIVETrainingArguments` - subclassing from `Seq2SeqTrainingArguments` should be enough to resolve the issue if doing seq2seq training. ",
"Came here to give pretty much the same reply as @amyeroberts just did :) \r\n\r\nAlso, we can't ensure retrocompatibility when classes get overwritten, it's impossible to anticipate all changes in advance π€ ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,682 | 1,687 | 1,687 |
NONE
| null |
tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-swc")

def translate(text):
    translated = model22.generate(**tokenizer(text, return_tensors="pt", padding=True).to("cpu"))
    return [tokenizer.decode(t, skip_special_tokens=True) for t in translated][0]

Error when trying to deploy the model on Streamlit.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22939/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22939/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/22938
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22938/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22938/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22938/events
|
https://github.com/huggingface/transformers/pull/22938
| 1,679,556,853 |
PR_kwDOCUB6oc5O6ozx
| 22,938 |
num_noise_spans should be <= num_items #22246
|
{
"login": "alexcpn",
"id": 1157251,
"node_id": "MDQ6VXNlcjExNTcyNTE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1157251?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alexcpn",
"html_url": "https://github.com/alexcpn",
"followers_url": "https://api.github.com/users/alexcpn/followers",
"following_url": "https://api.github.com/users/alexcpn/following{/other_user}",
"gists_url": "https://api.github.com/users/alexcpn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alexcpn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alexcpn/subscriptions",
"organizations_url": "https://api.github.com/users/alexcpn/orgs",
"repos_url": "https://api.github.com/users/alexcpn/repos",
"events_url": "https://api.github.com/users/alexcpn/events{/privacy}",
"received_events_url": "https://api.github.com/users/alexcpn/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks- Done - Did `make style` and pushed the changes to this branch",
"Thanks @alexcpn - unfortunately the CI is still unhappy about the code style! Could you try rebasing onto main, run style fix, and then force pushing?\r\n```\r\ngit rebase upstream/main\r\nmake style\r\ngit push -f issue-22246\r\n```",
"I have rebased and checked and force-pushed. It is actually fine locally.\r\n\r\n```\r\ngit branch\r\n* issue-22246\r\n main\r\n$ make style\r\nblack examples tests src utils setup.py\r\nAll done! β¨ π° β¨\r\n2380 files left unchanged.\r\nruff examples tests src utils setup.py --fix\r\nmake: ruff: No such file or directory\r\nmake: *** [Makefile:69: style] Error 127\r\n```\r\nUnderstood the problem - I was having black 22.x and CircleCI is using 23.x - https://github.com/huggingface/transformers/pull/21480",
"@sanchit-gandhi CI is green",
"Awesome, nice find @alexcpn π Let's get a final review and get the PR merged!"
] | 1,682 | 1,683 | 1,683 |
CONTRIBUTOR
| null |
# What does this PR do?
<!-- Remove if not applicable -->
Fixes #22246
When `mean_noise_span_length` is set to 1 there are cases (for example `noise_density=.55`) where `num_noise_spans` becomes greater than `num_nonnoise_tokens`.
So the correction seems to be to also take `num_nonnoise_tokens` into account in the calculation of `num_noise_spans`:
num_noise_spans = int(np.round(min(num_noise_tokens,num_nonnoise_tokens) / self.mean_noise_span_length))
Demonstration of the buggy behaviour
https://gist.github.com/alexcpn/b9bb2b0f01833d1bb862502faf99bab8#file-t5_denoising-py
Demonstration of the possible correction
https://gist.github.com/alexcpn/b9bb2b0f01833d1bb862502faf99bab8#file-t5_denoising_corrected-py
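A small numeric illustration of the failure mode, using the formulas quoted above with values picked for illustration:
```python
import numpy as np

inputs_length = 10
noise_density = 0.55
mean_noise_span_length = 1.0

num_noise_tokens = int(np.round(inputs_length * noise_density))  # 6
num_nonnoise_tokens = inputs_length - num_noise_tokens           # 4

# Old formula: 6 noise spans, but only 4 non-noise tokens to interleave them with.
old_num_noise_spans = int(np.round(num_noise_tokens / mean_noise_span_length))

# Proposed correction: cap by the smaller of the two token counts.
new_num_noise_spans = int(np.round(min(num_noise_tokens, num_nonnoise_tokens) / mean_noise_span_length))

print(old_num_noise_spans, new_num_noise_spans)  # 6 4
```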
## Who can review?
@sanchit-gandhi
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22938/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22938/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22938",
"html_url": "https://github.com/huggingface/transformers/pull/22938",
"diff_url": "https://github.com/huggingface/transformers/pull/22938.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22938.patch",
"merged_at": 1683047251000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22937
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22937/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22937/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22937/events
|
https://github.com/huggingface/transformers/issues/22937
| 1,679,536,093 |
I_kwDOCUB6oc5kG6_d
| 22,937 |
Add return type hint to AutoModel.from_pretrained
|
{
"login": "JosephSBoyle",
"id": 48555120,
"node_id": "MDQ6VXNlcjQ4NTU1MTIw",
"avatar_url": "https://avatars.githubusercontent.com/u/48555120?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JosephSBoyle",
"html_url": "https://github.com/JosephSBoyle",
"followers_url": "https://api.github.com/users/JosephSBoyle/followers",
"following_url": "https://api.github.com/users/JosephSBoyle/following{/other_user}",
"gists_url": "https://api.github.com/users/JosephSBoyle/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JosephSBoyle/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JosephSBoyle/subscriptions",
"organizations_url": "https://api.github.com/users/JosephSBoyle/orgs",
"repos_url": "https://api.github.com/users/JosephSBoyle/repos",
"events_url": "https://api.github.com/users/JosephSBoyle/events{/privacy}",
"received_events_url": "https://api.github.com/users/JosephSBoyle/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @JosephSBoyle, thanks for raising this issue. \r\n\r\nCould you give some more information about the behaviour you expect and expand on \"we don't really know what it is without inspecting it at runtime...\" ? As I understand the issue, there's three different points being raised:\r\n* Knowing what the model \"is\"\r\n* How to easily modify the model's behaviour\r\n* IDE integration i.e. linters\r\n\r\nIs this correct? ",
"Hia @amyeroberts, apologies my writing in the original issue is a bit unclear.\r\n\r\nTo elaborate on my \"not knowing what it is\" statement, basically my point is that it's quite difficult to know the type of whatever model instance is returned when you call `from_pretrained`. The \"at runtime\" part of that statement was in reference to basically inspecting the returned instance e.g. in a breakpoint; which is what I ended up doing.\r\n\r\nI think that my points can best be summarized as: \"without the return type the `AutoModelForSequenceClassification.from_pretrained` method is difficult to use effectively.\"\r\n",
"The `AutoXxx.from_pretained(checkpoint)` API is essentially a factory method, loading the architecture / model specified by `checkpoint`. So, for AutoModelForSequenceClassification, any model which has a [sequence classification head](https://github.com/huggingface/transformers/blob/d6f1da6b7169e3b2bcc2fcdc91a19171ecafeb88/src/transformers/models/auto/modeling_auto.py#L641) can be returned. As such, there isn't a predefined type (other than being a subclass of `PreTrainedModel`). \r\n\r\nAfter loading a model, it's possible to check its class:\r\n```python\r\nIn [1]: from transformers import AutoModelForSequenceClassification\r\n\r\nIn [2]: model = AutoModelForSequenceClassification.from_pretrained(\"bert-base-uncased\")\r\n\r\nIn [3]: type(model)\r\nOut[3]: transformers.models.bert.modeling_bert.BertForSequenceClassification\r\n```\r\n\r\nSpecific model architectures can be loaded directly too: \r\n```python\r\nIn [1]: from transformers import BertForSequenceClassification\r\n\r\nIn [2]: model = BertForSequenceClassification.from_pretrained(\"bert-base-uncased\")\r\n\r\nIn [3]: type(model)\r\nOut[3]: transformers.models.bert.modeling_bert.BertForSequenceClassification\r\n```\r\n\r\nFrom the model config, it's possible to find which model architecture will be loaded e.g. [here](https://huggingface.co/bert-base-uncased/blob/0a6aa9128b6194f4f3c4db429b6cb4891cdb421b/config.json#L3) for `bert-base-uncased`. Note: for this checkpoint, all of the weights for the base model would be loaded in, the weights for the language modeling head discarded, and weights for the classification head randomly initialised. ",
"Mhmm, I understand that the return type varies based on the first arg. as you describe. Since we know that it's a subclass of `PreTrainedModel` I think we should at least add that in as the return type, something like this perhaps:\r\n\r\n```python\r\n# auto_factory.py\r\nfrom typing import TYPE_CHECKING\r\n\r\nif TYPE_CHECKING:\r\n from ...modeling_utils import PreTrainedModel\r\n\r\n...\r\n\r\ndef from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs) -> type[PreTrainedModel]:\r\n ...\r\n```\r\n\r\n**Edit:** corrected the rtype to `type[PreTrainedModel]`, as the returned type will be a subclass of this type.\r\n\r\n---------------------------------------------------\r\n\r\n\r\nYour first cell is actually what I ended up doing to find the type of the returned instance, this is actually what I meant by my earlier runtime comment.",
"Except that this type hint does not help anyone understand the result, so is it really useful to bloat the code to add it?",
"Is it not better to know that the returned instance is a `PreTrainedModel` rather than literally `Any`? @sgugger ",
"Given the fact that the class is `AutoModel`, I don't think anyone will think in good faith this will return anything else than a model.",
"I think there's some misunderstanding here, nobody thinks that this model is returning anything other than a model.\r\n\r\nThe purpose of adding a type hint is to enable things like static analysis tools to work properly, which they can't do without knowledge of the return type.\r\n\r\n\r\n\r\n",
"I think there is some misunderstanding indeed. Transformers does not support any static analysis tool like Mypy and never will, as it would require us to add type annotations that bloat the code. In all our experiments this makes the code harder to read without ever catching any useful bug.\r\n\r\nWe only use type annotations when useful for the doc (in particular seeing the signature in an IDE with type annotations when an argument's type is not obvious) but that is all.",
"I didn't ask for MyPy support, just a single type hint. Static analysis includes things like linting which are what I'm talking about.\r\n\r\nFor a single type hint you get:\r\n\r\n- Attribute and method completion for attrs of `PreTrainedModel`\r\n- Linting, which can for instance, identify when any of the methods are called with the incorrect signature.\r\n- Better readability: programmers don't have to dig through the library to figure out that no matter what the first argument to `from_pretrained` is, they will recieve a subclass of the same class.\r\n\r\nMoreover, you reduce the cognitive burden on users who don't have the entire API of `PreTrainedModel` memorized to heart.",
"Except that it's not just a single type hint. The `from_pretrained` method in `auto_factory` can either return a `PreTrainedModel`, a `TFPreTrainedModel` or a `FlaxPreTrainedModel` depending on the class it was used with. If you find a way to have a simple type hint, we will of course merge such a PR, but I don't think it's easy to add.",
"Hi @sgugger, thank you for the explanation - I wasn't aware that there were multiple possible return types. "
] | 1,682 | 1,684 | 1,684 |
NONE
| null |
### Feature request
I think the ergonomics of using, e.g. `AutoModelForSequenceClassification.from_pretrained(...)` can be improved.
Consider the following example:
```python
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")
```
It's quite hard to reason about `model` since we don't really know _what_ it is without inspecting it at runtime...
More concretely, I wanted to freeze BERT's internal parameters but not those of the classifier layer.
### Motivation
The productivity of developers using the automodel API can be improved; code can be checked by linters more thoroughly etc.
### Your contribution
I'd like to open a PR, but I'm not sure what the best return type is for `AutoModelForSequenceClassification.from_pretrained`.
If we could discuss this here and reach some sort of consensus that this is desirable I will draft a pull request.
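As a user-side sketch (not a proposed library change), one way to get linter/IDE support today is to narrow the type manually; the checkpoint and the `classifier` attribute prefix below are only illustrative assumptions:
```python
from typing import cast

from transformers import AutoModelForSequenceClassification, PreTrainedModel

model = cast(
    PreTrainedModel,
    AutoModelForSequenceClassification.from_pretrained("bert-base-uncased"),
)

# The type checker now knows about PreTrainedModel / nn.Module methods.
for name, param in model.named_parameters():
    if not name.startswith("classifier"):
        param.requires_grad = False  # freeze everything except the classification head
```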
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22937/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
}
|
https://api.github.com/repos/huggingface/transformers/issues/22937/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/22936
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22936/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22936/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22936/events
|
https://github.com/huggingface/transformers/pull/22936
| 1,679,504,647 |
PR_kwDOCUB6oc5O6ear
| 22,936 |
Avoid invalid escape sequences, use raw strings
|
{
"login": "Lingepumpe",
"id": 10073831,
"node_id": "MDQ6VXNlcjEwMDczODMx",
"avatar_url": "https://avatars.githubusercontent.com/u/10073831?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Lingepumpe",
"html_url": "https://github.com/Lingepumpe",
"followers_url": "https://api.github.com/users/Lingepumpe/followers",
"following_url": "https://api.github.com/users/Lingepumpe/following{/other_user}",
"gists_url": "https://api.github.com/users/Lingepumpe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Lingepumpe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Lingepumpe/subscriptions",
"organizations_url": "https://api.github.com/users/Lingepumpe/orgs",
"repos_url": "https://api.github.com/users/Lingepumpe/repos",
"events_url": "https://api.github.com/users/Lingepumpe/events{/privacy}",
"received_events_url": "https://api.github.com/users/Lingepumpe/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Failures are unrelated to this PR and due to the last release of huggingface_hub. All is fixed on main so merging :-)",
"Rebased on top of current main branch to pass the tests.\r\n\r\n",
"> Rebased on top of current main branch to pass the tests.\r\n\r\nDoes not seem to help the tests, let me know if I should do anything else for this PR",
"The Hub is currently having high-response times due to some abusive traffic, which is why the tests are all red. I just forgot to push the merge button yesterday, so merging this as it shouldn't have any negative impact on main."
] | 1,682 | 1,682 | 1,682 |
CONTRIBUTOR
| null |
# What does this PR do?
Fixes invalid escape sequences in strings: they are deprecated in Python, throw a SyntaxError if run with "-W error", and will always throw a SyntaxError starting with Python 3.12 (planned). With Python between 3.6 and 3.11 and "-W default" they produce a DeprecationWarning:
```
> python -W error -c '"(.*?)-\d{5}-of-\d{5}"'
  File "<string>", line 1
"(.*?)-\d{5}-of-\d{5}"
^^^^^^^^^^^^^^^^^^^^^^
SyntaxError: invalid escape sequence '\d'
```
This has been fixed in the past, e.g. https://github.com/huggingface/transformers/pull/4924 - but missing linter support and the fact that python only fails this with "-W" flag set has let the issue be re-introduced. This PR fixes those occurrences and enables ruff "W605" error, which will prevent this for the future.
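For illustration (not taken from the PR diff), the kind of change W605 pushes towards is using raw strings for regex patterns:
```python
import re

# "(.*?)-\d{5}-of-\d{5}" written as a plain (non-raw) string triggers W605:
# "\d" is an invalid escape sequence. A raw string passes the backslash through unchanged.
pattern = r"(.*?)-\d{5}-of-\d{5}"

assert re.match(pattern, "pytorch_model-00001-of-00002.bin")
```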
## Who can review?
Maybe
@sgugger, who added ruff in the first place (why was "W605" disabled when switching to ruff?), or
@patrickvonplaten, who recently introduced some invalid escape sequences, or
@LysandreJik who merged the linked fix from 2020
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22936/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22936/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22936",
"html_url": "https://github.com/huggingface/transformers/pull/22936",
"diff_url": "https://github.com/huggingface/transformers/pull/22936.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22936.patch",
"merged_at": 1682428676000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22935
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22935/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22935/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22935/events
|
https://github.com/huggingface/transformers/issues/22935
| 1,679,245,241 |
I_kwDOCUB6oc5kFz-5
| 22,935 |
[Doc] `add_special_tokens`'s documentation is ambiguous
|
{
"login": "zplizzi",
"id": 5598968,
"node_id": "MDQ6VXNlcjU1OTg5Njg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5598968?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zplizzi",
"html_url": "https://github.com/zplizzi",
"followers_url": "https://api.github.com/users/zplizzi/followers",
"following_url": "https://api.github.com/users/zplizzi/following{/other_user}",
"gists_url": "https://api.github.com/users/zplizzi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zplizzi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zplizzi/subscriptions",
"organizations_url": "https://api.github.com/users/zplizzi/orgs",
"repos_url": "https://api.github.com/users/zplizzi/repos",
"events_url": "https://api.github.com/users/zplizzi/events{/privacy}",
"received_events_url": "https://api.github.com/users/zplizzi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false |
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
] |
[
"The `add_special_tokens`, when set to `True` is used to add special tokens at the beginning and at the end of the input sequence. In your case, since you are using a single input sequence, the tokenizer will add the special tokens `[CLS]` and `[SEP]` respectively at the beginning and at the end of the sentence. \r\n\r\nNote that not all tokenizers support adding special tokens. If a tokenizer does not support adding special tokens, setting `add_special_tokens` to `True` will have no effect.\r\n\r\nYou are using the \"**EleutherAI/pythia-70m**\" tokenizer which does not have a specific token for `[CLS]` and `[SEP]`. These tokens are represented by the `bos_token` and `eos_token`, respectively. Hence, the output you are seeing is correct and corresponds to the tokenized input sequence with the added special tokens.\r\n\r\nIf you want to add `[CLS]` and `[SEP]` tokens to your input sequence using this tokenizer, you can do so by explicitly specifying the token IDs for these tokens, like this:\r\n\r\n```python\r\ninput_ids = tok.encode(\"the dog walked\", add_special_tokens=False)\r\ninput_ids = [tok.bos_token_id] + input_ids + [tok.eos_token_id]\r\nattention_mask = [1] * len(input_ids)\r\noutput = {\"input_ids\": input_ids, \"attention_mask\": attention_mask}\r\nprint(output)\r\n```",
"Thanks for explaining. Can this behavior be added to the docs for the transformer tokenizer class? Nowhere on the [API docs](https://huggingface.co/docs/transformers/main_classes/tokenizer#transformers.PreTrainedTokenizer) does it say that `add_special_tokens=True` will add the cls and sep tokens. One might naturally assume that BOS and EOS would be the natural ones to place before and after a sequence!",
"You can also define these tokens when initialising the model or after. `tokenizer.cls_token = \"[CLS]\"` should be working. I agree that the doc should be clearer. Thanks for reporting the confusion \r\n",
"I am waiting until the added tokens refactoring is finish to make sure this is fixed, and update the doc! "
] | 1,682 | 1,695 | 1,695 |
NONE
| null |
### System Info
- `transformers` version: 4.28.1
- Platform: Linux-5.15.0-52-generic-x86_64-with-glibc2.31
- Python version: 3.9.5
- Huggingface_hub version: 0.13.2
- Safetensors version: not installed
- PyTorch version (GPU?): 1.13.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
from transformers import AutoTokenizer
tok = AutoTokenizer.from_pretrained("EleutherAI/pythia-70m")
print(tok.bos_token)
print(tok.eos_token)
print(tok.bos_token_id)
print(tok.eos_token_id)
print(tok("the dog walked", add_special_tokens=True))
```
outputs
```
<|endoftext|>
<|endoftext|>
0
0
{'input_ids': [783, 4370, 7428], 'attention_mask': [1, 1, 1]}
```
### Expected behavior
I expect it to output `[0, 783, 4370, 7428, 0]`. Or am I misunderstanding what `add_special_tokens` is supposed to do?
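For comparison, a minimal sketch (the BERT checkpoint is used purely for illustration) of a tokenizer that does insert special tokens with `add_special_tokens=True`, versus the Pythia tokenizer, which defines bos/eos but does not add them automatically:
```python
from transformers import AutoTokenizer

bert_tok = AutoTokenizer.from_pretrained("bert-base-uncased")
print(bert_tok("the dog walked", add_special_tokens=True)["input_ids"])
# output starts with 101 ([CLS]) and ends with 102 ([SEP])

pythia_tok = AutoTokenizer.from_pretrained("EleutherAI/pythia-70m")
print(pythia_tok("the dog walked", add_special_tokens=True)["input_ids"])
# [783, 4370, 7428] -- no bos/eos ids are prepended or appended
```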
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22935/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22935/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/22934
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22934/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22934/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22934/events
|
https://github.com/huggingface/transformers/issues/22934
| 1,679,241,844 |
I_kwDOCUB6oc5kFzJ0
| 22,934 |
LLM finetuning is overfitting?
|
{
"login": "paulcx",
"id": 738834,
"node_id": "MDQ6VXNlcjczODgzNA==",
"avatar_url": "https://avatars.githubusercontent.com/u/738834?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/paulcx",
"html_url": "https://github.com/paulcx",
"followers_url": "https://api.github.com/users/paulcx/followers",
"following_url": "https://api.github.com/users/paulcx/following{/other_user}",
"gists_url": "https://api.github.com/users/paulcx/gists{/gist_id}",
"starred_url": "https://api.github.com/users/paulcx/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/paulcx/subscriptions",
"organizations_url": "https://api.github.com/users/paulcx/orgs",
"repos_url": "https://api.github.com/users/paulcx/repos",
"events_url": "https://api.github.com/users/paulcx/events{/privacy}",
"received_events_url": "https://api.github.com/users/paulcx/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hard to really tell without specific dataset info, training procedure, and the model parameter count BUT:\r\n\r\nI can't speak for your other attempts but this picture doesn't seem unusual. The eval loss decreases until epoch=0.94 but increases at epoch=1.25 and onwards. That implies that training is good for one epoch. Depending on the size of the dataset, models can easily start overfitting after one finetuning epoch (since it's just repeating the data). I assume this is finetuning, not pretraining?\r\n\r\nFinetuning with adapters may work better.\r\n",
"Hi, @paulcx thanks for raising an issue! \r\n\r\nThis is probably a question best placed in our [forums](https://discuss.huggingface.co/). We try to reserve the github issues for feature requests and bug reports. If you suspect that the issue is coming from the library itself, could you follow the issue template and give more information about what is being run (environment and reproducible code snippet) so that we can best help you? ",
"> Hard to really tell without specific dataset info, training procedure, and the model parameter count BUT:\n> \n> \n> \n> I can't speak for your other attempts but this picture doesn't seem unusual. The eval loss decreases until epoch=0.94 but increases at epoch=1.25 and onwards. That implies that training is good for one epoch. Depending on the size of the dataset, models can easily start overfitting after one finetuning epoch (since it's just repeating the data). I assume this is finetuning, not pretraining?\n> \n> \n> \n> Finetuning with adapters may work better.\n> \n> \n\nThat's right. I'm trying finetuning. I knew pretraining and Lora finetuning works as expected. I just wonder if anyone have same issue. Does that mean one epoch is about overfitting? I saw a lot of open source projects and they finetuned 3 or 4 epoches with no explanation.",
"> That's right. I'm trying finetuning. I knew pretraining and Lora finetuning works as expected. I just wonder if anyone have same issue. Does that mean one epoch is about overfitting? I saw a lot of open source projects and they finetuned 3 or 4 epoches with no explanation.\r\n\r\nYes, one epoch seems to be enough for this run. Going any further would likely require hyperparameter tuning and/or a larger dataset. Some of my models also begin overfitting after one finetuning epoch (around ~900k samples in my dataset - I don't know how large your dataset is).\r\n\r\nOther projects may be using a different/larger dataset? Even if not, that's not too uncommon. They can finetune for a few more epochs than needed and then evaluate their checkpoints on a test set. The best performing checkpoint is then selected (which could be from a few epochs prior to the latest).",
"> > That's right. I'm trying finetuning. I knew pretraining and Lora finetuning works as expected. I just wonder if anyone have same issue. Does that mean one epoch is about overfitting? I saw a lot of open source projects and they finetuned 3 or 4 epoches with no explanation.\n> \n> \n> \n> Yes, one epoch seems to be enough for this run. Going any further would likely require hyperparameter tuning and/or a larger dataset. Some of my models also begin overfitting after one finetuning epoch (around ~900k samples in my dataset - I don't know how large your dataset is).\n> \n> \n> \n> Other projects may be using a different/larger dataset? Even if not, that's not too uncommon. They can finetune for a few more epochs than needed and then evaluate their checkpoints on a test set. The best performing checkpoint is then selected (which could be from a few epochs prior to the latest).\n\nmy dataset is only about 90K samples. one epoch 'theory' is quite interesting. It seems that people does not talk about this issue but ignoring overfitting.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,682 | 1,685 | 1,685 |
NONE
| null |
So far, all my attempts with different models (BLOOM, GPT), sizes, the Accelerate framework, and datasets have led to the same issue: the evaluation loss keeps increasing. Please see my log (DeepSpeed).
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22934/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22934/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/22933
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22933/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22933/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22933/events
|
https://github.com/huggingface/transformers/issues/22933
| 1,679,138,733 |
I_kwDOCUB6oc5kFZ-t
| 22,933 |
Flan-T5-small and T5-small have different number of layers?
|
{
"login": "taidnguyen",
"id": 16988147,
"node_id": "MDQ6VXNlcjE2OTg4MTQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/16988147?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/taidnguyen",
"html_url": "https://github.com/taidnguyen",
"followers_url": "https://api.github.com/users/taidnguyen/followers",
"following_url": "https://api.github.com/users/taidnguyen/following{/other_user}",
"gists_url": "https://api.github.com/users/taidnguyen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/taidnguyen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/taidnguyen/subscriptions",
"organizations_url": "https://api.github.com/users/taidnguyen/orgs",
"repos_url": "https://api.github.com/users/taidnguyen/repos",
"events_url": "https://api.github.com/users/taidnguyen/events{/privacy}",
"received_events_url": "https://api.github.com/users/taidnguyen/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @younesbelkada ",
"@sgugger Ah perhaps the authors inherited the flan-t5-small checkpoint from the improved `google/t5-v1_1-small` instead of `t5-small`. Its config file is a better match.",
"Hi @taidnguyen,\r\nThis is absolutely correct, \r\nAccording to the `t5x` repository, flan-t5 are derived from the `t5-v1_1` family as their config files refer to the config files of the `t5-v1_1` models. This information can be found on the original repository that hosts the original flan-t5 weights: https://github.com/google-research/t5x/blob/main/docs/models.md#flan-t5-checkpoints \r\nHope this helps!",
"@younesbelkada That helps - thank you! They documented this in their repo (as you show) but not in their paper, so definitely a surprise to me assuming that the inherited checkpoint of Flan-T5 is the original T5. I will close the issue."
] | 1,682 | 1,682 | 1,682 |
NONE
| null |
### System Info
Hi there, it appears that `google/flan-t5-small` and `t5-small` have a different number of layers:
- Flan-T5 config: https://huggingface.co/google/flan-t5-small/blob/main/config.json
- T5 config: https://huggingface.co/t5-small/blob/main/config.json
I only find this inconsistency with *small*. The rest of the sizes (base/large/3b/11b) seem to match up for these two sets of models. I have not been able to find much information on this.
Is there a reason for Flan-T5-small to have more layers than its non-instruction-tuned counterpart? I assume they should be equal. Thank you!
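A quick way to check this, sketched below (attribute names follow `T5Config`; the middle checkpoint is included because of the `t5-v1_1` lineage discussed in the comments above):
```python
from transformers import AutoConfig

for name in ["t5-small", "google/t5-v1_1-small", "google/flan-t5-small"]:
    cfg = AutoConfig.from_pretrained(name)
    print(name, cfg.num_layers, cfg.num_decoder_layers)
```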
### Who can help?
CC: @sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22933/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22933/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/22932
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22932/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22932/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22932/events
|
https://github.com/huggingface/transformers/issues/22932
| 1,679,059,945 |
I_kwDOCUB6oc5kFGvp
| 22,932 |
LlamaTokenizer should follow signature of PreTrainedTokenizer
|
{
"login": "zplizzi",
"id": 5598968,
"node_id": "MDQ6VXNlcjU1OTg5Njg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5598968?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zplizzi",
"html_url": "https://github.com/zplizzi",
"followers_url": "https://api.github.com/users/zplizzi/followers",
"following_url": "https://api.github.com/users/zplizzi/following{/other_user}",
"gists_url": "https://api.github.com/users/zplizzi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zplizzi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zplizzi/subscriptions",
"organizations_url": "https://api.github.com/users/zplizzi/orgs",
"repos_url": "https://api.github.com/users/zplizzi/repos",
"events_url": "https://api.github.com/users/zplizzi/events{/privacy}",
"received_events_url": "https://api.github.com/users/zplizzi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The default `eos_token` and `bos_tokens` are there because the `sentence piece` model has these set, which means we are following the `llama` implementation. Having `add_eos` and `add_beo` gives the flexibility of enabling the addition, while not having to set the tokens to do so. \r\nThis might not fit your specific usage, but most of our tokenizer work that way! \r\nI am not sure I understand why it would break in your case, but you can easily set the `eos` and `bos` to `None`, the same goes with the `add_eos` and `add_bos` that you can set to `False`",
"This is even more confusing after I was told that the normal transformers tokenizers add CLS and SEP to sequences by default when `add_special_tokens=True`, but in this class, you're (optionally) instead adding BOS and EOS instead.\r\n\r\nSimply setting eos/bos to None works, but a user isn't expecting to have to do this to get behavior that's compatible with the base class. And tokenization bugs tend to be very subtle and take a lot of time to track down - this issue didn't crash my code, it just silently inserted an extra token (which caused havoc downstream). The point of inheritance is that a subclass should have the same public interface as the parent, so that a user just has to conform with the interface of the parent and can expect all subclasses to just work. This isn't the case here. ",
"Thanks for educating me on inheritance, I understand your use-case and how this can be confusing. The problem is that in order to keep the information of the content of these tokens, while not necessarily adding them prevents us from setting them to `None`. Indeed this breaks your code, but it also allows people to use the tokenizer in another way. They can decide whether to add or not the eos and bos depending on the usage. \r\n\r\nOverall it's a design choice and lots of other tokenizer don't respect this either. I am sorry that it broke your pipeline.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,682 | 1,687 | 1,687 |
NONE
| null |
### Feature request
PreTrainedTokenizer has a signature such that if the eos/bos tokens shouldn't be applied, they're set to None in the constructor. LlamaTokenizer (which is a subclass of PreTrainedTokenizer) instead always sets these, and adds `add_bos_token`/`add_eos_token` fields to enable/disable them. This breaks code that depended on the behavior of the base class to detect how to form sequences, e.g. when doing custom tokenizations using `tokenizer(stuff, add_special_tokens=False)` to build pieces of a sequence and then manually adding the EOS/BOS tokens.
cc @zphang @ArthurZucker from git blame
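A minimal sketch of the current workaround (the tokenizer path is a placeholder; `add_bos_token`/`add_eos_token` are the LlamaTokenizer flags described above):
```python
from transformers import LlamaTokenizer

tok = LlamaTokenizer.from_pretrained("path/to/llama-tokenizer")  # placeholder path

# Keep the tokens defined but stop the automatic insertion...
tok.add_bos_token = False
tok.add_eos_token = False

# ...and add them explicitly when assembling a sequence from custom pieces.
piece = tok("some chunk of text", add_special_tokens=False)["input_ids"]
sequence = [tok.bos_token_id] + piece + [tok.eos_token_id]
```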
### Motivation
-
### Your contribution
-
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22932/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22932/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/22931
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22931/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22931/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22931/events
|
https://github.com/huggingface/transformers/issues/22931
| 1,679,044,939 |
I_kwDOCUB6oc5kFDFL
| 22,931 |
Using decoder_input_ids with Seq2SeqTrainer.predict()
|
{
"login": "zhenduow",
"id": 30311623,
"node_id": "MDQ6VXNlcjMwMzExNjIz",
"avatar_url": "https://avatars.githubusercontent.com/u/30311623?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zhenduow",
"html_url": "https://github.com/zhenduow",
"followers_url": "https://api.github.com/users/zhenduow/followers",
"following_url": "https://api.github.com/users/zhenduow/following{/other_user}",
"gists_url": "https://api.github.com/users/zhenduow/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zhenduow/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zhenduow/subscriptions",
"organizations_url": "https://api.github.com/users/zhenduow/orgs",
"repos_url": "https://api.github.com/users/zhenduow/repos",
"events_url": "https://api.github.com/users/zhenduow/events{/privacy}",
"received_events_url": "https://api.github.com/users/zhenduow/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @sgugger \r\n",
"cc @gante ",
"Hey @zhenduow π \r\n\r\n[This PR](https://github.com/huggingface/transformers/pull/22772), which allows passing `decoder_input_ids` as part of the input to the `Seq2SeqTrainer`, was merged after the latest release (`v4.28`).\r\n\r\nCould you try installing from `main` (`pip install --upgrade git+https://github.com/huggingface/transformers.git`), and check whether it works correctly on your use case? :)",
"> Hey @zhenduow π\r\n> \r\n> [This PR](https://github.com/huggingface/transformers/pull/22772), which allows passing `decoder_input_ids` as part of the input to the `Seq2SeqTrainer`, was merged after the latest release (`v4.28`).\r\n> \r\n> Could you try installing from `main` (`pip install --upgrade git+https://github.com/huggingface/transformers.git`), and check whether it works correctly on your use case? :)\r\n\r\nHi @gante ,\r\n\r\nThank you very much for the reply! I have checked the PR and I have a further question.\r\nI pass the `decoder_input_ids` to `model.generate()` by `inputs['decoder_input_ids']` within `Seq2SeqTrainer`, is that right?\r\nBy doing this, I need to batch the `decoder_input_ids` to a tensor, which requires padding or truncating my `decoder_input_ids`. However, my generation task has various length of `decoder_input_ids`, which causes error when batching `decoder_input_ids` into a tensor. \r\nFor example, my `decoder_input_ids` looks like:\r\n[\r\n [1,2,3],\r\n [4,5],\r\n [6]\r\n]\r\nIt cannot create a tensor because the lengths of the three lists do not match. \r\nIs there a way to solve this problem? Thank you very much!",
"@zhenduow you probably need to pad `decoder_input_ids` -- see [this guide](https://huggingface.co/docs/transformers/main/en/pad_truncation)\r\n\r\nBTW, as per our [issues guidelines](https://github.com/huggingface/transformers/blob/main/ISSUES.md), we reserve GitHub issues for bugs in the repository and/or feature requests. For any other matters, we'd like to invite you to use our [forum](https://discuss.huggingface.co/) π€",
"> @zhenduow you probably need to pad `decoder_input_ids` -- see [this guide](https://huggingface.co/docs/transformers/main/en/pad_truncation)\r\n> \r\n> BTW, as per our [issues guidelines](https://github.com/huggingface/transformers/blob/main/ISSUES.md), we reserve GitHub issues for bugs in the repository and/or feature requests. For any other matters, we'd like to invite you to use our [forum](https://discuss.huggingface.co/) π€\r\n\r\nThank you! I should ask this in the forum.",
"> Hey @zhenduow π\r\n> \r\n> [This PR](https://github.com/huggingface/transformers/pull/22772), which allows passing `decoder_input_ids` as part of the input to the `Seq2SeqTrainer`, was merged after the latest release (`v4.28`).\r\n> \r\n> Could you try installing from `main` (`pip install --upgrade git+https://github.com/huggingface/transformers.git`), and check whether it works correctly on your use case? :)\r\n\r\n\r\n> @zhenduow you probably need to pad `decoder_input_ids` -- see [this guide](https://huggingface.co/docs/transformers/main/en/pad_truncation)\r\n> \r\n> BTW, as per our [issues guidelines](https://github.com/huggingface/transformers/blob/main/ISSUES.md), we reserve GitHub issues for bugs in the repository and/or feature requests. For any other matters, we'd like to invite you to use our [forum](https://discuss.huggingface.co/) π€\r\n\r\nThank you! I solved the tensor problem with padding and got results. \r\nHowever, my results do not start with the `decoder_input_ids`. I want to double check in case this is a bug that: \r\nDo I need to pass any additional argument to `Seq2SeqTrainer` (which will tell the decoder to start with the given ids) besides adding `decoder_input_ids` as a key in the dataset dictionary? \r\n",
"Try passing `labels` and `decoder_input_ids`: if my memory is correct, the former will be used to obtain the evaluation metrics, and the later as the prompt for the decoder",
"> Try passing `labels` and `decoder_input_ids`: if my memory is correct, the former will be used to obtain the evaluation metrics, and the later as the prompt for the decoder\r\n\r\nThank you for the suggestion! \r\n\r\nI try to pass the `decoder_input_ids` to the forward function, but because I use trainer, I don't have control over the `model()` function. I only can add `decoder_input_ids` as a key in the model input dictionary. That does not seem to work. \r\n\r\nI dive into the code and find that there is this line of code in the `predict()` in `trainer.py`:\r\n\r\nhttps://github.com/huggingface/transformers/blob/15f260a82f98788354d55cb2788e9f0b5131fb77/src/transformers/trainer.py#LL3101C1-L3101C1\r\n\r\n`test_dataloader = self.get_test_dataloader(test_dataset)` \r\n\r\nThis line of code changes my `test_dataset['decoder_input_ids']` from my custom decoder prompts to shifted `labels`.\r\n\r\nCan you please check if this is intended or a bug? Why is this the case?",
"I was not sure of the behavior, it seems my memory was incorrect :) Alternatively, this one will work for sure: you can set `forced_decoder_ids` ([docs](https://huggingface.co/docs/transformers/main/en/main_classes/text_generation#transformers.GenerationConfig.forced_decoder_ids)), which will force the tokens you specify in the position you define. You can use it to force a starting sequence, assuming it is the same for all members of the batch.",
"Thanks! Can you please explain how I can use `forced_decoder_ids` with `trainer`? \r\nIt seems like I cannot call the `generate()` function anywhere, only the `model()` function. \r\nCan I use `forced_decoder_ids` with `model()`? ",
"@zhenduow you can define a generation config ([docs 1](https://huggingface.co/docs/transformers/main/en/main_classes/text_generation#transformers.GenerationConfig) [docs 2](https://huggingface.co/docs/transformers/main/en/generation_strategies#default-text-generation-configuration)) and pass it to the trainer (see [here](https://github.com/huggingface/transformers/blob/main/src/transformers/training_args_seq2seq.py#L47)). \r\n\r\nIf you parameterize `forced_decoder_ids` in the generation config, it will be passed to `.generate` at evaluation time",
"> @zhenduow you can define a generation config ([docs 1](https://huggingface.co/docs/transformers/main/en/main_classes/text_generation#transformers.GenerationConfig) [docs 2](https://huggingface.co/docs/transformers/main/en/generation_strategies#default-text-generation-configuration)) and pass it to the trainer (see [here](https://github.com/huggingface/transformers/blob/main/src/transformers/training_args_seq2seq.py#L47)).\r\n> \r\n> If you parameterize `forced_decoder_ids` in the generation config, it will be passed to `.generate` at evaluation time\r\n\r\nI did as you suggested and printed:\r\n`print(trainer.model.generation_config)`\r\n, which shows me that \r\n```\r\nGenerationConfig {\r\n \"_from_model_config\": true,\r\n \"decoder_start_token_id\": 0,\r\n \"eos_token_id\": 1,\r\n \"forced_decoder_ids\": [\r\n [\r\n 1,\r\n 123\r\n ]\r\n ],\r\n \"pad_token_id\": 0,\r\n \"transformers_version\": \"4.29.0.dev0\"\r\n}\r\n```\r\nThe [1,123] is for testing.\r\nHowever, the generation is still the same as before. Is there anything wrong here?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,682 | 1,686 | 1,686 |
NONE
| null |
Hi,
Is there a way to use `decoder_input_ids` in `Seq2SeqTrainer.predict()` as with `model.generate()`? The goal is to generate sentences from both the encoder input and a decoder input that initializes the generation.
Thank you very much!
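A minimal sketch, assuming a transformers version whose `Seq2SeqTrainingArguments` accepts a `generation_config`, of forcing a fixed decoder prefix at prediction time (model, output dir and dataset are placeholders):
```python
from transformers import (
    AutoModelForSeq2SeqLM,
    GenerationConfig,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

gen_config = GenerationConfig.from_model_config(model.config)
gen_config.forced_decoder_ids = [[1, 123]]  # force token id 123 at decoder position 1 (example)

args = Seq2SeqTrainingArguments(
    output_dir="out",
    predict_with_generate=True,
    generation_config=gen_config,
)

trainer = Seq2SeqTrainer(model=model, args=args)
# predictions = trainer.predict(test_dataset)  # test_dataset: your tokenized dataset
```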
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22931/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22931/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/22930
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22930/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22930/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22930/events
|
https://github.com/huggingface/transformers/pull/22930
| 1,679,036,076 |
PR_kwDOCUB6oc5O460s
| 22,930 |
vilt_model
|
{
"login": "sushmanthreddy",
"id": 73489688,
"node_id": "MDQ6VXNlcjczNDg5Njg4",
"avatar_url": "https://avatars.githubusercontent.com/u/73489688?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sushmanthreddy",
"html_url": "https://github.com/sushmanthreddy",
"followers_url": "https://api.github.com/users/sushmanthreddy/followers",
"following_url": "https://api.github.com/users/sushmanthreddy/following{/other_user}",
"gists_url": "https://api.github.com/users/sushmanthreddy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sushmanthreddy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sushmanthreddy/subscriptions",
"organizations_url": "https://api.github.com/users/sushmanthreddy/orgs",
"repos_url": "https://api.github.com/users/sushmanthreddy/repos",
"events_url": "https://api.github.com/users/sushmanthreddy/events{/privacy}",
"received_events_url": "https://api.github.com/users/sushmanthreddy/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks!"
] | 1,682 | 1,682 | 1,682 |
CONTRIBUTOR
| null |
# What does this PR do?
As per issue #22561, model parallelism is implemented for the ViLT model.
@sgugger, please review it; if any changes are needed, please let me know.
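Not taken from this PR's diff, but a usage sketch of what model-parallel support typically enables (assumes `accelerate` is installed; the checkpoint is illustrative):
```python
from transformers import ViltForQuestionAnswering, ViltProcessor

processor = ViltProcessor.from_pretrained("dandelin/vilt-b32-finetuned-vqa")
model = ViltForQuestionAnswering.from_pretrained(
    "dandelin/vilt-b32-finetuned-vqa",
    device_map="auto",  # shards the model across the available devices
)
```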
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22930/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22930/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22930",
"html_url": "https://github.com/huggingface/transformers/pull/22930",
"diff_url": "https://github.com/huggingface/transformers/pull/22930.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22930.patch",
"merged_at": 1682121686000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22929
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22929/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22929/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22929/events
|
https://github.com/huggingface/transformers/issues/22929
| 1,679,010,765 |
I_kwDOCUB6oc5kE6vN
| 22,929 |
SAM example code does not work
|
{
"login": "YubinXie",
"id": 16257776,
"node_id": "MDQ6VXNlcjE2MjU3Nzc2",
"avatar_url": "https://avatars.githubusercontent.com/u/16257776?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/YubinXie",
"html_url": "https://github.com/YubinXie",
"followers_url": "https://api.github.com/users/YubinXie/followers",
"following_url": "https://api.github.com/users/YubinXie/following{/other_user}",
"gists_url": "https://api.github.com/users/YubinXie/gists{/gist_id}",
"starred_url": "https://api.github.com/users/YubinXie/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/YubinXie/subscriptions",
"organizations_url": "https://api.github.com/users/YubinXie/orgs",
"repos_url": "https://api.github.com/users/YubinXie/repos",
"events_url": "https://api.github.com/users/YubinXie/events{/privacy}",
"received_events_url": "https://api.github.com/users/YubinXie/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hello @YubinXie \r\nThanks for the issue! \r\nI did not managed to reproduce your issue with `torch==1.13.1`, and here is the snippet I used:\r\n```python\r\nfrom PIL import Image\r\nimport requests\r\nimport torch\r\n\r\nfrom transformers import AutoModel, AutoProcessor\r\n\r\ndevice = \"cuda\" if torch.cuda.is_available() else \"cpu\"\r\n\r\nmodel = AutoModel.from_pretrained(\"facebook/sam-vit-base\").to(device)\r\nprocessor = AutoProcessor.from_pretrained(\"facebook/sam-vit-base\")\r\n\r\nimg_url = \"https://huggingface.co/ybelkada/segment-anything/resolve/main/assets/car.png\"\r\nraw_image = Image.open(requests.get(img_url, stream=True).raw).convert(\"RGB\")\r\ninput_points = [[[450, 600]]] # 2D location of a window in the image\r\n\r\ninputs = processor(raw_image, input_points=input_points, return_tensors=\"pt\").to(device)\r\nwith torch.no_grad():\r\n outputs = model(**inputs)\r\n```\r\nI can see that you are using `torch==1.5.x`. Note that `transformers` has a minimum required version of `1.9` for `torch`: https://github.com/huggingface/transformers/blob/main/setup.py#L180 - hence I have tried to run that script with `torch==1.9.1` and did not encountered the issue. I strongly recommend you to install a greater version of `torch` (i.e. use at least the version `1.9`). Could you try to update `torch` and let us know if you still face the issue?",
"> Hello @YubinXie Thanks for the issue! I did not managed to reproduce your issue with `torch==1.13.1`, and here is the snippet I used:\r\n> \r\n> ```python\r\n> from PIL import Image\r\n> import requests\r\n> import torch\r\n> \r\n> from transformers import AutoModel, AutoProcessor\r\n> \r\n> device = \"cuda\" if torch.cuda.is_available() else \"cpu\"\r\n> \r\n> model = AutoModel.from_pretrained(\"facebook/sam-vit-base\").to(device)\r\n> processor = AutoProcessor.from_pretrained(\"facebook/sam-vit-base\")\r\n> \r\n> img_url = \"https://huggingface.co/ybelkada/segment-anything/resolve/main/assets/car.png\"\r\n> raw_image = Image.open(requests.get(img_url, stream=True).raw).convert(\"RGB\")\r\n> input_points = [[[450, 600]]] # 2D location of a window in the image\r\n> \r\n> inputs = processor(raw_image, input_points=input_points, return_tensors=\"pt\").to(device)\r\n> with torch.no_grad():\r\n> outputs = model(**inputs)\r\n> ```\r\n> \r\n> I can see that you are using `torch==1.5.x`. Note that `transformers` has a minimum required version of `1.9` for `torch`: https://github.com/huggingface/transformers/blob/main/setup.py#L180 - hence I have tried to run that script with `torch==1.9.1` and did not encountered the issue. I strongly recommend you to install a greater version of `torch` (i.e. use at least the version `1.9`). Could you try to update `torch` and let us know if you still face the issue?\r\n\r\nHi @younesbelkada Thank you for your response. I updated my torch and now the model works!\r\nHowever, I got another error the the post process:\r\n```\r\n---------------------------------------------------------------------------\r\nTypeError Traceback (most recent call last)\r\n<ipython-input-6-abdc2d7068b8> in <module>\r\n 6 outputs = model(**inputs)\r\n 7 \r\n----> 8 masks = processor.image_processor.post_process_masks(\r\n 9 outputs.pred_masks.cpu(), inputs[\"original_sizes\"].cpu(), inputs[\"reshaped_input_sizes\"].cpu()\r\n 10 )\r\n\r\n~/miniconda3/envs/pytorch/lib/python3.8/site-packages/transformers/models/sam/image_processing_sam.py in post_process_masks(self, masks, original_sizes, reshaped_input_sizes, mask_threshold, binarize, pad_size)\r\n 404 interpolated_mask = F.interpolate(masks[i], target_image_size, mode=\"bilinear\", align_corners=False)\r\n 405 interpolated_mask = interpolated_mask[..., : reshaped_input_sizes[i][0], : reshaped_input_sizes[i][1]]\r\n--> 406 interpolated_mask = F.interpolate(interpolated_mask, original_size, mode=\"bilinear\", align_corners=False)\r\n 407 if binarize:\r\n 408 interpolated_mask = interpolated_mask > mask_threshold\r\n\r\n~/miniconda3/envs/pytorch/lib/python3.8/site-packages/torch/nn/functional.py in interpolate(input, size, scale_factor, mode, align_corners, recompute_scale_factor, antialias)\r\n 3957 if antialias:\r\n 3958 return torch._C._nn._upsample_bilinear2d_aa(input, output_size, align_corners, scale_factors)\r\n-> 3959 return torch._C._nn.upsample_bilinear2d(input, output_size, align_corners, scale_factors)\r\n 3960 if input.dim() == 5 and mode == \"trilinear\":\r\n 3961 assert align_corners is not None\r\n\r\nTypeError: upsample_bilinear2d() received an invalid combination of arguments - got (Tensor, list, bool, NoneType), but expected one of:\r\n * (Tensor input, tuple of ints output_size, bool align_corners, tuple of floats scale_factors)\r\n didn't match because some of the arguments have invalid types: (Tensor, list of [Tensor, Tensor], bool, NoneType)\r\n * (Tensor input, tuple of ints output_size, bool align_corners, float 
scales_h, float scales_w, *, Tensor out)\r\n```\r\n\r\nThe code is from hugging face SAM page. I wonder if it is code issue or, other package issue.\r\n",
"Hi @YubinXie \r\nThanks for iterating, it seems that this is a duplicate of https://github.com/huggingface/transformers/issues/22904 \r\nCould you try to uninstall `transformers` and re-install it from source? ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,682 | 1,685 | 1,685 |
NONE
| null |
### System Info
- `transformers` version: 4.29.0.dev0
- Platform: Linux-3.10.0-957.12.2.el7.x86_64-x86_64-with-glibc2.10
- Python version: 3.8.3
- Huggingface_hub version: 0.13.4
- Safetensors version: not installed
- PyTorch version (GPU?): 1.5.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
img_url = "https://huggingface.co/ybelkada/segment-anything/resolve/main/assets/car.png"
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert("RGB")
input_points = [[[450, 600]]] # 2D location of a window in the image
inputs = processor(raw_image, input_points=input_points, return_tensors="pt").to(device)
outputs = model(**inputs)
masks = processor.image_processor.post_process_masks(
outputs.pred_masks.cpu(), inputs["original_sizes"].cpu(), inputs["reshaped_input_sizes"].cpu()
)
scores = outputs.iou_scores
### Expected behavior
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-5-abdc2d7068b8> in <module>
4
5 inputs = processor(raw_image, input_points=input_points, return_tensors="pt").to(device)
----> 6 outputs = model(**inputs)
7
8 masks = processor.image_processor.post_process_masks(
~/miniconda3/envs/pytorch/lib/python3.8/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
548 result = self._slow_forward(*input, **kwargs)
549 else:
--> 550 result = self.forward(*input, **kwargs)
551 for hook in self._forward_hooks.values():
552 hook_result = hook(self, input, result)
~/miniconda3/envs/pytorch/lib/python3.8/site-packages/transformers/models/sam/modeling_sam.py in forward(self, pixel_values, input_points, input_labels, input_boxes, input_masks, image_embeddings, multimask_output, output_attentions, output_hidden_states, return_dict, **kwargs)
1331 )
1332
-> 1333 sparse_embeddings, dense_embeddings = self.prompt_encoder(
1334 input_points=input_points,
1335 input_labels=input_labels,
~/miniconda3/envs/pytorch/lib/python3.8/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
548 result = self._slow_forward(*input, **kwargs)
549 else:
--> 550 result = self.forward(*input, **kwargs)
551 for hook in self._forward_hooks.values():
552 hook_result = hook(self, input, result)
~/miniconda3/envs/pytorch/lib/python3.8/site-packages/transformers/models/sam/modeling_sam.py in forward(self, input_points, input_labels, input_boxes, input_masks)
669 if input_labels is None:
670 raise ValueError("If points are provided, labels must also be provided.")
--> 671 point_embeddings = self._embed_points(input_points, input_labels, pad=(input_boxes is None))
672 sparse_embeddings = torch.empty((batch_size, point_batch_size, 0, self.hidden_size), device=target_device)
673 sparse_embeddings = torch.cat([sparse_embeddings, point_embeddings], dim=2)
~/miniconda3/envs/pytorch/lib/python3.8/site-packages/transformers/models/sam/modeling_sam.py in _embed_points(self, points, labels, pad)
619 padding_point = torch.zeros(target_point_shape, device=points.device)
620 padding_label = -torch.ones(target_labels_shape, device=labels.device)
--> 621 points = torch.cat([points, padding_point], dim=2)
622 labels = torch.cat([labels, padding_label], dim=2)
623 input_shape = (self.input_image_size, self.input_image_size)
RuntimeError: Expected object of scalar type double but got scalar type float for sequence element 1.
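A possible user-side workaround (not part of the original report) is to align the dtype of the prompt points with what the prompt encoder expects before the forward pass; this sketch assumes the processor output exposes them under the `input_points` key:
import torch
inputs = processor(raw_image, input_points=input_points, return_tensors="pt").to(device)
# the padding tensor built inside _embed_points defaults to float32, so cast the points to match
inputs["input_points"] = inputs["input_points"].to(torch.float32)
outputs = model(**inputs)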
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22929/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22929/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/22928
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22928/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22928/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22928/events
|
https://github.com/huggingface/transformers/pull/22928
| 1,678,997,134 |
PR_kwDOCUB6oc5O4ya9
| 22,928 |
Update tiny models and a few fixes
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"The result of `Check Tiny Models / Check tiny models (push)` could be ignored."
] | 1,682 | 1,682 | 1,682 |
COLLABORATOR
| null |
# What does this PR do?
- Update tiny models, including:
- Sam
- BigCode
- (recent) GPTNeoXForSequenceClassification
- Fix wrong condition introduced in my PR #22774 (it doesn't break things, but it will affect the creation of `pipeline_to_model_mapping` for new model types)
- Fix import in `test_pipelines_mask_generation.py`
The 2 fixes above need a review from @ArthurZucker.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22928/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22928/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22928",
"html_url": "https://github.com/huggingface/transformers/pull/22928",
"diff_url": "https://github.com/huggingface/transformers/pull/22928.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22928.patch",
"merged_at": 1682340323000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22926
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22926/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22926/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22926/events
|
https://github.com/huggingface/transformers/pull/22926
| 1,678,755,741 |
PR_kwDOCUB6oc5O3-nM
| 22,926 |
add perf_train_gpu_one.mdx
|
{
"login": "Baelish03",
"id": 97971495,
"node_id": "U_kgDOBdbtJw",
"avatar_url": "https://avatars.githubusercontent.com/u/97971495?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Baelish03",
"html_url": "https://github.com/Baelish03",
"followers_url": "https://api.github.com/users/Baelish03/followers",
"following_url": "https://api.github.com/users/Baelish03/following{/other_user}",
"gists_url": "https://api.github.com/users/Baelish03/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Baelish03/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Baelish03/subscriptions",
"organizations_url": "https://api.github.com/users/Baelish03/orgs",
"repos_url": "https://api.github.com/users/Baelish03/repos",
"events_url": "https://api.github.com/users/Baelish03/events{/privacy}",
"received_events_url": "https://api.github.com/users/Baelish03/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22926). All of your documentation changes will be reflected on that endpoint."
] | 1,682 | 1,682 | 1,682 |
CONTRIBUTOR
| null |
See issue #17459
Good evening.
I didn't translate technical terms and preferred to keep them in English, so I hope that's all right.
I had some problems with previous pull requests; if this one fails again, could you please explain how to resolve it?
Goodbye.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22926/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22926/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22926",
"html_url": "https://github.com/huggingface/transformers/pull/22926",
"diff_url": "https://github.com/huggingface/transformers/pull/22926.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22926.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/22925
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22925/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22925/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22925/events
|
https://github.com/huggingface/transformers/issues/22925
| 1,678,739,059 |
I_kwDOCUB6oc5kD4Zz
| 22,925 |
How can I trace the mxmax/Chinese_Chat_T5_Base model with torch.jit.trace?
|
{
"login": "ling976",
"id": 63000347,
"node_id": "MDQ6VXNlcjYzMDAwMzQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/63000347?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ling976",
"html_url": "https://github.com/ling976",
"followers_url": "https://api.github.com/users/ling976/followers",
"following_url": "https://api.github.com/users/ling976/following{/other_user}",
"gists_url": "https://api.github.com/users/ling976/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ling976/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ling976/subscriptions",
"organizations_url": "https://api.github.com/users/ling976/orgs",
"repos_url": "https://api.github.com/users/ling976/repos",
"events_url": "https://api.github.com/users/ling976/events{/privacy}",
"received_events_url": "https://api.github.com/users/ling976/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @ling976, thanks for raising this issue! \r\n\r\nUnfortunately, I don't speak Chinese :/ , is it possible to share the issue description in english? \r\n\r\nCould you also follow the issue template and share information such that this can be reproduced, including: \r\n* The running environment: run `transformers-cli env` in the terminal and copy-paste the output\r\n* The line in the code the error is triggered on - is it the model save? \r\n* The checkpoint or architecture being run? ",
"transformers ηζ¬ζ―4.22.1\r\nζη°ε¨ιθ¦ε°.binζ ΌεΌη樑εζ仢转ζ’ζ.ptζ ΌεΌ.\r\nη°ε¨εΎε°δΈδΈͺιθ――δΏ‘ζ―\r\n\r\n File \"D:\\Program Files\\Python310\\lib\\site-packages\\torch\\jit\\_trace.py\", line 976, in trace_module\r\n module._c._create_method_from_trace(\r\n RuntimeError: Tracer cannot infer type of Seq2SeqLMOutput(loss=None, logits=tensor([[[-8.0331, -0.6127, 1.7029, ..., -6.0205, -4.9355, -7.5521]]],",
"@ling976 Please try passing `torchscript=True` as an argument when loading the model i.e `model = AutoModelForSeq2SeqLM.from_pretrained('./outputs/model_files/', torchscript=True)`\r\n\r\n",
"ζε δΈtorchscript=Trueεζ₯δΊδΈδΈͺζ°ηιθ――δΏ‘ζ―\r\n \r\n D:\\Program Files\\Python310\\lib\\site-packages\\transformers\\modeling_utils.py:701: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!\r\n if causal_mask.shape[1] < attention_mask.shape[1]:",
"@ling976 [As mentioned above](https://github.com/huggingface/transformers/issues/22925#issuecomment-1518062046), could you please follow the issue template and the necessary information such that we can replicate the issue? ",
"θΏδΈͺε·²η»ε―δ»₯δΊ,ει’ηθ―θΏθ¦εθδΈζ―ιθ――,ζ―ζηιδΊ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,682 | 1,685 | 1,685 |
NONE
| null |
I use the following code to convert the model:
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained('./outputs/model_files/')
model = AutoModelForSeq2SeqLM.from_pretrained('./outputs/model_files/')
device = torch.device("cpu")
model.to(device)
model.eval()
tokenized_dict = tokenizer(
["please answer the following question: what is the boiling point of nitrogen",], ["-320.4F",],
return_tensors="pt"
)
input_tuple = (tokenized_dict['input_ids'], tokenized_dict['attention_mask'], torch.Tensor([[2]]).long())
traced_model = torch.jit.trace(model, input_tuple)
traced_model.save("./model.pt")
But I get the following error message:
D:\Program Files\Python310\lib\site-packages\transformers\modeling_utils.py:701: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if causal_mask.shape[1] < attention_mask.shape[1]:
Traceback (most recent call last):
File "E:\Python\project\Chinese_Chat_T5_Base-main\convertModel.py", line 25, in <module>
traced_model = torch.jit.trace(model, input_tuple)
File "D:\Program Files\Python310\lib\site-packages\torch\jit\_trace.py", line 759, in trace
return trace_module(
File "D:\Program Files\Python310\lib\site-packages\torch\jit\_trace.py", line 976, in trace_module
module._c._create_method_from_trace(
RuntimeError: Tracer cannot infer type of Seq2SeqLMOutput(loss=None, logits=tensor([[[-10.4197, 6.3242, 8.7392, ..., -10.0839, -7.8809, -8.4109]]],
grad_fn=<UnsafeViewBackward0>), past_key_values=((tensor([[[[-9.3662e-02, -2.6494e-01, 2.7725e-01, 3.5019e-01, 5.3944e-01,
-2.6313e-01, -5.9071e-01, 5.1579e-01, -5.2901e-01, -5.9420e-01,
-9.2730e-02, 1.2436e-03, -8.6124e-01, -1.4801e-01, -6.9207e-01,
......
[ 2.7600e-02, -2.4005e-02, -7.1618e-02, ..., 1.9455e-01,
1.0591e-02, -8.1877e-02],
[ 5.6630e-02, -2.8372e-03, 3.5540e-02, ..., 1.0443e-01,
3.7175e-02, -5.7037e-02],
[-5.6965e-04, 1.0548e-04, 9.4504e-04, ..., -1.7588e-04,
8.6722e-04, -8.3949e-04]]], grad_fn=<MulBackward0>), encoder_hidden_states=None, encoder_attentions=None)
:Dictionary inputs to traced functions must have consistent type. Found Tensor and Tuple[Tuple[Tensor, Tensor, Tensor, Tensor], Tuple[Tensor, Tensor, Tensor, Tensor], Tuple[Tensor, Tensor, Tensor, Tensor], Tuple[Tensor, Tensor, Tensor, Tensor], Tuple[Tensor, Tensor, Tensor, Tensor], Tuple[Tensor, Tensor, Tensor, Tensor], Tuple[Tensor, Tensor, Tensor, Tensor], Tuple[Tensor, Tensor, Tensor, Tensor], Tuple[Tensor, Tensor, Tensor, Tensor], Tuple[Tensor, Tensor, Tensor, Tensor], Tuple[Tensor, Tensor, Tensor, Tensor], Tuple[Tensor, Tensor, Tensor, Tensor]]
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22925/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22925/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/22924
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22924/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22924/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22924/events
|
https://github.com/huggingface/transformers/pull/22924
| 1,678,729,240 |
PR_kwDOCUB6oc5O345m
| 22,924 |
add perf_train_gpu_one.mdx
|
{
"login": "Baelish03",
"id": 97971495,
"node_id": "U_kgDOBdbtJw",
"avatar_url": "https://avatars.githubusercontent.com/u/97971495?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Baelish03",
"html_url": "https://github.com/Baelish03",
"followers_url": "https://api.github.com/users/Baelish03/followers",
"following_url": "https://api.github.com/users/Baelish03/following{/other_user}",
"gists_url": "https://api.github.com/users/Baelish03/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Baelish03/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Baelish03/subscriptions",
"organizations_url": "https://api.github.com/users/Baelish03/orgs",
"repos_url": "https://api.github.com/users/Baelish03/repos",
"events_url": "https://api.github.com/users/Baelish03/events{/privacy}",
"received_events_url": "https://api.github.com/users/Baelish03/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[] | 1,682 | 1,682 | 1,682 |
CONTRIBUTOR
| null |
See issue #17459
Good evening.
I didn't translate technical terms and preferred to keep them in English.
Goodbye.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22924/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22924/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22924",
"html_url": "https://github.com/huggingface/transformers/pull/22924",
"diff_url": "https://github.com/huggingface/transformers/pull/22924.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22924.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/22923
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22923/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22923/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22923/events
|
https://github.com/huggingface/transformers/issues/22923
| 1,678,618,615 |
I_kwDOCUB6oc5kDa_3
| 22,923 |
Need support for Sentence Similarity Pipeline
|
{
"login": "timxieICN",
"id": 112183115,
"node_id": "U_kgDOBq_HSw",
"avatar_url": "https://avatars.githubusercontent.com/u/112183115?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/timxieICN",
"html_url": "https://github.com/timxieICN",
"followers_url": "https://api.github.com/users/timxieICN/followers",
"following_url": "https://api.github.com/users/timxieICN/following{/other_user}",
"gists_url": "https://api.github.com/users/timxieICN/gists{/gist_id}",
"starred_url": "https://api.github.com/users/timxieICN/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/timxieICN/subscriptions",
"organizations_url": "https://api.github.com/users/timxieICN/orgs",
"repos_url": "https://api.github.com/users/timxieICN/repos",
"events_url": "https://api.github.com/users/timxieICN/events{/privacy}",
"received_events_url": "https://api.github.com/users/timxieICN/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] |
open
| false | null |
[] |
[
"cc @Narsil ",
"Hi @timxieICN ,\r\n\r\nThanks for the suggestion.\r\nIn general, sentence-similarity like https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2 are served by `SentenceTransformers` which is a library on top of `transformers` itself.\r\n\r\nhttps://huggingface.co/sentence-transformers\r\n\r\nSentence transformers adds a few configuration specifically on how to do similarity with a given model as there's several ways to do it.\r\n\r\nFrom a user point of view it should be relatively easy to do this:\r\n\r\n```python\r\nfrom sentence_transformers import SentenceTransformer, util\r\n\r\nmodel = SentenceTransformer(\r\n model_id\r\n)\r\n\r\nembeddings1 = model.encode(\r\n inputs[\"source_sentence\"], convert_to_tensor=True\r\n)\r\nembeddings2 = model.encode(inputs[\"sentences\"], convert_to_tensor=True)\r\nsimilarities = util.pytorch_cos_sim(embeddings1, embeddings2)\r\n```\r\n\r\n\r\nThis is exactly the code that is actually running to calculate those on the hub currently: https://github.com/huggingface/api-inference-community/blob/main/docker_images/sentence_transformers/app/pipelines/sentence_similarity.py\r\n\r\nAdding this directly in `transformers` would basically mean incorporating `sentence-transformers` within `transformers` and I'm not sure it's something desired. Maybe @amyeroberts or another core maintainer can confirm/infirm this.\r\n\r\nDoes this help ?\r\n\r\n",
"We definitely don't want a circular dependency like that! \r\n\r\nAs the example you shared @Narsil is so simple, I think it's a good replacement for a pipeline. Let's leave this issue open and if there's a lot of interest or new use case we can consider other possible options. ",
"> Hi @timxieICN ,\r\n> \r\n> Thanks for the suggestion. In general, sentence-similarity like https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2 are served by `SentenceTransformers` which is a library on top of `transformers` itself.\r\n> \r\n> https://huggingface.co/sentence-transformers\r\n> \r\n> Sentence transformers adds a few configuration specifically on how to do similarity with a given model as there's several ways to do it.\r\n> \r\n> From a user point of view it should be relatively easy to do this:\r\n> \r\n> ```python\r\n> from sentence_transformers import SentenceTransformer, util\r\n> \r\n> model = SentenceTransformer(\r\n> model_id\r\n> )\r\n> \r\n> embeddings1 = model.encode(\r\n> inputs[\"source_sentence\"], convert_to_tensor=True\r\n> )\r\n> embeddings2 = model.encode(inputs[\"sentences\"], convert_to_tensor=True)\r\n> similarities = util.pytorch_cos_sim(embeddings1, embeddings2)\r\n> ```\r\n> \r\n> This is exactly the code that is actually running to calculate those on the hub currently: https://github.com/huggingface/api-inference-community/blob/main/docker_images/sentence_transformers/app/pipelines/sentence_similarity.py\r\n> \r\n> Adding this directly in `transformers` would basically mean incorporating `sentence-transformers` within `transformers` and I'm not sure it's something desired. Maybe @amyeroberts or another core maintainer can confirm/infirm this.\r\n> \r\n> Does this help ?\r\n\r\nHi @Narsil, this is api of sentence transformer, I want to use sentence similarity of T5 model. So how to do that? \r\n\r\nThank you",
"I think that measuring distance between elements provided, by any embedding generation model, would be desirable indeed, I'm open to try and help if you want to do that."
] | 1,682 | 1,700 | null |
NONE
| null |
### Feature request
HuggingFace now has a lot of Sentence Similarity models, but the pipeline does not yet support this: https://huggingface.co/docs/transformers/main_classes/pipelines
### Motivation
HuggingFace now has a lot of Sentence Similarity models, but the pipeline does not yet support this: https://huggingface.co/docs/transformers/main_classes/pipelines
### Your contribution
I can write a PR, but might need someone else's help.
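For context, a minimal sketch of what such a pipeline could do under the hood with plain `transformers` (illustration only; the checkpoint and the mean-pooling strategy are assumptions, matching how many sentence-transformers models work):
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("sentence-transformers/all-MiniLM-L6-v2")
model = AutoModel.from_pretrained("sentence-transformers/all-MiniLM-L6-v2")

def embed(sentences):
    enc = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state
    mask = enc["attention_mask"].unsqueeze(-1)
    return (hidden * mask).sum(1) / mask.sum(1)  # mean pooling over non-padding tokens

source = embed(["How do I bake bread?"])
candidates = embed(["Bread baking instructions", "Weather forecast for tomorrow"])
scores = torch.nn.functional.cosine_similarity(source, candidates)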
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22923/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22923/timeline
| null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/22922
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22922/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22922/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22922/events
|
https://github.com/huggingface/transformers/pull/22922
| 1,678,615,134 |
PR_kwDOCUB6oc5O3gbR
| 22,922 |
[CI] clap patch fusion test values
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,682 | 1,682 | 1,682 |
COLLABORATOR
| null |
# What does this PR do?
Fixes CI on clap cc @sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22922/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22922/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22922",
"html_url": "https://github.com/huggingface/transformers/pull/22922",
"diff_url": "https://github.com/huggingface/transformers/pull/22922.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22922.patch",
"merged_at": 1682090527000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22921
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22921/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22921/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22921/events
|
https://github.com/huggingface/transformers/pull/22921
| 1,678,586,235 |
PR_kwDOCUB6oc5O3aFI
| 22,921 |
Bring back PartialState DeepSpeed
|
{
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Hello Zach, after further deep dive, I found that we need to use DeepSpeed utils for initializing distributed setup in Accelerate's Partial State as done in the above-linked PR. This should solve the issues with the DeepSpeed tests.",
"Thank you for the fix! Confirmed it works\r\n\r\nhttps://github.com/huggingface/transformers/actions/runs/4815347341/jobs/8573997198"
] | 1,682 | 1,682 | 1,682 |
CONTRIBUTOR
| null |
# What does this PR do?
This PR brings back the DeepSpeed implementation. After thorough help and investigation with @pacman100, we've determined that the cause of the test failures is an issue on the DeepSpeed side, and an issue will be opened to track this. As a result, to keep the tests passing, this PR should not be merged until that issue is resolved.
Said issue: https://github.com/microsoft/DeepSpeed/issues/3341
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger @pacman100
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22921/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22921/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22921",
"html_url": "https://github.com/huggingface/transformers/pull/22921",
"diff_url": "https://github.com/huggingface/transformers/pull/22921.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22921.patch",
"merged_at": 1682537760000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22920
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22920/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22920/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22920/events
|
https://github.com/huggingface/transformers/pull/22920
| 1,678,539,869 |
PR_kwDOCUB6oc5O3P-q
| 22,920 |
Small sam patch
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22920). All of your documentation changes will be reflected on that endpoint."
] | 1,682 | 1,682 | 1,682 |
COLLABORATOR
| null |
# What does this PR do?
Fixes #22904 .
It is backward compatible and prevents having to modify any of the notebooks we shared.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22920/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22920/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22920",
"html_url": "https://github.com/huggingface/transformers/pull/22920",
"diff_url": "https://github.com/huggingface/transformers/pull/22920.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22920.patch",
"merged_at": 1682106078000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22919
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22919/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22919/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22919/events
|
https://github.com/huggingface/transformers/pull/22919
| 1,678,509,472 |
PR_kwDOCUB6oc5O3JbZ
| 22,919 |
Fix: Seq2SeqTrainingArgs overriding to_dict for GenerationConfig json support
|
{
"login": "Natooz",
"id": 56734983,
"node_id": "MDQ6VXNlcjU2NzM0OTgz",
"avatar_url": "https://avatars.githubusercontent.com/u/56734983?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Natooz",
"html_url": "https://github.com/Natooz",
"followers_url": "https://api.github.com/users/Natooz/followers",
"following_url": "https://api.github.com/users/Natooz/following{/other_user}",
"gists_url": "https://api.github.com/users/Natooz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Natooz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Natooz/subscriptions",
"organizations_url": "https://api.github.com/users/Natooz/orgs",
"repos_url": "https://api.github.com/users/Natooz/repos",
"events_url": "https://api.github.com/users/Natooz/events{/privacy}",
"received_events_url": "https://api.github.com/users/Natooz/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Just updated",
"Thanks for adding this @Natooz π "
] | 1,682 | 1,682 | 1,682 |
CONTRIBUTOR
| null |
# What does this PR do?
`Seq2SeqTrainingArguments` overrides the `to_dict()` method from `TrainingArguments`.
This is a fix to #22831 (solution 2), solving an error that happened when saving to json a `Seq2SeqTrainingArguments` object with a `GenerationConfig` attribute.
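A minimal sketch of the kind of override described above (class name and attribute handling are illustrative, not the exact patch):
from transformers import GenerationConfig, TrainingArguments

class MySeq2SeqTrainingArguments(TrainingArguments):
    def to_dict(self):
        d = super().to_dict()
        # GenerationConfig is not JSON-serializable, so convert it to a plain dict first
        for key, value in d.items():
            if isinstance(value, GenerationConfig):
                d[key] = value.to_dict()
        return d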
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? #22831
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger @gante
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22919/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22919/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22919",
"html_url": "https://github.com/huggingface/transformers/pull/22919",
"diff_url": "https://github.com/huggingface/transformers/pull/22919.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22919.patch",
"merged_at": 1682085205000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22918
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22918/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22918/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22918/events
|
https://github.com/huggingface/transformers/pull/22918
| 1,678,429,740 |
PR_kwDOCUB6oc5O24U1
| 22,918 |
Add an attribute to disable custom kernels in deformable detr in order to make the model ONNX exportable
|
{
"login": "fxmarty",
"id": 9808326,
"node_id": "MDQ6VXNlcjk4MDgzMjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/9808326?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fxmarty",
"html_url": "https://github.com/fxmarty",
"followers_url": "https://api.github.com/users/fxmarty/followers",
"following_url": "https://api.github.com/users/fxmarty/following{/other_user}",
"gists_url": "https://api.github.com/users/fxmarty/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fxmarty/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fxmarty/subscriptions",
"organizations_url": "https://api.github.com/users/fxmarty/orgs",
"repos_url": "https://api.github.com/users/fxmarty/repos",
"events_url": "https://api.github.com/users/fxmarty/events{/privacy}",
"received_events_url": "https://api.github.com/users/fxmarty/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@fxmarty Following up on this, I agree with @sgugger's suggestion and think that a config argument would be a better alternative. ",
"Thank you will update!",
"@amyeroberts Let me know if this is better!"
] | 1,682 | 1,682 | 1,682 |
COLLABORATOR
| null |
As per title and reported in https://github.com/huggingface/transformers/issues/22330 and https://github.com/huggingface/optimum/pull/931
This option will allow us to patch the model on the fly during the export to avoid going into the try/catch logic that is not supported by PyTorch ONNX export.
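A hedged usage sketch of what this enables; the exact name of the new config attribute is an assumption here:
from transformers import DeformableDetrConfig, DeformableDetrModel

# build the model with the custom CUDA kernels disabled so only the
# pure-PyTorch attention path is exercised during the ONNX export
config = DeformableDetrConfig(disable_custom_kernels=True)
model = DeformableDetrModel(config)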
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22918/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22918/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22918",
"html_url": "https://github.com/huggingface/transformers/pull/22918",
"diff_url": "https://github.com/huggingface/transformers/pull/22918.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22918.patch",
"merged_at": 1682342824000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22917
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22917/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22917/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22917/events
|
https://github.com/huggingface/transformers/pull/22917
| 1,678,391,398 |
PR_kwDOCUB6oc5O2v-x
| 22,917 |
Place static llama variables for multigpu
|
{
"login": "xloem",
"id": 279585,
"node_id": "MDQ6VXNlcjI3OTU4NQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/279585?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xloem",
"html_url": "https://github.com/xloem",
"followers_url": "https://api.github.com/users/xloem/followers",
"following_url": "https://api.github.com/users/xloem/following{/other_user}",
"gists_url": "https://api.github.com/users/xloem/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xloem/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xloem/subscriptions",
"organizations_url": "https://api.github.com/users/xloem/orgs",
"repos_url": "https://api.github.com/users/xloem/repos",
"events_url": "https://api.github.com/users/xloem/events{/privacy}",
"received_events_url": "https://api.github.com/users/xloem/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Not really in favor of this, if the problem is with the way accelerate handles a for loop, should be solved in accelerate. cc @sgugger ",
"I am imagining accelerate might be able to store a cache of variables by object id so as to not repeatedly transfer the same one. When to empty such a cache is unclear to me.",
"_The documentation is not available anymore as the PR was closed or merged._",
"This is already done by Accelerate behind the scenes, so there is no need for this PR.",
"@sgugger Accelerate moves the weights prior to the model forward function. Since `attention_mask` and `position_ids` (unlike `hidden_states`) are never returned back from a forward function, it moves them again and again for every layer.",
"I'm not sure the actual time you lose for that is worth changing the code in Transformers however.",
"Iβm on a system with iommu=soft where data transfer is very slow. I wanted to provide numbers for the speed change on my system, which was significant enough that I opened this PR, before closing it out. However, I am busy for a day or two.\r\n\r\nRegardless it is clear that you would prefer a solution be found for accelerate than transformers. I opened this after seeing the various PP commits adding similar code, although they are addressing a more serious issue.\r\n\r\nIβll come back to this to add my numbers or feel free to close it out for now.",
"Running llama 65b with [software iommu](https://github.com/pytorch/pytorch/issues/1637#issuecomment-338268158), this change drops my inference time from 25.11 s/token to 19.45 s/token, which is 22.5% of the inference delay. Thoughts?",
"I discussed it more internally with other core maintainers and we decided this falls into specific-hardware optimizations that we don't accept to avoid bloating the code of the models. You can still use your changes locally and share them with others via our code in the Hub API though.",
"Thanks for your consideration and ideas of other approaches."
] | 1,682 | 1,682 | 1,682 |
CONTRIBUTOR
| null |
# What does this PR do?
When using accelerate, attention_mask and position_ids were being retransferred for every layer after the first device. This change transfers them once in advance.
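An illustrative sketch of the idea (not the actual diff; function and variable names are assumptions): move the tensors every decoder layer reads to each device once, before the layer loop, instead of letting them be re-transferred per layer:
def preplace_static_inputs(attention_mask, position_ids, layer_devices):
    # one copy per distinct device, reused by every layer mapped to that device
    per_device = {}
    for device in set(layer_devices):
        per_device[device] = (attention_mask.to(device), position_ids.to(device))
    return per_device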
## Who can review?
@ArthurZucker @pacman100
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22917/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22917/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22917",
"html_url": "https://github.com/huggingface/transformers/pull/22917",
"diff_url": "https://github.com/huggingface/transformers/pull/22917.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22917.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/22916
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22916/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22916/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22916/events
|
https://github.com/huggingface/transformers/pull/22916
| 1,678,290,686 |
PR_kwDOCUB6oc5O2aHO
| 22,916 |
Add inputs_embeds functionality when generating with GPT-Neox
|
{
"login": "TobiasLee",
"id": 20009381,
"node_id": "MDQ6VXNlcjIwMDA5Mzgx",
"avatar_url": "https://avatars.githubusercontent.com/u/20009381?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TobiasLee",
"html_url": "https://github.com/TobiasLee",
"followers_url": "https://api.github.com/users/TobiasLee/followers",
"following_url": "https://api.github.com/users/TobiasLee/following{/other_user}",
"gists_url": "https://api.github.com/users/TobiasLee/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TobiasLee/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TobiasLee/subscriptions",
"organizations_url": "https://api.github.com/users/TobiasLee/orgs",
"repos_url": "https://api.github.com/users/TobiasLee/repos",
"events_url": "https://api.github.com/users/TobiasLee/events{/privacy}",
"received_events_url": "https://api.github.com/users/TobiasLee/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,682 | 1,682 | 1,682 |
CONTRIBUTOR
| null |
This PR extends https://github.com/huggingface/transformers/pull/21405 and #21889 by @gante to GPT-NeoX models (which also include the recent Pythia suite models), making them accept inputs_embeds when generating.
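A hedged usage sketch of what this enables; the checkpoint and prompt below are illustrative assumptions, not taken from the PR:
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-70m")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/pythia-70m")
input_ids = tokenizer("Hello, my name is", return_tensors="pt").input_ids
inputs_embeds = model.get_input_embeddings()(input_ids)
# generation can now start from embeddings instead of token ids
generated = model.generate(inputs_embeds=inputs_embeds, max_new_tokens=10)
print(tokenizer.decode(generated[0], skip_special_tokens=True))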
## Who can Review?
@gante @sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22916/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22916/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22916",
"html_url": "https://github.com/huggingface/transformers/pull/22916",
"diff_url": "https://github.com/huggingface/transformers/pull/22916.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22916.patch",
"merged_at": 1682077889000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22915
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22915/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22915/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22915/events
|
https://github.com/huggingface/transformers/pull/22915
| 1,678,131,422 |
PR_kwDOCUB6oc5O14Ge
| 22,915 |
Make sam ONNX exportable
|
{
"login": "fxmarty",
"id": 9808326,
"node_id": "MDQ6VXNlcjk4MDgzMjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/9808326?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fxmarty",
"html_url": "https://github.com/fxmarty",
"followers_url": "https://api.github.com/users/fxmarty/followers",
"following_url": "https://api.github.com/users/fxmarty/following{/other_user}",
"gists_url": "https://api.github.com/users/fxmarty/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fxmarty/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fxmarty/subscriptions",
"organizations_url": "https://api.github.com/users/fxmarty/orgs",
"repos_url": "https://api.github.com/users/fxmarty/repos",
"events_url": "https://api.github.com/users/fxmarty/events{/privacy}",
"received_events_url": "https://api.github.com/users/fxmarty/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@ArthurZucker The PR is here: https://github.com/huggingface/optimum/pull/995"
] | 1,682 | 1,682 | 1,682 |
COLLABORATOR
| null |
As per title, would be great to have this PR on the next release so that we can support the ONNX export (see https://github.com/huggingface/optimum/pull/995). This piece is the only blocking one.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22915/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22915/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22915",
"html_url": "https://github.com/huggingface/transformers/pull/22915",
"diff_url": "https://github.com/huggingface/transformers/pull/22915.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22915.patch",
"merged_at": 1682085271000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22914
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22914/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22914/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22914/events
|
https://github.com/huggingface/transformers/issues/22914
| 1,678,110,624 |
I_kwDOCUB6oc5kBe-g
| 22,914 |
beam_sample throws a nan error on long generations
|
{
"login": "fpgaminer",
"id": 1585817,
"node_id": "MDQ6VXNlcjE1ODU4MTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/1585817?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fpgaminer",
"html_url": "https://github.com/fpgaminer",
"followers_url": "https://api.github.com/users/fpgaminer/followers",
"following_url": "https://api.github.com/users/fpgaminer/following{/other_user}",
"gists_url": "https://api.github.com/users/fpgaminer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fpgaminer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fpgaminer/subscriptions",
"organizations_url": "https://api.github.com/users/fpgaminer/orgs",
"repos_url": "https://api.github.com/users/fpgaminer/repos",
"events_url": "https://api.github.com/users/fpgaminer/events{/privacy}",
"received_events_url": "https://api.github.com/users/fpgaminer/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hey @fpgaminer π \r\n\r\nMy first recommendation would be to use \"normal\" `sample`, perhaps with a slightly lower temperature. If you think about it, `beam_sample` is a sample-based strategy that greedily picks the best scores among the drawn sequences, which is similar to `sample` with a lower temperature (which also favors high-scoring tokens). `sample` is also faster (no beam-related operations), and subject to much more maintenance :)\r\n\r\nIf you still want to use `beam_sample`, my recommendation would be to add the `remove_invalid_values` flag ([docs](https://huggingface.co/docs/transformers/v4.28.1/en/main_classes/text_generation#transformers.GenerationConfig.remove_invalid_values)).",
"Hello @gante,\r\n\r\nThanks for the response. I have no intention of using beam sampling myself. I'm bubbling up a bug report by @diegomontoya from my GPTQ-triton repo, that turned out to just be a bug in `transformers` itself. It was a curious enough bug that I got nerd-sniped by it...\r\n\r\n> If you still want to use beam_sample, my recommendation would be to add the remove_invalid_values flag ([docs](https://huggingface.co/docs/transformers/v4.28.1/en/main_classes/text_generation#transformers.GenerationConfig.remove_invalid_values)).\r\n\r\nI don't think that would work. The bug results from `beam_scores` exploding, which drives all the scores down to `-inf`. Invalid tokens are removed in the `logits_processor` pass, before `beam_scores` is added. Even if it were applied after, it would just set _all_ tokens to `max` which I think would cause softmax->multinomial to just throw anyway.\r\n\r\n------\r\n\r\nI've looked at the code more, and read up on beam search more. I think my initial take is correct. I see no reason to feed the beam_scores to the logit processors. It's a scalar value added to all the logits/probs, so what effect could it possibly have? Temperature, for example, is completely unaffected as proven like so:\r\n\r\n```\r\nSuppose we have a vector `x`\r\nSoftmax is `e**x / sum(e**x)`\r\n\r\nSuppose we add a scalar `b`: `x + b`\r\nSoftmax is now: `e**(x + b) / sum(e**(x + b))`\r\nExponential law: `e**x * e**b / sum(e**x * e**b)`\r\nSimplify: `e**x * e**b / (sum(e**x) * e**b)`\r\nSimplify: `e**x / sum(e**x)`\r\nQ.E.D.\r\n```\r\nIt's possible that `b`, aka the beam score, has an effect on other logit processors, but I can't fathom what effect one would _want_ it to have on things like top p, top k, typical, etc. I'd have to go through each in more detail to have a stronger opinion here. It just feels wrong, since I think all those logit processors were introduced in the context of greedy sampling. They weren't designed to take a global scalar like beam score into account.\r\n\r\nSo I argue that `beam_sample` should be modified to _not_ include the `beam_scores` when calling `logits_warper`, and when doing multinomial sampling. It should be added after the tokens have been sampled.\r\n\r\n-------\r\n\r\nI also think there is other oddness to the way `beam_sample` samples. Consider the simplified forms of `sample` vs `beam_sample`:\r\n\r\nsample:\r\n```\r\nnext_token_logits = outputs.logits[:, -1, :]\r\nnext_token_scores = logits_processor(input_ids, next_token_logits)\r\nnext_token_scores = logits_warper(input_ids, next_token_scores)\r\nprobs = nn.functional.softmax(next_token_scores, dim=-1)\r\nnext_tokens = torch.multinomial(probs, num_samples=1).squeeze(1)\r\n```\r\n\r\nbeam_sample:\r\n```\r\nnext_token_logits = outputs.logits[:, -1, :]\r\nnext_token_scores = log_softmax(next_token_logits, dim=-1)\r\nnext_token_scores_processed = logits_processor(input_ids, next_token_scores)\r\nnext_token_scores = next_token_scores_processed + beam_scores[:, None].expand_as(next_token_scores)\r\nnext_token_scores = logits_warper(input_ids, next_token_scores)\r\nprobs = nn.functional.softmax(next_token_scores, dim=-1)\r\nnext_tokens = torch.multinomial(probs, num_samples=2 * num_beams)\r\n... beam search stuff ...\r\n```\r\n\r\nWhy does `beam_sample` apply a `log_softmax` to the logits before feeding them to `logits_processor` when the sample method doesn't? 
That seems odd, especially when all the logit processors are expecting, well, logits, not the log softmax of logits.\r\n\r\nThe same goes for `logits_warper`, which also applies a sequence of LogitProcessors. They aren't likely to be expecting log softmaxed values.\r\n\r\nAnd then `softmax` gets applied afterwards to values in the log softmax domain... very confusing.\r\n\r\n----\r\n\r\nSo I propose for beam_sample (simplified/pseudo):\r\n\r\n```\r\nnext_token_logits = outputs.logits[:, -1, :]\r\nnext_token_scores = logits_processor(input_ids, next_token_logits)\r\nnext_token_scores = logits_warper(input_ids, next_token_scores)\r\nprobs = nn.functional.softmax(next_token_scores, dim=-1)\r\nnext_tokens = torch.multinomial(probs, num_samples=2 * num_beams)\r\n... gather tokens, scores ...\r\n... add beam_scores to respective scores ...\r\n... beam processing ...\r\n```\r\n\r\n---\r\n\r\n> If you think about it, beam_sample is a sample-based strategy that greedily picks the best scores among the drawn sequences, which is similar to sample with a lower temperature (which also favors high-scoring tokens). sample is also faster (no beam-related operations), and subject to much more maintenance :)\r\n\r\nMy quick take: sure, maybe. But in theory beam search and beam sampling still provide potential value over low temp sampling. They can explore the landscape more thoroughly and potentially find more globally optimal sequences that a greedy sampling method usually won't. I dunno.\r\n\r\nI'm personally in the \"better logit processors\" and \"better models\" camp than futzing with beam search. But since HF includes beam sampling, might as well make it work as well as possible?",
"@gante I am not qualified to comment on the internal code itself so I will only report from a user level perspective:\r\n\r\n1. Adding `remove_invalid_values=True` does not resolve the issue. I am still getting the exact same nan/inf exceptions with num_beams = 2 on input+output (expected) total token values > 256. I added it to both generate_config and directly to generate() method and it still threw exceptions. Am I using it correctly?\r\n\r\n```probability tensor contains either `inf`, `nan` or element < 0```\r\n\r\n2. Having read the naive concepts of beam search and also huggingface's own interpretations of the beam search, I don't understand why user have to care about a `remove_invalid_values` toggle. Isn't it implied that generate wrapper, which most user and external libs use, should auto remove and bypass any invalid values during gen stages? This add another chicken and egg problem, if we don't add `remove_invalid_values`, only a runtime generate will find out that inf/nan tokens are generated and then we apply a `remove_invalid_values` pass which negates any performance. As result, as an end-user, I will always set `remove_invalid_values` with `num_beams` >1, but if the both options are symbiotic, they should be done internally by the library and not exposed to user. \r\n\r\n3. I am using beam search because I believe it may resolve an issue that is outlined by the beam search principle. I can lower the the temperature but that requires that:\r\n\r\n* I can detect my result from higher temperature is wrong, very difficult for my problem set. \r\n* Even if I can detect error due to higher temp, I need re-run pass in lower temp which is basically beams in operation. \r\n* Not possible to predetermine whether lower/higher temp result in better answer. In my test case use of beam-search. I am relying on the idea that `num_beams=2` select two paths, and only until the end, compare the prob score of the result and give me the best one. ",
"@fpgaminer @diegomontoya Let me split my comment in three: `remove_invalid_values`, how beam sample is implemented, and a suggestion based on @diegomontoya 3rd point in the last comment :)\r\n\r\n___________________________________________________________________________________________\r\n\r\n`remove_invalid_values` was created to avoid errors with extreme numbers, as a last resort. When it needs to be used, it means that there is something unstable in the process. I was double-checking it and it is missing the `-inf` case, which is probably why it didn't immediately solve your case (I'll open a PR). However, it should still be avoided, and the cases where you actually need it are very very uncommon.\r\n\r\n> Isn't it implied that generate wrapper, which most user and external libs use, should auto remove and bypass any invalid values during gen stages?\r\n\r\nDefinitely not. Our guiding principles for building blocks like `.generate()`, sorted by priority, are 1. keep retrocompatibility (unless it is to fix bugs) and 2. build a default behavior that works in most cases and minimizes black-box behavior. Having `remove_invalid_values` on by default would go against 2 -- if there is something wrong in the generation strategy, we'd rather show it up to the user.\r\n___________________________________________________________________________________________\r\n\r\nThe same discussion and arguments you wrote about `beam_sample` were also written in the past, by myself included :) (a few examples: [1](https://github.com/huggingface/transformers/pull/5420#discussion_r449779867) [2](https://github.com/huggingface/transformers/pull/21341#discussion_r1089223478)). \r\n\r\nTL;DR: I agree with your point of view, but a) `beam_sample` is not an official implementation so the order of operations is not right or wrong, it is a matter of taste of its creator b) because of the principles I wrote above, ensuring retrocompatibility > individual opinion. \r\n\r\nOur codebase is fully open, so feel free to monkey patch on your end any different perspective π€ And my apologies for the nerd snipe, beam methods are indeed a strong magnet!\r\n\r\n__________________________________________________________________________________________\r\n\r\n@diegomontoya if beam sample keeps failing after I add the `-inf` case and monkey patching is not an option, try the following:\r\n1. Use `sample`\r\n2. Set `num_return_sequences` to an integer, which will make `generate` return these many sequences per input\r\n3. Set `output_scores` and `return_dict_in_generate` to `True`, so you have access to the scores\r\n4. Pick the output with the highest score ([this function may help](https://huggingface.co/docs/transformers/main_classes/text_generation#transformers.GenerationMixin.compute_transition_scores))\r\n\r\nThis is essentially a poor man's version of beam sample. While beam sample greedily optimizes the score in the intermediary steps, this will retain full randomness.\r\n\r\n___________________________________________________________________________________________\r\n\r\nI hope this (long) comment helps understanding why we make certain decisions, even if you don't agree with them :)\r\n\r\n",
"@gante Thank you. Got much more info than I had hoped in return and not only did it clarify it for me but your poor-man's beam really opened up my mind about how I should properly use and approach my future usage of generate as a whole. ",
"btw, the error you've seen is very likely related to this one: https://github.com/huggingface/transformers/issues/22979\r\n\r\nTL;DR -- pytorch's sampling function is buggy atm, being able to pick tokens with 0 probability π ",
"Just adding that it could be CUDA, bitsandbytes and pytorch related.\r\n\r\nThe same error happens for me as well on `torch==1.13.1` with model call:\r\n`tokens = model.generate(**inputs, max_new_tokens=500, do_sample=True, temperature=0.9, streamer=streamer)`\r\n\r\nThis call does not throw the error, but returns gibberish:\r\n`tokens = model.generate(**inputs, max_new_tokens=25, do_sample=True, num_beams=1, temperature=0.9, streamer=streamer, remove_invalid_values=True)`\r\nreturns for example:\r\n`ovΓ‘Bit}\")VAjem ubuntuη±³ alwaysicago connectingselection Rewrite perceMillBLoll Forschavano economic pygindi Pent ΓΆss fs file`\r\n\r\nFor me the issue happens on my multi gpu ubuntu 22.04 system with CUDA 12.0 (python detects 11.8 interestingly).\r\nIt does not happen on my single gpu ubuntu 20.04 system with CUDA 11.6.\r\n\r\nAlso, this only happens when I load the model in 8-bit with `bitsandbytes`. Loading the model without `load_in_8bit=True` is very slow (5-10 seconds per token), but returns text that makes sense and does not throw any error.\r\n\r\nFurther testing shows that after downgrading from CUDA 11.8 to CUDA 11.6, I no longer receive this error when using `load_in_8bit=True` and `tokens = model.generate(**inputs, max_new_tokens=25, do_sample=True, temperature=0.9, streamer=streamer)`. However, I still get gibberish results:\r\n`ΡΠΎΠΊ hastICEyk char sunnyε° hardwareington chi GraphSecondsesserεΌ conser conformygieneOriuvimplughtub`.\r\nThe winning combo for 'no error and words that make sense' seems to be either:\r\n- CUDA 11.6, `load_in_8bit=True` and a single GPU system.\r\n- or CUDA 11.6, `load_in_8bit=False` and a multi GPU system.\r\n\r\n**Update: ** it's not pytorch related, happens for both 2.0.1 and 1.13.1. See https://github.com/huggingface/transformers/issues/23989",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,682 | 1,688 | 1,688 |
CONTRIBUTOR
| null |
### System Info
- `transformers` version: 4.29.0.dev0
- Platform: Linux-5.15.0-67-generic-x86_64-with-glibc2.35
- Python version: 3.10.10
- Huggingface_hub version: 0.13.4
- Safetensors version: 0.3.0
- PyTorch version (GPU?): 2.0.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
@gante
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
It seems that `beam_sample` throws a NaN exception when generating long sequences. Specifically the call `next_tokens = torch.multinomial(probs, num_samples=2 * num_beams)`. Example generate call that causes the bug:
```
output_sequences = model.generate(
input_ids=encoded_prompt,
max_length=512 + len(encoded_prompt[0]),
temperature=0.7,
num_return_sequences=1,
num_beams=2,
do_sample=True,
)
```
Reliably throws a NaN on my system and @diegomontoya 's system. In my testing this occurs when the requested number of new tokens is roughly >=256. In the example above I use 512 just to be sure.
Based on the debugging I've done so far, what's happening is `beam_scores` increases exponentially with each iteration of the inner beam search loop. It does this until it reaches a very large negative number, causing `next_token_scores` to contain all `-inf`, which causes `probs` to be all `nan` and then `multinomial` throws.
As for why this occurs, a rough summary of the inner loop elucidates:
```
while
next_token_scores = ...
next_token_scores = next_token_scores + beam_scores
next_token_scores = logits_warper(..., next_token_scores)
beam_scores = beam_scorer.process(..., beam_scores, next_token_scores)
```
Specifically, beam_scores feeds back into itself with every iteration. If the inner loop was additive only, this would be fine, and `beam_scores` would increase linearly with length. But this is not the case. `logits_warper` makes the loop non-additive. In the example above it behaves as approximately multiplying `next_token_scores` by 1.5. Hence `beam_scores` goes exponential and the function eventually throws.
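To make that concrete, a back-of-the-envelope toy model of the recurrence (this is only an illustration, not the actual `generate` code): with `temperature=0.7` the warper divides the running score by 0.7, so the recurrence grows geometrically.
```python
# Toy model of the loop described above: score_{t+1} = (score_t + token_logprob) / temperature
temperature = 0.7        # warper divides by T; T < 1 amplifies the running score
token_logprob = -2.0     # assumed typical per-token log-probability
beam_score = 0.0

for step in range(1, 257):
    beam_score = (beam_score + token_logprob) / temperature
    if step % 64 == 0:
        print(step, beam_score)
# The magnitude explodes geometrically, which is what eventually drives
# next_token_scores to -inf and probs to nan in the real loop.
```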
I don't know enough about how `beam_sample` is meant to function to analyze further. It does seem odd to me, though, that the sampling is dependent on the current beam score. Since the beam score is a scalar value, it affects the probabilities of all tokens equally, so ... it shouldn't have any effect at all? So why apply it to the sampling logic? It seems more reasonable to me, and would indeed fix this bug, if it were added after sampling and before handing the scores off to the BeamScorer for processing.
### Expected behavior
`generate` shouldn't throw a `nan` error under reasonable circumstances.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22914/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22914/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/22913
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22913/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22913/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22913/events
|
https://github.com/huggingface/transformers/pull/22913
| 1,678,092,909 |
PR_kwDOCUB6oc5O1vy9
| 22,913 |
Fix counting in Slack report for some jobs
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,682 | 1,682 | 1,682 |
COLLABORATOR
| null |
# What does this PR do?
Fix counting in Slack report for some jobs.
### Context
For the additional jobs (i.e. not the model testing jobs), the number of failed tests is being summed over all machine types (single/multi-GPU). This produces strange results, such as the single-GPU DeepSpeed CI having only 1 failure while 86 is shown in the report, see:
<img width="812" alt="image" src="https://user-images.githubusercontent.com/2521628/233582320-81c3b61d-add4-4ad5-b007-2522ad0f44b3.png">
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22913/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22913/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22913",
"html_url": "https://github.com/huggingface/transformers/pull/22913",
"diff_url": "https://github.com/huggingface/transformers/pull/22913.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22913.patch",
"merged_at": 1682068943000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22912
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22912/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22912/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22912/events
|
https://github.com/huggingface/transformers/pull/22912
| 1,678,078,291 |
PR_kwDOCUB6oc5O1skP
| 22,912 |
Top 100
|
{
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,682 | 1,684 | 1,684 |
MEMBER
| null |
This PR celebrates the upcoming 100k stars for `transformers` by highlighting 100 open-source repositories that use or have integrated `transformers` in their projects.
This list should not be limited to 100 (which we use as a mirror to the 100k stars), so we're looking forward to having libraries that integrate `transformers` open PRs against this document.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22912/reactions",
"total_count": 5,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 5,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22912/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22912",
"html_url": "https://github.com/huggingface/transformers/pull/22912",
"diff_url": "https://github.com/huggingface/transformers/pull/22912.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22912.patch",
"merged_at": 1684334815000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22911
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22911/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22911/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22911/events
|
https://github.com/huggingface/transformers/pull/22911
| 1,678,072,768 |
PR_kwDOCUB6oc5O1rVX
| 22,911 |
Skip a failing test on main for now
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Merge as it just skip the failing test."
] | 1,682 | 1,682 | 1,682 |
COLLABORATOR
| null |
# What does this PR do?
On CircleCI, we have a failure on `main`:
```
FAILED tests/models/roberta/test_modeling_roberta.py::RobertaModelTest::test_assisted_greedy_search_matches_greedy_search
```
(BTW, it works on daily CI GPU runners)
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22911/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22911/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22911",
"html_url": "https://github.com/huggingface/transformers/pull/22911",
"diff_url": "https://github.com/huggingface/transformers/pull/22911.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22911.patch",
"merged_at": 1682065375000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22910
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22910/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22910/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22910/events
|
https://github.com/huggingface/transformers/pull/22910
| 1,677,961,122 |
PR_kwDOCUB6oc5O1S4Y
| 22,910 |
Expose AutoModelForMaskGeneration
|
{
"login": "fxmarty",
"id": 9808326,
"node_id": "MDQ6VXNlcjk4MDgzMjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/9808326?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fxmarty",
"html_url": "https://github.com/fxmarty",
"followers_url": "https://api.github.com/users/fxmarty/followers",
"following_url": "https://api.github.com/users/fxmarty/following{/other_user}",
"gists_url": "https://api.github.com/users/fxmarty/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fxmarty/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fxmarty/subscriptions",
"organizations_url": "https://api.github.com/users/fxmarty/orgs",
"repos_url": "https://api.github.com/users/fxmarty/repos",
"events_url": "https://api.github.com/users/fxmarty/events{/privacy}",
"received_events_url": "https://api.github.com/users/fxmarty/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Either way is fine for me - it's just to have consistency between the transformers Hub metadata and transformers. An either fix would be to just change the metadata, and remove the `AutoModelForMaskGeneration`.",
"So which one do you want @ArthurZucker ? I'm fine either way.",
"Let's go with `AutoModelForMaskGeneration` π "
] | 1,682 | 1,682 | 1,682 |
COLLABORATOR
| null |
As per title, with this PR `from transformers import AutoModelForMaskGeneration` works.
An alternative could be to remove `AutoModelForMaskGeneration` (as `AutoModel` already does the job), but the Hub metadata for SAM currently uses `AutoModelForMaskGeneration` rather than `AutoModel`: https://huggingface.co/datasets/huggingface/transformers-metadata/blob/main/pipeline_tags.json#L576
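For illustration, a minimal usage sketch once the class is exposed (the checkpoint name below is just an example):
```python
from transformers import AutoModelForMaskGeneration, AutoProcessor

# With this change, the task-specific auto class resolves to the SAM model
# for mask-generation checkpoints, matching the Hub metadata.
processor = AutoProcessor.from_pretrained("facebook/sam-vit-base")
model = AutoModelForMaskGeneration.from_pretrained("facebook/sam-vit-base")
```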
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22910/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22910/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22910",
"html_url": "https://github.com/huggingface/transformers/pull/22910",
"diff_url": "https://github.com/huggingface/transformers/pull/22910.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22910.patch",
"merged_at": 1682085885000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22909
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22909/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22909/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22909/events
|
https://github.com/huggingface/transformers/pull/22909
| 1,677,928,201 |
PR_kwDOCUB6oc5O1Lpj
| 22,909 |
Moved labels to enable parallelism pipeline in Luke model
|
{
"login": "sushmanthreddy",
"id": 73489688,
"node_id": "MDQ6VXNlcjczNDg5Njg4",
"avatar_url": "https://avatars.githubusercontent.com/u/73489688?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sushmanthreddy",
"html_url": "https://github.com/sushmanthreddy",
"followers_url": "https://api.github.com/users/sushmanthreddy/followers",
"following_url": "https://api.github.com/users/sushmanthreddy/following{/other_user}",
"gists_url": "https://api.github.com/users/sushmanthreddy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sushmanthreddy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sushmanthreddy/subscriptions",
"organizations_url": "https://api.github.com/users/sushmanthreddy/orgs",
"repos_url": "https://api.github.com/users/sushmanthreddy/repos",
"events_url": "https://api.github.com/users/sushmanthreddy/events{/privacy}",
"received_events_url": "https://api.github.com/users/sushmanthreddy/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,682 | 1,682 | 1,682 |
CONTRIBUTOR
| null |
# What does this PR do?
As suggested in [#22561](https://github.com/huggingface/transformers/issues/22561), this moves labels to the same device as the logits for the Luke model.
@sgugger could you please review this PR? There was a mistake in [#22907](https://github.com/huggingface/transformers/pull/22907); I have made the necessary changes.
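For context, the change follows the same pattern used across models for this issue; a self-contained sketch of that pattern (not the exact Luke diff):
```python
import torch
from torch.nn import CrossEntropyLoss

def token_classification_loss(logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    # Move labels to the same device as the logits so the loss still works when
    # the model is sharded across several GPUs (e.g. with device_map="auto").
    labels = labels.to(logits.device)
    loss_fct = CrossEntropyLoss()
    return loss_fct(logits.view(-1, logits.size(-1)), labels.view(-1))
```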
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22909/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22909/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22909",
"html_url": "https://github.com/huggingface/transformers/pull/22909",
"diff_url": "https://github.com/huggingface/transformers/pull/22909.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22909.patch",
"merged_at": 1682068755000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22908
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22908/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22908/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22908/events
|
https://github.com/huggingface/transformers/pull/22908
| 1,677,893,816 |
PR_kwDOCUB6oc5O1EIo
| 22,908 |
added GPTNeoForTokenClassification
|
{
"login": "peter-sk",
"id": 6168908,
"node_id": "MDQ6VXNlcjYxNjg5MDg=",
"avatar_url": "https://avatars.githubusercontent.com/u/6168908?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/peter-sk",
"html_url": "https://github.com/peter-sk",
"followers_url": "https://api.github.com/users/peter-sk/followers",
"following_url": "https://api.github.com/users/peter-sk/following{/other_user}",
"gists_url": "https://api.github.com/users/peter-sk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/peter-sk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/peter-sk/subscriptions",
"organizations_url": "https://api.github.com/users/peter-sk/orgs",
"repos_url": "https://api.github.com/users/peter-sk/repos",
"events_url": "https://api.github.com/users/peter-sk/events{/privacy}",
"received_events_url": "https://api.github.com/users/peter-sk/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Hey! Could you make sure the CI tests are green? Can review then! ",
"@ArthurZucker\r\nSure. I'm getting the hang of it. Now, the only failing tests are connected to flax and seem unrelated to this pull request.",
"If the flax errors are not due to the PR, this is ready to be reviewed, @ArthurZucker and @younesbelkada :-)",
"I just checked the logs for the remaining errors one more time. The errors are related to the import of the optax library, where jax.Array is used in a type. Apparently there is no name \"Array\" in the top-level namespace of the jax module.\r\n\r\nI cannot see how this could be related to my PR.",
"The jax version used in the examples_flax test is 0.3.6:\r\nCollecting jax!=0.3.2,<=0.3.6,>=0.2.8 (from transformers==4.28.0.dev0)\r\n Using cached jax-0.3.6-py3-none-any.whl\r\nThis version clearly has no Array class.\r\nI am unsure why such an old version should be used?",
"Figured out that optax <= 0.1.4 is needed. And found out that upstream/main has that change already π Now everything should be cleared for review.",
"Definitely ready for review, @ArthurZucker and @younesbelkada :-)\r\n",
"Cool! Reviewing now",
"All done and ready to be merged, @ArthurZucker and @younesbelkada π ",
"I implemented the same change as for GPTNeoXForTokenClassification, i.e., I removed the hasattr etc. and just use config.classifier_dropout directly.",
"@sgugger Ready to merge when the checks complete. Thanks for the fast action π\r\n\r\n... and more to come in the next weeks!"
] | 1,682 | 1,682 | 1,682 |
CONTRIBUTOR
| null |
# What does this PR do?
It adds the class GPTNeoForTokenClassification, which allows using GPT Neo models for token classification tasks. The implementation follows the one for other models (such as GPT2) closely and simply adds a linear layer after the hidden states.
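A minimal sketch of how the new class would be used once merged (checkpoint name and label count are only examples):
```python
import torch
from transformers import AutoTokenizer, GPTNeoForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-125m")
model = GPTNeoForTokenClassification.from_pretrained("EleutherAI/gpt-neo-125m", num_labels=5)

inputs = tokenizer("HuggingFace is based in New York City", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits   # shape: (batch, sequence_length, num_labels)
predictions = logits.argmax(dim=-1)    # one predicted label id per token
```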
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@ArthurZucker
@younesbelkada
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22908/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22908/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22908",
"html_url": "https://github.com/huggingface/transformers/pull/22908",
"diff_url": "https://github.com/huggingface/transformers/pull/22908.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22908.patch",
"merged_at": 1682611803000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22907
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22907/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22907/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22907/events
|
https://github.com/huggingface/transformers/pull/22907
| 1,677,706,276 |
PR_kwDOCUB6oc5O0cZi
| 22,907 |
Moved labels to enable parallelism pipeline in Luke model
|
{
"login": "katiele47",
"id": 54815905,
"node_id": "MDQ6VXNlcjU0ODE1OTA1",
"avatar_url": "https://avatars.githubusercontent.com/u/54815905?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/katiele47",
"html_url": "https://github.com/katiele47",
"followers_url": "https://api.github.com/users/katiele47/followers",
"following_url": "https://api.github.com/users/katiele47/following{/other_user}",
"gists_url": "https://api.github.com/users/katiele47/gists{/gist_id}",
"starred_url": "https://api.github.com/users/katiele47/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/katiele47/subscriptions",
"organizations_url": "https://api.github.com/users/katiele47/orgs",
"repos_url": "https://api.github.com/users/katiele47/repos",
"events_url": "https://api.github.com/users/katiele47/events{/privacy}",
"received_events_url": "https://api.github.com/users/katiele47/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22907). All of your documentation changes will be reflected on that endpoint.",
"@katiele47 - thanks for the PR! \r\n\r\nApologies, I reviewed another PR implementing the changes for Luke quickly this morning without properly reading the issue and realising this PR was also open. As the PR #22909 notes, there was just a small change that needed to happen on L2232, updating `logits` -> `reshaped_logits`. Otherwise the PR all looked good and after updating would have been merged :) \r\n\r\nI'm sorry for my mistake - I hope this doesn't discourage you from contributing and we welcome any PRs that you'd like to open in the future. \r\n\r\n@sushmanthreddy Anyone in the community is able to review PRs. If you spot something in the code that needs updating, could you comment directly on the PR instead of opening another one? ",
"@amyeroberts . I don't see, where I have gone wrong. I am new to open source and actually have worked on this issue before the @katiele47 did, but I haven't just mentioned a reviewer to review.\r\nthis is the proof for that [link](https://github.com/huggingface/transformers/pull/22900/files), actually, I have made the same changes needed just due to branch conflicts that haven't kept the proper pr.\r\nanyways sorry for that if my mistake is there, I hope u understand ",
"Hi @amyeroberts thanks for spotting the small change and no worries! Now that it has been fixed by @sushmanthreddy PR #22909 should I close this PR? ",
"@katiele47 Yes, this PR can now be closed. Thanks again for opening - we look forward to future contributions!\r\n\r\n@sushmanthreddy You haven't made a mistake, don't worry :) It's just a request to make it easier for maintainers to keep track of issues and PRs in the codebase. "
] | 1,682 | 1,682 | 1,682 |
CONTRIBUTOR
| null |
# What does this PR do?
Fixes #22561
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Please let me know if there's anything I need to correct! Thanks
@sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22907/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22907/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22907",
"html_url": "https://github.com/huggingface/transformers/pull/22907",
"diff_url": "https://github.com/huggingface/transformers/pull/22907.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22907.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/22906
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22906/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22906/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22906/events
|
https://github.com/huggingface/transformers/pull/22906
| 1,677,672,282 |
PR_kwDOCUB6oc5O0VOt
| 22,906 |
Fix a minor bug in CI slack report
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,682 | 1,682 | 1,682 |
COLLABORATOR
| null |
# What does this PR do?
#22798 added code to show the difference between 2 CI runs. However, the previous CI run(s) may not yet have produced the artifact `test_failure_tables`, and we got `KeyError: 'model_failures_report.txt'` in the last run.
This PR adds a check for that case.
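A rough sketch of the kind of guard being added (the names below are illustrative, not the actual variables in the notification script):
```python
# Hypothetical shape of the fix: only diff against the previous run when its
# artifact actually contains the report we need.
prev_artifact = {}  # stand-in for the artifact downloaded from the previous CI run

prev_report = prev_artifact.get("model_failures_report.txt")
if prev_report is None:
    print("Previous run has no failure report yet, skipping the diff.")
else:
    print("Computing the diff against the previous report...")
```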
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22906/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22906/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22906",
"html_url": "https://github.com/huggingface/transformers/pull/22906",
"diff_url": "https://github.com/huggingface/transformers/pull/22906.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22906.patch",
"merged_at": 1682102196000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22905
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22905/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22905/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22905/events
|
https://github.com/huggingface/transformers/pull/22905
| 1,677,619,281 |
PR_kwDOCUB6oc5O0J9n
| 22,905 |
JukeBox Model Parallelism by moving labels to same devices for logits
|
{
"login": "AdiaWu",
"id": 60185619,
"node_id": "MDQ6VXNlcjYwMTg1NjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/60185619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AdiaWu",
"html_url": "https://github.com/AdiaWu",
"followers_url": "https://api.github.com/users/AdiaWu/followers",
"following_url": "https://api.github.com/users/AdiaWu/following{/other_user}",
"gists_url": "https://api.github.com/users/AdiaWu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AdiaWu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AdiaWu/subscriptions",
"organizations_url": "https://api.github.com/users/AdiaWu/orgs",
"repos_url": "https://api.github.com/users/AdiaWu/repos",
"events_url": "https://api.github.com/users/AdiaWu/events{/privacy}",
"received_events_url": "https://api.github.com/users/AdiaWu/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22905). All of your documentation changes will be reflected on that endpoint.",
"It seems there is actually nothing to do for this model.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,682 | 1,685 | 1,685 |
CONTRIBUTOR
| null |
# What does this PR do?
This is a draft PR that moves labels to the same device as the logits to enable model parallelism for the Jukebox model. Since `src/transformers/models/jukebox/modeling_jukebox.py` does not contain any conditional branch where `labels is not None`, I would like to ask for help on how to implement moving the labels to the same device as the logits for model parallelism, as mentioned in issue 22561.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #22561
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger @younesbelkada
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22905/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22905/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22905",
"html_url": "https://github.com/huggingface/transformers/pull/22905",
"diff_url": "https://github.com/huggingface/transformers/pull/22905.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22905.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/22904
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22904/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22904/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22904/events
|
https://github.com/huggingface/transformers/issues/22904
| 1,677,526,973 |
I_kwDOCUB6oc5j_Qe9
| 22,904 |
SAM: Notebook example not working
|
{
"login": "antoinemacia",
"id": 18508791,
"node_id": "MDQ6VXNlcjE4NTA4Nzkx",
"avatar_url": "https://avatars.githubusercontent.com/u/18508791?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/antoinemacia",
"html_url": "https://github.com/antoinemacia",
"followers_url": "https://api.github.com/users/antoinemacia/followers",
"following_url": "https://api.github.com/users/antoinemacia/following{/other_user}",
"gists_url": "https://api.github.com/users/antoinemacia/gists{/gist_id}",
"starred_url": "https://api.github.com/users/antoinemacia/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/antoinemacia/subscriptions",
"organizations_url": "https://api.github.com/users/antoinemacia/orgs",
"repos_url": "https://api.github.com/users/antoinemacia/repos",
"events_url": "https://api.github.com/users/antoinemacia/events{/privacy}",
"received_events_url": "https://api.github.com/users/antoinemacia/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"I have similar issue when i run \r\n\r\n```\r\nimg_url = \"https://huggingface.co/ybelkada/segment-anything/resolve/main/assets/car.png\"\r\nraw_image = Image.open(requests.get(img_url, stream=True).raw).convert(\"RGB\")\r\ninput_points = [[[450, 600]]] # 2D location of a window in the image\r\n\r\ninputs = processor(raw_image, input_points=input_points, return_tensors=\"pt\").to(device)\r\noutputs = model(**inputs)\r\n```\r\n\r\n```\r\n---------------------------------------------------------------------------\r\nRuntimeError Traceback (most recent call last)\r\n<ipython-input-6-abdc2d7068b8> in <module>\r\n 4 \r\n 5 inputs = processor(raw_image, input_points=input_points, return_tensors=\"pt\").to(device)\r\n----> 6 outputs = model(**inputs)\r\n 7 \r\n 8 masks = processor.image_processor.post_process_masks(\r\n\r\n~/miniconda3/envs/pytorch/lib/python3.8/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)\r\n 548 result = self._slow_forward(*input, **kwargs)\r\n 549 else:\r\n--> 550 result = self.forward(*input, **kwargs)\r\n 551 for hook in self._forward_hooks.values():\r\n 552 hook_result = hook(self, input, result)\r\n\r\n~/miniconda3/envs/pytorch/lib/python3.8/site-packages/transformers/models/sam/modeling_sam.py in forward(self, pixel_values, input_points, input_labels, input_boxes, input_masks, image_embeddings, multimask_output, output_attentions, output_hidden_states, return_dict, **kwargs)\r\n 1331 )\r\n 1332 \r\n-> 1333 sparse_embeddings, dense_embeddings = self.prompt_encoder(\r\n 1334 input_points=input_points,\r\n 1335 input_labels=input_labels,\r\n\r\n~/miniconda3/envs/pytorch/lib/python3.8/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)\r\n 548 result = self._slow_forward(*input, **kwargs)\r\n 549 else:\r\n--> 550 result = self.forward(*input, **kwargs)\r\n 551 for hook in self._forward_hooks.values():\r\n 552 hook_result = hook(self, input, result)\r\n\r\n~/miniconda3/envs/pytorch/lib/python3.8/site-packages/transformers/models/sam/modeling_sam.py in forward(self, input_points, input_labels, input_boxes, input_masks)\r\n 669 if input_labels is None:\r\n 670 raise ValueError(\"If points are provided, labels must also be provided.\")\r\n--> 671 point_embeddings = self._embed_points(input_points, input_labels, pad=(input_boxes is None))\r\n 672 sparse_embeddings = torch.empty((batch_size, point_batch_size, 0, self.hidden_size), device=target_device)\r\n 673 sparse_embeddings = torch.cat([sparse_embeddings, point_embeddings], dim=2)\r\n\r\n~/miniconda3/envs/pytorch/lib/python3.8/site-packages/transformers/models/sam/modeling_sam.py in _embed_points(self, points, labels, pad)\r\n 619 padding_point = torch.zeros(target_point_shape, device=points.device)\r\n 620 padding_label = -torch.ones(target_labels_shape, device=labels.device)\r\n--> 621 points = torch.cat([points, padding_point], dim=2)\r\n 622 labels = torch.cat([labels, padding_label], dim=2)\r\n 623 input_shape = (self.input_image_size, self.input_image_size)\r\n\r\nRuntimeError: Expected object of scalar type double but got scalar type float for sequence element 1.\r\n```\r\n\r\n```\r\n\r\n- `transformers` version: 4.29.0.dev0\r\n- Platform: Linux-3.10.0-957.12.2.el7.x86_64-x86_64-with-glibc2.10\r\n- Python version: 3.8.3\r\n- Huggingface_hub version: 0.13.4\r\n- Safetensors version: not installed\r\n- PyTorch version (GPU?): 1.5.0 (True)\r\n- Tensorflow version (GPU?): not installed (NA)\r\n- Flax version (CPU?/GPU?/TPU?): not installed (NA)\r\n- Jax version: not 
installed\r\n- JaxLib version: not installed\r\n- Using GPU in script?: <fill in>\r\n- Using distributed or parallel set-up in script?: <fill in>\r\n```",
"cc @younesbelkada @ArthurZucker ",
"Thanks for reporting! Will fix this asap",
"Same here.\r\n\r\nTypeError: upsample_bilinear2d() received an invalid combination of arguments - got (Tensor, list, bool, NoneType),\r\nbut expected one of:\r\n * (Tensor input, tuple of ints output_size, bool align_corners, tuple of floats scale_factors)\r\n didn't match because some of the arguments have invalid types: (Tensor, !list!, bool, !NoneType!)\r\n * (Tensor input, tuple of ints output_size, bool align_corners, float scales_h, float scales_w, *, Tensor out)",
"Hi @antoinemacia @xiao2mo \r\nI can confirm now the colab scripts works as expected if you re-install the library from source!\r\n@YubinXie could you open another ticket for your issue to keep track of it? \r\nHave a great weekend everyone!",
"@younesbelkada @ArthurZucker its working on my end, thanks for looking at it so promptly π \r\n\r\nGood week end yall!"
] | 1,682 | 1,682 | 1,682 |
NONE
| null |
### System Info
- `transformers` version: 4.29.0.dev0
- Platform: macOS-13.2-arm64-arm-64bit
- Python version: 3.10.6
- Huggingface_hub version: 0.13.4
- Safetensors version: 0.3.0
- PyTorch version (GPU?): 1.13.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): 0.6.9 (cpu)
- Jax version: 0.4.8
- JaxLib version: 0.4.7
- Using GPU in script?: NO
- Using distributed or parallel set-up in script?: NO
Dependencies
- torch = 1.13.0
- numpy = 1.23.4
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. Pull [SAM Notebook example](https://github.com/huggingface/notebooks/blob/main/examples/segment_anything.ipynb)
2. Run notebook up until
```
masks = processor.image_processor.post_process_masks(outputs.pred_masks.cpu(), inputs["original_sizes"].cpu(), inputs["reshaped_input_sizes"].cpu())
```
3. Get error
```
TypeError: upsample_bilinear2d() received an invalid combination of arguments - got (Tensor, list, bool, NoneType), but expected one of:
* (Tensor input, tuple of SymInts output_size, bool align_corners, tuple of floats scale_factors)
didn't match because some of the arguments have invalid types: (Tensor, !list!, bool, !NoneType!)
* (Tensor input, tuple of SymInts output_size, bool align_corners, float scales_h, float scales_w, *, Tensor out)
```
### Expected behavior
`original_sizes`/`output_sizes` should be of the expected type. Is this a dependency issue?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22904/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 2,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22904/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/22903
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22903/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22903/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22903/events
|
https://github.com/huggingface/transformers/issues/22903
| 1,677,511,874 |
I_kwDOCUB6oc5j_MzC
| 22,903 |
Pix2Struct: unable to overfit on a single training sample
|
{
"login": "arnaudstiegler",
"id": 26485052,
"node_id": "MDQ6VXNlcjI2NDg1MDUy",
"avatar_url": "https://avatars.githubusercontent.com/u/26485052?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/arnaudstiegler",
"html_url": "https://github.com/arnaudstiegler",
"followers_url": "https://api.github.com/users/arnaudstiegler/followers",
"following_url": "https://api.github.com/users/arnaudstiegler/following{/other_user}",
"gists_url": "https://api.github.com/users/arnaudstiegler/gists{/gist_id}",
"starred_url": "https://api.github.com/users/arnaudstiegler/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/arnaudstiegler/subscriptions",
"organizations_url": "https://api.github.com/users/arnaudstiegler/orgs",
"repos_url": "https://api.github.com/users/arnaudstiegler/repos",
"events_url": "https://api.github.com/users/arnaudstiegler/events{/privacy}",
"received_events_url": "https://api.github.com/users/arnaudstiegler/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi thanks for the detailed report, indeed this seems weird. I will have a look at it once I am back on Tuesday. \r\ncc also @NielsRogge and @nbroad1881 for visibility as they have been also working on fine-tuning Pix2struct",
"Thank you! Let me know if there's anything I can help with :) ",
"Yeah I had a hard time fine-tuning Pix2Struct myself. However looking at your code snippet, when you encode the target sequence:\r\n```\r\nfrom transformers import Pix2StructProcessor\r\n\r\nprocessor = Pix2StructProcessor.from_pretrained(\"google/pix2struct-base\")\r\n\r\ndummy_target = \"The model should overfit this sentence\"\r\nencoded_text = processor(text=dummy_target, return_tensors='pt', max_length=20)\r\n```\r\nthen when decoding back to text:\r\n```\r\nprocessor.decode(encoded_text.input_ids.squeeze())\r\n```\r\nprints:\r\n```\r\n'The model should overfit this sentence'\r\n```\r\nSo this target sequence doesn't contain an EOS (end-of-sequence) token nor a BOS (beginning-of-sequence) token. Hence, when generating text using the `generate()` method, it will just continue predicting tokens, at this method only stops generating text when the model predicts the EOS token. As the model is trained to not produce the EOS token, it simply will keep on generating text (hence you're getting '<pad> The model should overfit this sentence should overfit this sentence' etc.). Also it looks like the first token is `<pad>` since the model's BOS token is equal to the pad token, so you'll need to add `skip_special_tokens=True` to the `batch_decode` method.\r\n\r\nSo cc @younesbelkada we'll need to check that, in case the user sets the max length to 20, then the tokenizer should set the EOS token as last token appropriately. It looks like the processor's tokenizer has this set:\r\n\r\n```\r\n>>> processor.tokenizer.eos_token\r\n'</s>'\r\n```\r\n",
"Oh yeah, you're right! Completely missed it, and it does solve the generation issue after 50 steps basically.\r\n\r\n```\r\nstep: 0 train_loss: 8.3875150680542 prediction: ['<pad> <img_alt=Tokyo is the cure for everything. img_src=']\r\nstep: 50 train_loss: 2.020235300064087 prediction: ['<pad> The model should overfit this sentence</s>']\r\nstep: 100 train_loss: 2.0110490322113037 prediction: ['<pad> The model should overfit this sentence</s>']\r\nstep: 150 train_loss: 1.728605031967163 prediction: ['<pad> The model should overfit this sentence</s>']\r\nstep: 200 train_loss: 1.678179144859314 prediction: ['<pad> The model should overfit this sentence</s>']\r\nstep: 250 train_loss: 1.6586235761642456 prediction: ['<pad> The model should overfit this sentence</s>']\r\nstep: 300 train_loss: 1.6816842555999756 prediction: ['<pad> The model should overfit this sentence</s>']\r\nstep: 350 train_loss: 1.6198171377182007 prediction: ['<pad> The model should overfit this sentence</s>']\r\nstep: 400 train_loss: 1.6187334060668945 prediction: ['<pad> The model should overfit this sentence</s>']\r\nstep: 450 train_loss: 1.6846977472305298 prediction: ['<pad> The model should overfit this sentence</s>']\r\nstep: 500 train_loss: 1.6047543287277222 prediction: ['<pad> The model should overfit this sentence</s>']\r\nstep: 550 train_loss: 1.585425853729248 prediction: ['<pad> The model should overfit this sentence</s>']\r\nstep: 600 train_loss: 1.5750995874404907 prediction: ['<pad> The model should overfit this sentence</s>']\r\nstep: 650 train_loss: 1.5516695976257324 prediction: ['<pad> The model should overfit this sentence</s>']\r\nstep: 700 train_loss: 1.5205081701278687 prediction: ['<pad> The model should overfit this sentence</s>']\r\nstep: 750 train_loss: 1.600045919418335 prediction: ['<pad> The model should overfit this sentence</s>']\r\nstep: 800 train_loss: 1.5451548099517822 prediction: ['<pad> The model should overfit this sentence</s>']\r\nstep: 850 train_loss: 1.602522373199463 prediction: ['<pad> The model should overfit this sentence</s>']\r\n```\r\n\r\nI think what remains weird is that the loss doesn't decrease below 1.5 even with that single training sample. \r\n\r\nAnecdotally, I've been trying to fine-tune for some information extraction tasks, and I haven't been able to make it properly learn anything (I did check that there's an eos token in my labels when fine-tuning :) )\r\n",
"Indeed, the loss should go down to 0. I notice 2 things here:\r\n\r\n* I see label smoothing is used which is pretty uncommon: https://github.com/huggingface/transformers/blob/7579a52b55611ba7651b6d05cba6f45539a6089d/src/transformers/models/pix2struct/modeling_pix2struct.py#L1557 According to PyTorch's [docs](https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html): \"The targets become a mixture of the original ground truth and a uniform distribution\" Might explain this behaviour. @younesbelkada I assume you included this to comply to the original implementation?\r\n* [this line](https://github.com/huggingface/transformers/blob/7579a52b55611ba7651b6d05cba6f45539a6089d/src/transformers/models/pix2struct/modeling_pix2struct.py#L1558) should be removed: it's the user's responsability to set the labels to -100 for padding tokens. To comply to the design of any other model in the library, this line should not be there",
"Good catch, just tried without the label smoothing and the losses now look much more normal:\r\n\r\n```\r\nstep: 0 train_loss: 7.458827972412109 prediction: ['<pad> <img_alt=Towards a New Vision: A Vision for a New World Order']\r\nstep: 50 train_loss: 0.12852047383785248 prediction: ['<pad> The model should overfit this sentence</s>']\r\nstep: 100 train_loss: 0.010209576226770878 prediction: ['<pad> The Model should overfit this sentence</s>']\r\nstep: 150 train_loss: 0.0012781125260517001 prediction: ['<pad> The model should overfit this sentence</s>']\r\nstep: 200 train_loss: 0.014641670510172844 prediction: ['<pad> The model should overfit this sentence</s>']\r\nstep: 250 train_loss: 6.366522575262934e-05 prediction: ['<pad> The model should overfit this sentence</s>']\r\nstep: 300 train_loss: 0.0005338654736988246 prediction: ['<pad> The model should overfit this sentence</s>']\r\nstep: 350 train_loss: 0.004032869823276997 prediction: ['<pad> The model should overfit this sentence</s>']\r\nstep: 400 train_loss: 3.196050602127798e-05 prediction: ['<pad> The model should overfit this sentence</s>']\r\nstep: 450 train_loss: 1.0058114639832638e-05 prediction: ['<pad> The model should overfit this sentence</s>']\r\nstep: 500 train_loss: 1.513927782070823e-05 prediction: ['<pad> The model should overfit this sentence</s>']\r\nstep: 550 train_loss: 4.767631980939768e-05 prediction: ['<pad> The model should overfit this sentence</s>']\r\nstep: 600 train_loss: 0.005966411903500557 prediction: ['<pad> The model should overfit this sentence</s>']\r\nstep: 650 train_loss: 9.983758673115517e-07 prediction: ['<pad> The model should overfit this sentence</s>']\r\nstep: 700 train_loss: 2.6761419576359913e-05 prediction: ['<pad> The model should overfit this sentence</s>']\r\nstep: 750 train_loss: 0.03052591346204281 prediction: ['<pad> The model should overfit this sentence</s>']\r\nstep: 800 train_loss: 0.00021442778233904392 prediction: ['<pad> The model should overfit this sentence</s>']\r\nstep: 850 train_loss: 4.1449759009992704e-05 prediction: ['<pad> The model should overfit this sentence</s>']\r\nstep: 900 train_loss: 0.0005854590563103557 prediction: ['<pad> The model should overfit this sentence</s>']\r\nstep: 950 train_loss: 6.643687083851546e-05 prediction: ['<pad> The model should overfit this sentence</s>']\r\n```",
"Damn not sure why I didn't check the code of the loss calculation before training a model myself π hopefully this will also solve the fine-tuning runs on larger datasets",
"Trying it right now! Will keep you updated once I got the results back :) ",
"From my experiment, the training loss on larger datasets is indeed getting much lower (expected) but it doesn't seem to be solving the issue. ",
"Thanks everyone for digging into that, I feel we are closing solving the issue, so I propose we first address\r\n\r\nhttps://github.com/huggingface/transformers/issues/22903#issuecomment-1518275840\r\n\r\nInto a PR, so that at least the loss behaves more \"normally\". \r\n@arnaudstiegler , how much lower does the loss decreases compared than previous runs? Any curves/stats you can share? \r\nThinking it loud, I was wondering if your ultimate issue is not a hyper parameter issue. ",
"Losses overall look okay (with and without the label smoothing), but there seems to be some disconnect between the loss (both training and validation) value I'm getting and the actual quality of the predicted string. A priori, that might indicate a bug somewhere in my training workflow, but I did check it thoroughly.\r\nI also did a bunch of experiments on a single training batch, and as you reported in the notebook, the model can collapse with the wrong hyperparameters, esp. if the target is a long string. Adding some warmup seems to help, but it still behaves in a surprising way even on a single training sample.\r\n\r\nI'm actually trying to swap out Donut for Pix2Struct, and the Donut model hasn't shown any of the behavior or brittleness I'm seeing with Pix2Struct. You're probably right that there might be some hyperparameter issue, but given the \"limited\" size of the model, I'm really surprised that it's so sensitive to HPs. \r\nWould love to hear other people experience with fine-tuning Pix2Struct",
"I have also been trying to finetune pix2struct. I find that the losses go to zero very quickly which made me suspect that the attention masks are not being set properly. \r\n\r\nWhat I see is that in the `Pix2StructText` module, `self.config.is_decoder` is set to `False`, causing [this line](https://github.com/huggingface/transformers/blob/7579a52b55611ba7651b6d05cba6f45539a6089d/src/transformers/models/pix2struct/modeling_pix2struct.py#L1452) to output a non-causal attention mask.\r\n\r\nIf I add the line `self.config.is_decoder = True` to the line above that to force it to be a decoder things look more normal.",
"Interesting! \r\n@arnaudstiegler can you try on your side this potential fix and let us know how it goes?",
"Yeah, the model seems to be learning well on >3k images dataset with the change on the decoder config. This seems to be the root cause. Really good catch @gbarello-uipath :) ",
"Glad its working for you @arnaudstiegler!\r\n\r\nI don't have a lot of experience in the guts of the transformers repo (hence my hacky fix inside the forward function :) - could someone point me to the \"right\" place to make that fix? I looked into the `configuration_pix2struct.py` file, but haven't found the time yet to really dig down and actually fix it properly.",
"This is really cool! \r\n@gbarello-uipath , I believe you would need to add `is_decoder=True` key word argument here: https://github.com/huggingface/transformers/blob/c2c99dc7ef5edab8f7674a1eb00cf6ac6996fd0f/src/transformers/models/pix2struct/configuration_pix2struct.py#L121 \r\nAnd also add it here as well (`is_decoder=is_decoder`) to fix the failing CI issues: https://github.com/huggingface/transformers/blob/c2c99dc7ef5edab8f7674a1eb00cf6ac6996fd0f/src/transformers/models/pix2struct/configuration_pix2struct.py#L147 \r\nThen `get_attention_mask` should be called properly as expected. I would also advise to double check again everything works just in case",
"Let us know when you will open a Pull Request for that! Otherwise happy to do it as well",
"I would love to be an official contributor, even if its just a one-line code change π
I will put together a PR shortly.",
"Awesome! Thanks again for the fix",
"Ok so I am working on this PR. It works fine when instantiating a brand new model, but when loading any of the pretrained models the `is_decoder=False` flag is saved in them already so the default kwarg gets overwritten. \r\n\r\nI suppose there isn't really a way for me to fix that directly. Only thing I can think of is to load the model, manually fix the config, and then push that new model to the hub. Is that the best way to fix the pretrained models?\r\n",
"I see, the other solution would probably to update the `get_extended_mask` method to accept a new optional argument to force the decoder-lik behavior , but I am not sure if this is the right fix. If the only solution is to update the models that are on the Hub I am happy to update them, cc @sgugger ",
"I think the pretrained model configs should be fixed directly.",
"Ok @younesbelkada I created the PR: https://github.com/huggingface/transformers/pull/23051\r\n\r\nHopefully I have done everything correctly :)\r\n\r\nIf there is a way for me to also fix the pre-trained model configs let me know, otherwise let me know when they are fixed!",
"Let's close this issue as we merged #23051 !\r\n@NielsRogge has also made a nice tutorial in https://github.com/NielsRogge/Transformers-Tutorials/tree/master/Pix2Struct \r\nThanks everyone",
"@younesbelkada I shared a [notebook](https://www.kaggle.com/code/alejopaullier/benetech-matcha-train-0-74) on how to train Matcha/Pix2Struct model for Kaggle's Benetech competition, in case anyone is interested. This model achieved silver zone and includes the updates with the fix. ",
"Thanks very much for sharing! It is really cool to see Matcha/Pix2Struct being using for winning notebooks in major kaggle competitions π₯ "
] | 1,682 | 1,687 | 1,684 |
CONTRIBUTOR
| null |
### System Info
- `transformers` version: 4.28.0
- Platform: Linux-5.4.0-1037-aws-x86_64-with-glibc2.27
- Python version: 3.9.16
- Huggingface_hub version: 0.13.4
- Safetensors version: 0.3.0
- PyTorch version (GPU?): 1.13.0+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
@younesbelkada
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Here's the minimal training loop:
```
import requests
from PIL import Image
from transformers import Pix2StructForConditionalGeneration, AutoProcessor
from torch.optim import AdamW
import torch
torch.manual_seed(42)
model = Pix2StructForConditionalGeneration.from_pretrained("google/pix2struct-base")
processor = AutoProcessor.from_pretrained("google/pix2struct-base")
dummy_target = "The model should overfit this sentence"
image_url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/ai2d-demo.jpg"
image = Image.open(requests.get(image_url, stream=True).raw)
encoded_image = processor(images=image, return_tensors="pt")
encoded_text = processor(text=dummy_target, return_tensors='pt', max_length=20)
optimizer = AdamW(model.parameters(), lr=1e-4)
model.train()
device = 'cuda' if torch.cuda.is_available() else 'cpu'
model.to(device)
flattened_patches=encoded_image.flattened_patches.to(device)
attention_mask=encoded_image.attention_mask.to(device)
labels=encoded_text.input_ids.to(device)
for i in range(1000):
outputs = model(
flattened_patches=flattened_patches,
attention_mask=attention_mask,
labels=labels
)
loss = outputs.loss
loss.backward()
optimizer.step()
optimizer.zero_grad()
if i % 50 == 0:
model.eval()
prediction = model.generate(
flattened_patches=flattened_patches,
attention_mask=attention_mask)
print(f'step: {i} train_loss: {loss.item()} prediction: {processor.batch_decode(prediction)}')
model.train()
```
Here's the output I got:
```
step: 0 train_loss: 8.259493827819824 prediction: ['<pad> <img_src=cropped-img-20180924']
step: 50 train_loss: 1.9695181846618652 prediction: ['<pad> The model should overfit this sentence should overfit this sentence should overfit this sentence should over']
step: 100 train_loss: 2.071323871612549 prediction: ['<pad> <The model should overfit this sentence should overfit this sentence should overfit this sentence should']
step: 150 train_loss: 2.0366554260253906 prediction: ['<pad> The model should overfit this sentence should overfit this sentence should overfit this sentence should over']
step: 200 train_loss: 1.8225889205932617 prediction: ['<pad> The model should overfit this sentence should overfit this sentence should overfit this sentence should over']
step: 250 train_loss: 1.6568734645843506 prediction: ['<pad> The model should overfit this sentence should overfit this sentence should overfit this sentence should over']
step: 300 train_loss: 1.6770282983779907 prediction: ['<pad> The model should overfit this sentence sentence should overfit this sentence sentence should overfit this sentence']
step: 350 train_loss: 1.688515067100525 prediction: ['<pad> The model should overfit this sentence sentence overfit this sentence sentence overfit this sentence sentence over']
step: 400 train_loss: 1.6118296384811401 prediction: ['<pad> The model should overfit this sentence should overfit this sentence should overfit this sentence should over']
step: 450 train_loss: 1.6204414367675781 prediction: ['<pad> The model should overfit this sentence sentence should overfit this sentence should overfit this sentence should']
step: 500 train_loss: 1.59645676612854 prediction: ['<pad> The model should overfit this sentence should overfit this sentence should overfit this sentence should over']
step: 550 train_loss: 1.5818239450454712 prediction: ['<pad> The model should overfit this sentence sentence sentence sentence sentence sentence sentence sentence sentence sentence sentence sentence sentence']
step: 600 train_loss: 1.5775129795074463 prediction: ['<pad> The model should overfit this sentence should overfit this sentence should overfit this sentence should over']
step: 650 train_loss: 1.561257243156433 prediction: ['<pad> The model should overfit this sentence should overfit this sentence should overfit this sentence should over']
step: 700 train_loss: 1.5319150686264038 prediction: ['<pad> The model should overfit this sentence should overfit this sentence should overfit this sentence should over']
step: 750 train_loss: 1.646193504333496 prediction: ['<pad> The model should overfit this sentence should overfit this sentence should overfit this sentence should over']
step: 800 train_loss: 1.533736228942871 prediction: ['<pad> The model should overfit this sentence should overfit this sentence should overfit this sentence should over']
step: 850 train_loss: 1.6203268766403198 prediction: ['<pad> The model should overfit this sentence should overfit this sentence should overfit this sentence should over']
step: 900 train_loss: 1.5132172107696533 prediction: ['<pad> The model should overfit this sentence sentence should overfit this sentence sentence should overfit this sentence']
step: 950 train_loss: 1.491452693939209 prediction: ['<pad> The model should overfit this sentence The model should overfit this sentence The model should overfit']
```
### Expected behavior
I've been trying to fine-tune Pix2Struct starting from the base pretrained model, and have been unable to do so. The model collapses consistently and fails to overfit on that single training sample.
I noticed a comment about this on the fine-tuning notebook: https://github.com/huggingface/notebooks/blob/main/examples/image_captioning_pix2struct.ipynb
> Let's train the model! Run the simply the cell below for training the model. We have observed that finding the best hyper-parameters was quite challenging and required a lot of trials and errors, as the model can easily enter in "collapse-model" (always predicting the same output, no matter the input) if the HP are not chosen correctly. In this example, we found out that using AdamW optimizer with lr=1e-5 seemed to be the best approach.
To dig a little deeper, I've been trying to train on a single training sample with a minimal training loop, and see whether the model was able to correctly learn that single training sample. It seems that it's not able to overfit on a single training sample after 1000 training steps. Unless I missed something in my training loop, that seems like a weird behavior and might be a symptom of a bug somewhere?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22903/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22903/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/22902
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22902/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22902/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22902/events
|
https://github.com/huggingface/transformers/issues/22902
| 1,677,386,335 |
I_kwDOCUB6oc5j-uJf
| 22,902 |
Running GLUE example failed since Apr 17
|
{
"login": "jingyanwangms",
"id": 47403504,
"node_id": "MDQ6VXNlcjQ3NDAzNTA0",
"avatar_url": "https://avatars.githubusercontent.com/u/47403504?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jingyanwangms",
"html_url": "https://github.com/jingyanwangms",
"followers_url": "https://api.github.com/users/jingyanwangms/followers",
"following_url": "https://api.github.com/users/jingyanwangms/following{/other_user}",
"gists_url": "https://api.github.com/users/jingyanwangms/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jingyanwangms/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jingyanwangms/subscriptions",
"organizations_url": "https://api.github.com/users/jingyanwangms/orgs",
"repos_url": "https://api.github.com/users/jingyanwangms/repos",
"events_url": "https://api.github.com/users/jingyanwangms/events{/privacy}",
"received_events_url": "https://api.github.com/users/jingyanwangms/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"This is the commit that seems to cause the issue: https://github.com/huggingface/transformers/commit/03462875cc2d6506eb66a74de7d19b93ce968596",
"@jingyanwangms Thanks for raising this issue! \r\n\r\nThere was an issue that occurred on the development branch with the introduction of PartialState from accelerate and was reported here: #22816, which is likely related. Could you share more information about the running environment, specifically sharing the output of running `transformers-cli env`?",
"For me, updating `accelerate` via `pip install git+https://github.com/huggingface/accelerate ` solved it.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,682 | 1,685 | 1,685 |
NONE
| null |
### System Info
We have continuous monitoring that runs the latest Hugging Face models to benchmark performance, and the script below has been failing since Apr 17:
python -m torch.distributed.launch --nproc_per_node=8 /workspace/transformers/examples/pytorch/text-classification/run_glue.py --model_name_or_path microsoft/deberta-large --task_name MRPC --max_seq_length 128 --learning_rate 3e-6 --do_train --output_dir /dev/shm --overwrite_output_dir --max_steps 200 --logging_steps 20 --per_device_train_batch_size 32 --fp16
The culprit should be a check-in between Apr 16 9:33 PM and Apr 17 4:01 PM PST.
torch 1.14.0.dev20221213+cu116
huggingface (transformers) installed from source at whatever timestamp
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Will add conda environment info later
python -m torch.distributed.launch --nproc_per_node=8 /workspace/transformers/examples/pytorch/text-classification/run_glue.py --model_name_or_path microsoft/deberta-large --task_name MRPC --max_seq_length 128 --learning_rate 3e-6 --do_train --output_dir /dev/shm --overwrite_output_dir --max_steps 200 --logging_steps 20 --per_device_train_batch_size 32 --fp16
### Expected behavior
The above program should succeed; instead, here's the error:
Traceback (most recent call last):
File "/workspace/transformers/examples/pytorch/text-classification/run_glue.py", line 626, in <module>
main()
File "/workspace/transformers/examples/pytorch/text-classification/run_glue.py", line 217, in main
model_args, data_args, training_args = parser.parse_args_into_dataclasses()
File "/opt/conda/envs/ptca/lib/python3.8/site-packages/transformers-4.29.0.dev0-py3.8.egg/transformers/hf_argparser.py", line 332, in parse_args_into_dataclasses
obj = dtype(**inputs)
File "<string>", line 110, in __init__
File "/opt/conda/envs/ptca/lib/python3.8/site-packages/transformers-4.29.0.dev0-py3.8.egg/transformers/training_args.py", line 1255, in __post_init__
and (self.device.type != "cuda")
File "/opt/conda/envs/ptca/lib/python3.8/site-packages/transformers-4.29.0.dev0-py3.8.egg/transformers/training_args.py", line 1615, in device
return self._setup_devices
File "/opt/conda/envs/ptca/lib/python3.8/site-packages/transformers-4.29.0.dev0-py3.8.egg/transformers/utils/generic.py", line 54, in __get__
cached = self.fget(obj)
File "/opt/conda/envs/ptca/lib/python3.8/site-packages/transformers-4.29.0.dev0-py3.8.egg/transformers/training_args.py", line 1549, in _setup_devices
self.distributed_state = PartialState(backend=self.xpu_backend)
File "/opt/conda/envs/ptca/lib/python3.8/site-packages/accelerate/state.py", line 129, in __init__
torch.distributed.init_process_group(backend="nccl", **kwargs)
TypeError: init_process_group() got multiple values for keyword argument 'backend'
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22902/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22902/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/22901
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22901/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22901/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22901/events
|
https://github.com/huggingface/transformers/pull/22901
| 1,677,351,576 |
PR_kwDOCUB6oc5OzQBj
| 22,901 |
Fix Slack report for Nightly CI and Past CI
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,682 | 1,682 | 1,682 |
COLLABORATOR
| null |
# What does this PR do?
For these 2 CIs, so far we get
```bash
Single | Multi | Category
0 | 0 | [Errored out] Examples directory
0 | 0 | [Errored out] PyTorch pipelines
0 | 0 | [Errored out] TensorFlow pipelines
...
```
But they don't have these 3 jobs in their workflow. We just need to update the notification script.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22901/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22901/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22901",
"html_url": "https://github.com/huggingface/transformers/pull/22901",
"diff_url": "https://github.com/huggingface/transformers/pull/22901.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22901.patch",
"merged_at": 1682068996000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22900
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22900/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22900/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22900/events
|
https://github.com/huggingface/transformers/pull/22900
| 1,677,228,432 |
PR_kwDOCUB6oc5Oy1LR
| 22,900 |
Luke
|
{
"login": "sushmanthreddy",
"id": 73489688,
"node_id": "MDQ6VXNlcjczNDg5Njg4",
"avatar_url": "https://avatars.githubusercontent.com/u/73489688?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sushmanthreddy",
"html_url": "https://github.com/sushmanthreddy",
"followers_url": "https://api.github.com/users/sushmanthreddy/followers",
"following_url": "https://api.github.com/users/sushmanthreddy/following{/other_user}",
"gists_url": "https://api.github.com/users/sushmanthreddy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sushmanthreddy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sushmanthreddy/subscriptions",
"organizations_url": "https://api.github.com/users/sushmanthreddy/orgs",
"repos_url": "https://api.github.com/users/sushmanthreddy/repos",
"events_url": "https://api.github.com/users/sushmanthreddy/events{/privacy}",
"received_events_url": "https://api.github.com/users/sushmanthreddy/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22900). All of your documentation changes will be reflected on that endpoint."
] | 1,682 | 1,682 | 1,682 |
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22900/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22900/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22900",
"html_url": "https://github.com/huggingface/transformers/pull/22900",
"diff_url": "https://github.com/huggingface/transformers/pull/22900.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22900.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/22899
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22899/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22899/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22899/events
|
https://github.com/huggingface/transformers/pull/22899
| 1,677,186,183 |
PR_kwDOCUB6oc5OysoH
| 22,899 |
Revert DeepSpeed stuff
|
{
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22899). All of your documentation changes will be reflected on that endpoint.",
"I see this was included in transformers 4.29.0 (https://github.com/huggingface/transformers/releases/tag/v4.29.0). Could you share more about how this changes the Transformers + DeepSpeed integration? I don't quite understand the diff. Does this disable some deeper level of integration of DS with Transformers?",
"@jli this pr just reverted a small portion of Accelerate handling the deepspeed part when we weren't ready for that yet. CC @pacman100 if you could explain accelerates deepspeed integration vs the transformers one were replacing in terms of features? :)"
] | 1,682 | 1,683 | 1,682 |
CONTRIBUTOR
| null |
# What does this PR do?
During the integration, some DeepSpeed items need to be looked at more carefully before they can be part of the integration. This PR reverts the base DeepSpeed logic done in `setup_devices` and `parallel_mode` to restore the original DeepSpeed behavior.
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger, cc @pacman100 so you're aware.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22899/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22899/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22899",
"html_url": "https://github.com/huggingface/transformers/pull/22899",
"diff_url": "https://github.com/huggingface/transformers/pull/22899.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22899.patch",
"merged_at": 1682015040000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22898
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22898/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22898/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22898/events
|
https://github.com/huggingface/transformers/pull/22898
| 1,677,175,866 |
PR_kwDOCUB6oc5OyqX5
| 22,898 |
moved labels to the same device as logits for LILT model
|
{
"login": "sushmanthreddy",
"id": 73489688,
"node_id": "MDQ6VXNlcjczNDg5Njg4",
"avatar_url": "https://avatars.githubusercontent.com/u/73489688?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sushmanthreddy",
"html_url": "https://github.com/sushmanthreddy",
"followers_url": "https://api.github.com/users/sushmanthreddy/followers",
"following_url": "https://api.github.com/users/sushmanthreddy/following{/other_user}",
"gists_url": "https://api.github.com/users/sushmanthreddy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sushmanthreddy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sushmanthreddy/subscriptions",
"organizations_url": "https://api.github.com/users/sushmanthreddy/orgs",
"repos_url": "https://api.github.com/users/sushmanthreddy/repos",
"events_url": "https://api.github.com/users/sushmanthreddy/events{/privacy}",
"received_events_url": "https://api.github.com/users/sushmanthreddy/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,682 | 1,682 | 1,682 |
CONTRIBUTOR
| null |
# What does this PR do?
As suggested in [#22561](https://github.com/huggingface/transformers/issues/22561), this moves labels to the same device as logits for the LiLT model.
@sgugger please review and merge it into the main branch.
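For context, a generic, self-contained sketch of the pattern this PR applies (not the exact LiLT code):

```python
# Move labels onto the same device as the logits before computing the loss, so the
# loss computation does not fail with a device mismatch under model parallelism.
import torch

logits = torch.randn(2, 5)        # stand-in for model output on whatever device the model uses
labels = torch.tensor([1, 3])     # labels may arrive on a different device (e.g. CPU)
labels = labels.to(logits.device)
loss = torch.nn.functional.cross_entropy(logits, labels)
```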
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22898/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22898/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22898",
"html_url": "https://github.com/huggingface/transformers/pull/22898",
"diff_url": "https://github.com/huggingface/transformers/pull/22898.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22898.patch",
"merged_at": 1682016587000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22897
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22897/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22897/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22897/events
|
https://github.com/huggingface/transformers/pull/22897
| 1,677,017,022 |
PR_kwDOCUB6oc5OyJMG
| 22,897 |
Flax whisper gradient checkpointing
|
{
"login": "versae",
"id": 173537,
"node_id": "MDQ6VXNlcjE3MzUzNw==",
"avatar_url": "https://avatars.githubusercontent.com/u/173537?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/versae",
"html_url": "https://github.com/versae",
"followers_url": "https://api.github.com/users/versae/followers",
"following_url": "https://api.github.com/users/versae/following{/other_user}",
"gists_url": "https://api.github.com/users/versae/gists{/gist_id}",
"starred_url": "https://api.github.com/users/versae/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/versae/subscriptions",
"organizations_url": "https://api.github.com/users/versae/orgs",
"repos_url": "https://api.github.com/users/versae/repos",
"events_url": "https://api.github.com/users/versae/events{/privacy}",
"received_events_url": "https://api.github.com/users/versae/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"At the moment, the model loads fine but I then get a weird error when training or generating:\r\n\r\n```python\r\nβ /data/venvflax/lib/python3.8/site-packages/transformers/models/whisper/modeling_flax_whisper.py: β\r\nβ 520 in __call__ β\r\nβ β\r\nβ 517 β β β residual = hidden_states β\r\nβ 518 β β β β\r\nβ 519 β β β hidden_states = self.encoder_attn_layer_norm(hidden_states) β\r\nβ β± 520 β β β hidden_states, cross_attn_weights = self.encoder_attn( β\r\nβ 521 β β β β hidden_states=hidden_states, β\r\nβ 522 β β β β key_value_states=encoder_hidden_states, β\r\nβ 523 β β β β attention_mask=encoder_attention_mask, β\r\nβ β\r\nβ /data/venvflax/lib/python3.8/site-packages/transformers/models/whisper/modeling_flax_whisper.py: β\r\nβ 256 in __call__ β\r\nβ β\r\nβ 253 β β elif self.causal: β\r\nβ 254 β β β attention_mask = causal_mask β\r\nβ 255 β β elif attention_mask is not None: β\r\nβ β± 256 β β β attention_mask = jnp.expand_dims(attention_mask, axis=(-3, -2)) β\r\nβ 257 β β β\r\nβ 258 β β # During fast autoregressive decoding, we feed one position at a time, β\r\nβ 259 β β # and cache the keys and values step by step. β\r\nβ β\r\nβ /data/venvflax/lib/python3.8/site-packages/jax/_src/numpy/lax_numpy.py:896 in expand_dims β\r\nβ β\r\nβ 893 axis = _ensure_index_tuple(axis) β\r\nβ 894 if hasattr(a, \"expand_dims\"): β\r\nβ 895 β return a.expand_dims(axis) β\r\nβ β± 896 return lax.expand_dims(a, axis) β\r\nβ 897 β\r\nβ 898 β\r\nβ 899 @_wraps(np.swapaxes, lax_description=_ARRAY_VIEW_DOC) β\r\nβ°βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ―\r\nValueError: axis -3 is out of bounds for array of dimension 2\r\n```\r\nI'm not sure what's happening. So I thought maybe @sanchit-gandhi could provide some feedback :) ",
"_The documentation is not available anymore as the PR was closed or merged._",
"I've been digging and the only difference I can find is that for some reason the parameters for calling `FlaxWhisperDecoderLayerCollection.__call__()` in `FlaxWhisperDecoder.__call__()` are different in this PR's model than in the original implementation. I tested this using a tiny model\r\n\r\nOriginal model\r\n```python\r\nencoder_attention_mask=None\r\ndeterministic=True\r\noutput_hidden_states=False\r\n```\r\nThis PR's model:\r\n```python\r\nencoder_attention_mask=True\r\ndeterministic=False\r\noutput_hidden_states=True\r\n```\r\n\r\nThe rest of params are the same: `hidden_states`, `attention_mask`, `encoder_hidden_states`, `init_cache`, `output_attentions` and `return_dict`. The problem is that while the first decoder layers loads fine, the second one gets an `attention_mask` value of `True` for some reason, making any tensor operation to fail.",
"All passing! The main issue was a missing `self.gradient_checkpointing` in the `FlaxWhisperPreTrainedModel.__init__()` function. Took me forever to debug it.\r\n\r\nI'll clean up the git history mess, but other than that I think it's finally ready :) ",
"Closing in favor of #22954."
] | 1,682 | 1,682 | 1,682 |
CONTRIBUTOR
| null |
It uses `flax.linen.remat` and follows on PRs #13657 and #17994.
# What does this PR do?
Adds gradient_checkpointing to Flax Whisper models.
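A minimal, self-contained sketch of the `flax.linen.remat` pattern this PR applies, using a toy layer rather than the actual Whisper modules (all names and the structure below are illustrative only, not the PR's implementation):

```python
import jax
import jax.numpy as jnp
import flax.linen as nn


class ToyLayer(nn.Module):
    features: int = 16

    @nn.compact
    def __call__(self, x):
        # two small dense layers standing in for an attention/FFN block
        return nn.Dense(self.features)(nn.relu(nn.Dense(self.features)(x)))


class ToyEncoder(nn.Module):
    num_layers: int = 4
    gradient_checkpointing: bool = False

    @nn.compact
    def __call__(self, x):
        # wrap the layer class with remat so activations are recomputed during the backward pass
        layer_cls = nn.remat(ToyLayer) if self.gradient_checkpointing else ToyLayer
        for i in range(self.num_layers):
            x = layer_cls(name=f"layer_{i}")(x)
        return x


x = jnp.ones((2, 16))
params = ToyEncoder(gradient_checkpointing=True).init(jax.random.PRNGKey(0), x)
```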
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sanchit-gandhi @peregilk
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22897/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22897/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22897",
"html_url": "https://github.com/huggingface/transformers/pull/22897",
"diff_url": "https://github.com/huggingface/transformers/pull/22897.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22897.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/22896
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22896/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22896/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22896/events
|
https://github.com/huggingface/transformers/pull/22896
| 1,676,981,602 |
PR_kwDOCUB6oc5OyBe6
| 22,896 |
don't pass None kwargs to accelerate as it doesn't handle it nicely
|
{
"login": "winglian",
"id": 381258,
"node_id": "MDQ6VXNlcjM4MTI1OA==",
"avatar_url": "https://avatars.githubusercontent.com/u/381258?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/winglian",
"html_url": "https://github.com/winglian",
"followers_url": "https://api.github.com/users/winglian/followers",
"following_url": "https://api.github.com/users/winglian/following{/other_user}",
"gists_url": "https://api.github.com/users/winglian/gists{/gist_id}",
"starred_url": "https://api.github.com/users/winglian/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/winglian/subscriptions",
"organizations_url": "https://api.github.com/users/winglian/orgs",
"repos_url": "https://api.github.com/users/winglian/repos",
"events_url": "https://api.github.com/users/winglian/events{/privacy}",
"received_events_url": "https://api.github.com/users/winglian/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"cc @muellerzr ",
"We've already fixed this in accelerate via https://github.com/huggingface/accelerate/pull/1342. (Now there are more deepspeed things failing, but we're looking into that).\r\n\r\nFor now, as we're working on a very large migration in the trainer, please use the pip release of transformers for stability :) \r\n\r\nOr, install `accelerate` via github with `pip install git+https://github.com/huggingface/accelerate`",
"thanks! "
] | 1,682 | 1,682 | 1,682 |
CONTRIBUTOR
| null |
# What does this PR do?
Fixes an issue when using DeepSpeed where `self.xpu_backend` is `None`; passing that `None` through to Accelerate is not handled well there.
```
File "/opt/conda/lib/python3.8/site-packages/transformers/training_args.py", line 1550, in _setup_devices
self.distributed_state = PartialState(backend=self.xpu_backend)
File "/opt/conda/lib/python3.8/site-packages/accelerate/state.py", line 117, in __init__
torch.distributed.init_process_group(backend="nccl", **kwargs)
TypeError: init_process_group() got multiple values for keyword argument 'backend'
```
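A minimal sketch of the underlying idea, mirroring the names in the traceback above (this is not the exact patch; as noted in the comments on this PR, the real fix landed in Accelerate):

```python
# Only forward the backend kwarg when it is actually set, so PartialState never
# receives backend=None on top of its own defaults.
from accelerate import PartialState


def make_partial_state(xpu_backend=None):
    kwargs = {}
    if xpu_backend is not None:
        kwargs["backend"] = xpu_backend
    return PartialState(**kwargs)
```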
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22896/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22896/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22896",
"html_url": "https://github.com/huggingface/transformers/pull/22896",
"diff_url": "https://github.com/huggingface/transformers/pull/22896.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22896.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/22895
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22895/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22895/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22895/events
|
https://github.com/huggingface/transformers/pull/22895
| 1,676,969,855 |
PR_kwDOCUB6oc5Ox-9h
| 22,895 |
Pin flax & optax version
|
{
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,682 | 1,682 | 1,682 |
COLLABORATOR
| null |
# What does this PR do?
Failing on [main](https://app.circleci.com/pipelines/github/huggingface/transformers/62534/workflows/d270e074-306d-4a8f-9434-fcdd979fae1b/jobs/770753) because of a new release of [optax](https://github.com/deepmind/optax/releases/tag/v0.1.5). Pinning until versions compatible with jax are resolved.
Fixes # (issue)
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22895/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22895/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22895",
"html_url": "https://github.com/huggingface/transformers/pull/22895",
"diff_url": "https://github.com/huggingface/transformers/pull/22895.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22895.patch",
"merged_at": 1682008215000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22894
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22894/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22894/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22894/events
|
https://github.com/huggingface/transformers/pull/22894
| 1,676,956,761 |
PR_kwDOCUB6oc5Ox8Ni
| 22,894 |
Fix `FillMaskPipelineTests`
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,682 | 1,682 | 1,682 |
COLLABORATOR
| null |
# What does this PR do?
For some BPE tokenizers, `</w>` is removed during decoding, so `token_str` won't be the same as in `targets`. We need to adjust the test logic.
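For illustration, the decoding behaviour in question (the tokenizer below is just one example of a BPE tokenizer that uses the `</w>` end-of-word marker; the exact output is an expectation, not taken from the PR):

```python
# Decoding drops the </w> marker, so the decoded token string no longer matches a
# target written with the marker, which is what the test logic has to account for.
from transformers import CLIPTokenizer

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-base-patch32")
token_id = tokenizer.convert_tokens_to_ids("hello</w>")
print(tokenizer.decode([token_id]))  # expected: "hello" (without the </w> suffix)
```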
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22894/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22894/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22894",
"html_url": "https://github.com/huggingface/transformers/pull/22894",
"diff_url": "https://github.com/huggingface/transformers/pull/22894.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22894.patch",
"merged_at": 1682083005000
}
|