url (string) | repository_url (string) | labels_url (string) | comments_url (string) | events_url (string) | html_url (string) | id (int64) | node_id (string) | number (int64) | title (string) | user (dict) | labels (list) | state (string) | locked (bool) | assignee (dict) | assignees (list) | comments (list) | created_at (int64) | updated_at (int64) | closed_at (int64) | author_association (string) | active_lock_reason (string) | body (string) | reactions (dict) | timeline_url (string) | state_reason (string) | draft (bool) | pull_request (dict) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/23642
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23642/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23642/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23642/events
|
https://github.com/huggingface/transformers/issues/23642
| 1,718,756,794 |
I_kwDOCUB6oc5mciW6
| 23,642 |
Prediction code snippet for graphormer on graph classification tasks.
|
{
"login": "techthiyanes",
"id": 25921035,
"node_id": "MDQ6VXNlcjI1OTIxMDM1",
"avatar_url": "https://avatars.githubusercontent.com/u/25921035?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/techthiyanes",
"html_url": "https://github.com/techthiyanes",
"followers_url": "https://api.github.com/users/techthiyanes/followers",
"following_url": "https://api.github.com/users/techthiyanes/following{/other_user}",
"gists_url": "https://api.github.com/users/techthiyanes/gists{/gist_id}",
"starred_url": "https://api.github.com/users/techthiyanes/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/techthiyanes/subscriptions",
"organizations_url": "https://api.github.com/users/techthiyanes/orgs",
"repos_url": "https://api.github.com/users/techthiyanes/repos",
"events_url": "https://api.github.com/users/techthiyanes/events{/privacy}",
"received_events_url": "https://api.github.com/users/techthiyanes/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi, could you share the logs that you got at step 3?",
"> Hi, could you share the logs that you got at step 3?\r\n\r\n\r\nNow I'm able to write out prediction code independently.\r\n\r\nCode Snippet :\r\n\r\n# For validation, Kindly opt only one data alone from the dataset.\r\nval_ds1 = val_ds.select(list(range(0,1)))\r\n# Trainer prediction\r\npreds = trainer.predict(val_ds1)\r\n# Output : PredictionOutput(predictions=(array([[ 3.615392, -2.313876]], dtype=float32),\r\n\r\n# Independent prediction code\r\n\r\ninputs = dict()\r\ndataset_processed['train'][0] = val_ds1[0]\r\nimport torch\r\ninputs['input_nodes'] = torch.tensor([dataset_processed['train'][0]['input_nodes']])\r\ninputs['input_edges'] = torch.tensor([dataset_processed['train'][0]['input_edges']])\r\ninputs['attn_bias'] = torch.tensor([dataset_processed['train'][0]['attn_bias']])\r\ninputs['in_degree'] = torch.tensor([dataset_processed['train'][0]['in_degree']])\r\ninputs['out_degree'] = torch.tensor([dataset_processed['train'][0]['out_degree']])\r\ninputs['spatial_pos'] = torch.tensor([dataset_processed['train'][0]['spatial_pos']])\r\ninputs['attn_edge_type'] = torch.tensor([dataset_processed['train'][0]['attn_edge_type']])\r\n\r\n\r\nfrom transformers import GraphormerForGraphClassification\r\n\r\nmodel_checkpoint = \"/content/graph-classification/checkpoint-1029\" # pre-trained model from which to fine-tune\r\n\r\nmodel = GraphormerForGraphClassification.from_pretrained(\r\n model_checkpoint, \r\n num_classes=2,\r\n ignore_mismatched_sizes = True, # provide this in case you're planning to fine-tune an already fine-tuned checkpoint\r\n)\r\nwith torch.no_grad():\r\n logits = model(**inputs).logits\r\n\r\npredicted_class_id = logits.argmax().item()\r\npredicted_class_id,logits\r\n\r\n# Output : (0, tensor([[ 3.6154, -2.3139]]))\r\n\r\nWorks as expected. Hence closing this issue. Thanks a lot."
] | 1,684 | 1,684 | 1,684 |
NONE
| null |
### System Info
Kindly share a prediction code snippet for the Graphormer model.
### Who can help?
cc @clefourrier
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1) Train the model on the ogl data.
2) Use trainer.predict to get predictions for the dataset.
3) I am unable to write standalone prediction code for the validation dataset without trainer.predict.
4) This requires trainer.train to complete every time.
5) Kindly share a snippet similar to the ones for normal text classification and other tasks.
### Expected behavior
Prediction code snippet for graphormer on graph classification tasks.
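The resolution comment above contains the full standalone snippet; condensed, and with the checkpoint path and the preprocessed single-graph dataset (`val_ds1`) treated as placeholders taken from that comment, the flow looks roughly like this:
```python
# Condensed sketch of the standalone prediction flow from the resolution comment.
# `val_ds1` (a preprocessed single-graph dataset) and the checkpoint path are
# placeholders from that comment, not fixed names.
import torch
from transformers import GraphormerForGraphClassification

model = GraphormerForGraphClassification.from_pretrained(
    "/content/graph-classification/checkpoint-1029",  # your fine-tuned checkpoint
    num_classes=2,
    ignore_mismatched_sizes=True,
)

example = val_ds1[0]  # one preprocessed graph
keys = ["input_nodes", "input_edges", "attn_bias", "in_degree",
        "out_degree", "spatial_pos", "attn_edge_type"]
inputs = {k: torch.tensor([example[k]]) for k in keys}

with torch.no_grad():
    logits = model(**inputs).logits
predicted_class_id = logits.argmax().item()
```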
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23642/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23642/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23641
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23641/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23641/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23641/events
|
https://github.com/huggingface/transformers/pull/23641
| 1,718,720,587 |
PR_kwDOCUB6oc5Q913N
| 23,641 |
fix: TextIteratorStreamer cannot work with pipeline
|
{
"login": "yuanwu2017",
"id": 34643241,
"node_id": "MDQ6VXNlcjM0NjQzMjQx",
"avatar_url": "https://avatars.githubusercontent.com/u/34643241?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yuanwu2017",
"html_url": "https://github.com/yuanwu2017",
"followers_url": "https://api.github.com/users/yuanwu2017/followers",
"following_url": "https://api.github.com/users/yuanwu2017/following{/other_user}",
"gists_url": "https://api.github.com/users/yuanwu2017/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yuanwu2017/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yuanwu2017/subscriptions",
"organizations_url": "https://api.github.com/users/yuanwu2017/orgs",
"repos_url": "https://api.github.com/users/yuanwu2017/repos",
"events_url": "https://api.github.com/users/yuanwu2017/events{/privacy}",
"received_events_url": "https://api.github.com/users/yuanwu2017/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@sgugger , pls help review, thx.",
"@sywangyi ",
"@gante Please help to review.",
"I ran into the same issue and also had another issue about a week ago (which I don't remember now) that was also related to the deepcopy operation. If the sole purpose of the deep copy is to allow calls to `pop` without modifying the dictionary, then shouldn't a shallow copy do?",
"@gante We used the transformers pipeline with TextIteratorStreamer for chatbot, but it raised the deepcopy exception. Can you help to check the issue and review the PR?\r\nThanks a lot.",
"@gante is on vacation, please be patient until he comes back :-)\r\nAlso cc @Narsil ",
"> @gante is on vacation, please be patient until he comes back :-) Also cc @Narsil\r\n\r\nGot it. Thanks.",
"Do I need to fix these test failures which are not related with this patch? @gante ",
"@yuanwu2017 to get rid of the CI errors, please rebase the PR. It is a hard requirement -- LMK if you need instructions to do so :) ",
"@amyeroberts see the [issue here](https://github.com/huggingface/transformers/issues/23785), which explains why this copy is not needed :)",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23641). All of your documentation changes will be reflected on that endpoint."
] | 1,684 | 1,686 | 1,686 |
CONTRIBUTOR
| null |
Deepcopying the TextIteratorStreamer object raises an exception.
# What does this PR do?
Fixes #23552
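For context, a minimal sketch of the usage reported in #23552 (model name and prompt are placeholders): the extra kwargs passed to the pipeline call are forwarded to `generate()`, so the streamer ends up in a dictionary the pipeline deep-copies (see the discussion above), and the streamer's internal `queue.Queue` holds thread locks that cannot be deep-copied.
```python
# Minimal sketch of the failing usage from #23552 (model name and prompt are
# placeholders). The streamer is forwarded as a generate kwarg, which is the
# dictionary the pipeline deep-copied before this fix.
from threading import Thread

from transformers import AutoTokenizer, TextIteratorStreamer, pipeline

tokenizer = AutoTokenizer.from_pretrained("gpt2")
streamer = TextIteratorStreamer(tokenizer, skip_prompt=True)
pipe = pipeline("text-generation", model="gpt2", tokenizer=tokenizer)

# Generate in a background thread and consume the text pieces as they arrive.
thread = Thread(target=pipe, args=("Hello, my name is",),
                kwargs={"streamer": streamer, "max_new_tokens": 20})
thread.start()
for new_text in streamer:
    print(new_text, end="")
thread.join()
```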
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23641/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23641/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23641",
"html_url": "https://github.com/huggingface/transformers/pull/23641",
"diff_url": "https://github.com/huggingface/transformers/pull/23641.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23641.patch",
"merged_at": 1686649364000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23640
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23640/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23640/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23640/events
|
https://github.com/huggingface/transformers/issues/23640
| 1,718,694,278 |
I_kwDOCUB6oc5mcTGG
| 23,640 |
Use python generator instead of streamer for generation
|
{
"login": "JamesDConley",
"id": 16070894,
"node_id": "MDQ6VXNlcjE2MDcwODk0",
"avatar_url": "https://avatars.githubusercontent.com/u/16070894?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JamesDConley",
"html_url": "https://github.com/JamesDConley",
"followers_url": "https://api.github.com/users/JamesDConley/followers",
"following_url": "https://api.github.com/users/JamesDConley/following{/other_user}",
"gists_url": "https://api.github.com/users/JamesDConley/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JamesDConley/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JamesDConley/subscriptions",
"organizations_url": "https://api.github.com/users/JamesDConley/orgs",
"repos_url": "https://api.github.com/users/JamesDConley/repos",
"events_url": "https://api.github.com/users/JamesDConley/events{/privacy}",
"received_events_url": "https://api.github.com/users/JamesDConley/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
}
] |
open
| false | null |
[] |
[
"cc @gante ",
"This has been mentioned in PR: [2249](https://github.com/huggingface/transformers/pull/22449)\r\nWe need this feature, @gante @oobabooga, can you provide a short script how to try this out when calling `model.generate`, like this function work as a python generator object.",
"@JamesDConley I found this https://huggingface.co/spaces/joaogante/transformers_streaming. I think this could be great start with your problem.",
"@ambiSk This is on @gante roadmap but note he is on vacation for two weeks, so you will have to be a bit patient :-)",
"Hey @JamesDConley @ambiSk -- I agree the generator structure is superior, and that is why you see a warning in the docs saying the existing API is temporary (e.g. [here](https://huggingface.co/docs/transformers/v4.30.0/en/internal/generation_utils#transformers.TextIteratorStreamer)).\r\n\r\nBack when I was exploring the MVP of the feature, I managed to get an iterator going. However, it required significant changes in `.generate`, adding `yield from` statements in a few places and restructuring a few bits so that the tokens could be piped out correctly. The branch is in a very incomplete state (see [here](https://github.com/huggingface/transformers/compare/main...gante:transformers:streamer_yield)), and I don't expect to be able to pick it up in the next ~2 months -- if anyone would like to get their hands dirty, feel free to pick this feature up 🙌 \r\n\r\n(just let me know if you decide to work on it :) ) "
] | 1,684 | 1,688 | null |
NONE
| null |
### Feature request
Add an option for receiving tokens (or similar) as they are generated via a [python generator](https://wiki.python.org/moin/Generators) as an alternative to needing a streamer object.
### Motivation
There is a new feature, [streamers](https://huggingface.co/docs/transformers/generation_strategies#streaming), for accessing the tokens as they are generated. Using this object requires you to run some code in parallel while the model.generate function blocks its current thread: you instead have to define your processing code as a callback within the streamer object you are using.
A much simpler interface that solves this same problem is to yield the token sequences as they are generated with a [python generator](https://wiki.python.org/moin/Generators). Below is example usage for either case...
## Proposed Generator Implementation
```
for token in model.generate(**inputs, max_new_tokens=20, yield_tokens=True):
    print(f"The next token is {token}")
```
## Current Streamer Implementation
```
from transformers import AutoModelForCausalLM, TextStreamer

class MyStreamer:
    def __init__(self):
        pass

    def put(self, token):
        print(f"The next token is {token}")

    def end(self):
        pass

streamer = MyStreamer()
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=20)
```
Not only does the generator implementation save lines of code and simplify the syntax, but Python generators return iterables, which makes it easy to use all sorts of existing Python tools without modification. For example, you can:
### Enumerate
```
for idx, token in enumerate(model.generate(**inputs, max_new_tokens=20, yield_tokens=True)):
    print(f"The {idx}'th token is {token}")
```
### Progress bar with TQDM
A progress bar appears in the CLI or a Jupyter notebook, updating in real time.
```
for token in tqdm(model.generate(**inputs, max_new_tokens=20, yield_tokens=True)):
    my_endpoint.post(token)
```
And there are many more tools that would integrate just as easily!
In this case I proposed tokens because that is easier to think about and matches the current streamer implementation, but it may be easier to implement yielding a list of lists of tokens, since beam search and similar strategies consider multiple beams (multiple sequences) at any given time. This would enable more features on the developer side, especially in the case where you want to generate multiple sequences in one call. But this is more of a sidenote, and either this or the base implementation would be really awesome.
### Your contribution
I'm not planning to put in a PR anytime soon, but I did have a look through the code before finding the new streamer WIP feature. It seems like it would be fairly easy to implement a version of what I am describing. You just need to add a flag to optionally
```
yield new_token
```
inside each of beam_search, beam_sample, greedy_search, etc., and then update the model.generate wrapper to also optionally yield the results from each of these.
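As a point of reference (not the proposed `yield_tokens` API, which does not exist), here is a rough sketch of what a user-level token-by-token generator can look like today with a plain greedy loop over a decoder-only model; the model name and prompt are placeholders:
```python
# A user-level greedy-decoding generator that yields one token id at a time.
# This is a sketch, not part of the transformers API.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def generate_tokens(model, input_ids, max_new_tokens=20):
    past_key_values = None
    for _ in range(max_new_tokens):
        out = model(input_ids=input_ids, past_key_values=past_key_values, use_cache=True)
        past_key_values = out.past_key_values
        next_token = out.logits[:, -1, :].argmax(dim=-1, keepdim=True)
        yield next_token.item()
        input_ids = next_token  # with a cache, only the new token is fed back

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
input_ids = tokenizer("Hello, I'm a language model,", return_tensors="pt").input_ids
for idx, token_id in enumerate(generate_tokens(model, input_ids)):
    print(f"The {idx}'th token is {tokenizer.decode(token_id)}")
```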
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23640/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 2,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23640/timeline
| null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/23639
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23639/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23639/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23639/events
|
https://github.com/huggingface/transformers/issues/23639
| 1,718,599,217 |
I_kwDOCUB6oc5mb74x
| 23,639 |
How to generate one token after the other when no past_key_values is returned?
|
{
"login": "junoriosity",
"id": 5286536,
"node_id": "MDQ6VXNlcjUyODY1MzY=",
"avatar_url": "https://avatars.githubusercontent.com/u/5286536?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/junoriosity",
"html_url": "https://github.com/junoriosity",
"followers_url": "https://api.github.com/users/junoriosity/followers",
"following_url": "https://api.github.com/users/junoriosity/following{/other_user}",
"gists_url": "https://api.github.com/users/junoriosity/gists{/gist_id}",
"starred_url": "https://api.github.com/users/junoriosity/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/junoriosity/subscriptions",
"organizations_url": "https://api.github.com/users/junoriosity/orgs",
"repos_url": "https://api.github.com/users/junoriosity/repos",
"events_url": "https://api.github.com/users/junoriosity/events{/privacy}",
"received_events_url": "https://api.github.com/users/junoriosity/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"This is an encoder model, so you can't use it as a decoder model for generation like this.",
"Hi @sgugger, many thanks for getting back to me. \r\n\r\nSo is there no possibility to use that model for generative purposes?\r\n\r\nAlso, could we have some slight influence on the selection of each individual token?",
"Hi @junoriosity -- only model architectures with decoder blocks can be used for generative purposes, see [this page from our NLP course](https://huggingface.co/learn/nlp-course/chapter1/4?fw=pt#general-architecture). If these concepts are new to you, I'd recommend going through the course, which you can complete in a single day!\r\n\r\n`allenai/scibert_scivocab_uncased`, a BERT model, so it doesn't have a decoder block :)",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,684 | 1,689 | 1,689 |
NONE
| null |
### System Info
Python: 3.11
transformers==4.29.2
torch==2.0.1
### Who can help?
@ArthurZucker @younesbelkada @gante @sgugger
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModelForCausalLM

device = "cuda"
tokenizer = AutoTokenizer.from_pretrained('allenai/scibert_scivocab_uncased')
model = AutoModelForCausalLM.from_pretrained('allenai/scibert_scivocab_uncased').to(device)
input_sequence = "Hello, I'm a language model,"
inputs = torch.as_tensor(tokenizer.encode(input_sequence)).unsqueeze(0).to(device)
attention_mask = torch.as_tensor(tokenizer(input_sequence).attention_mask).unsqueeze(0).to(device)
past_key_values = None
count = 0
complete_token = []
with torch.no_grad():
    while count < 10:
        count += 1
        print("Iteration no.: " + str(count))
        if count > 1:
            inputs = input_token
        print(inputs.to(device))
        print(attention_mask)
        print(past_key_values[0][0].shape if past_key_values else None)
        model_out = model(input_ids=inputs.to(device), attention_mask=attention_mask, past_key_values=past_key_values)
        logits = model_out.logits[:, -1, :]
        past_key_values = model_out.past_key_values
        topk_values, topk_indices = torch.topk(logits, 5)
        log_probs = F.softmax(topk_values, dim=-1)
        inputs_in_topk = torch.multinomial(log_probs, num_samples=1, replacement=True)
        input_token = torch.gather(topk_indices, 1, inputs_in_topk)
        attention_mask = torch.concat((attention_mask, torch.tensor([[1]]).to(attention_mask.device)), dim=1)
        complete_token.append(input_token)
```
### Expected behavior
I would like to use SciBERT for iterated token generation; my code is above.
However, past_key_values is None all the time. I tried this approach with other models and past_key_values is not None there. How can I make the iteration work here, so that the model retains the knowledge of the previous iterations?
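As noted in the comments, `allenai/scibert_scivocab_uncased` is an encoder-only BERT, so its forward pass never returns `past_key_values`. With a decoder-only checkpoint the same loop structure works; below is a sketch using `gpt2` purely as an illustration (any causal LM would do), with the prompt kept from the code above:
```python
# The same iteration pattern with a decoder-only model: past_key_values is
# populated, so only the newly sampled token is fed back on each step.
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("Hello, I'm a language model,", return_tensors="pt").input_ids
attention_mask = torch.ones_like(inputs)
past_key_values = None

with torch.no_grad():
    for _ in range(10):
        out = model(input_ids=inputs, attention_mask=attention_mask,
                    past_key_values=past_key_values, use_cache=True)
        past_key_values = out.past_key_values  # not None for decoder models
        topk_values, topk_indices = torch.topk(out.logits[:, -1, :], 5)
        probs = F.softmax(topk_values, dim=-1)
        next_token = torch.gather(topk_indices, 1, torch.multinomial(probs, num_samples=1))
        inputs = next_token  # only the new token, thanks to the cache
        attention_mask = torch.cat((attention_mask, torch.ones((1, 1), dtype=torch.long)), dim=1)
```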
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23639/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23639/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23638
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23638/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23638/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23638/events
|
https://github.com/huggingface/transformers/pull/23638
| 1,718,566,500 |
PR_kwDOCUB6oc5Q9YNB
| 23,638 |
Image column name missing for CLIP
|
{
"login": "TJKlein",
"id": 7634373,
"node_id": "MDQ6VXNlcjc2MzQzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7634373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TJKlein",
"html_url": "https://github.com/TJKlein",
"followers_url": "https://api.github.com/users/TJKlein/followers",
"following_url": "https://api.github.com/users/TJKlein/following{/other_user}",
"gists_url": "https://api.github.com/users/TJKlein/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TJKlein/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TJKlein/subscriptions",
"organizations_url": "https://api.github.com/users/TJKlein/orgs",
"repos_url": "https://api.github.com/users/TJKlein/repos",
"events_url": "https://api.github.com/users/TJKlein/events{/privacy}",
"received_events_url": "https://api.github.com/users/TJKlein/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23638). All of your documentation changes will be reflected on that endpoint.",
"Hi @TJKlein, thanks for opening this PR. \r\n\r\nCould you provide a bit more information about the issue this is resolving? There isn't a `transform_images` method in the `run_clip.py` script or in the [dataset preparation script](https://huggingface.co/datasets/ydshieh/coco_dataset_script/blob/main/coco_dataset_script.py)",
"Well, there is:\r\nhttps://github.com/huggingface/transformers/blob/28f589f0a46cced297fba46014ee73b862fa247b/examples/pytorch/contrastive-image-text/run_clip.py#L396\r\n\r\nhttps://github.com/huggingface/transformers/blob/28f589f0a46cced297fba46014ee73b862fa247b/examples/pytorch/contrastive-image-text/run_clip.py#L410\r\n\r\nSince the image_column is not a standard feature it will get pruned from trainer. Therefore it has to be specified to the trainer. That's what I did.\r\n\r\n```\r\ntrainer._signature_columns=[\"input_ids\", \"attention_mask\", data_args.image_column]\r\n```",
"@TJKlein Apologies, I completely missed the function, sorry! \r\n\r\nCould you expand on the issue this tackles - perhaps with a code snippet to reproduce? I'm able to run the example script with the following command: \r\n\r\n```python\r\npython examples/pytorch/contrastive-image-text/run_clip.py \\\r\n --output_dir ./clip-roberta-finetuned \\\r\n --model_name_or_path ../clip-roberta \\\r\n --data_dir $PWD/data \\\r\n --dataset_name ydshieh/coco_dataset_script \\\r\n --dataset_config_name=2017 \\\r\n --image_column image_path \\\r\n --caption_column caption \\\r\n --remove_unused_columns=False \\\r\n --do_train --do_eval \\\r\n --per_device_train_batch_size=\"64\" \\\r\n --per_device_eval_batch_size=\"64\" \\\r\n --learning_rate=\"5e-5\" --warmup_steps=\"0\" --weight_decay 0.1 \\\r\n --overwrite_output_dir\r\n```\r\n\r\n`_signature_columns` is a private attribute, and not something we want to modify directly like this. Understanding a bit more about how the issue arises, hopefully we'll be able to find another approach. ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,684 | 1,688 | 1,688 |
NONE
| null |
# What does this PR do?
The patch makes sure that the image features do not get removed. Otherwise **transform_images(examples)** raises a **KeyError** for the image_column when trying to transform the images.
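For reference, the supported way to keep a non-signature column such as `image_path` around, without touching the private `trainer._signature_columns` attribute, is to disable column pruning via `remove_unused_columns` (as in the example command in the discussion above). A minimal sketch, with the output directory and batch size as placeholders:
```python
# Keep the image column available to the collator by disabling column pruning,
# instead of editing the Trainer's private `_signature_columns` attribute.
from transformers import Trainer, TrainingArguments

training_args = TrainingArguments(
    output_dir="./clip-roberta-finetuned",
    remove_unused_columns=False,  # keep `image_path` so transform_images() can read it
    per_device_train_batch_size=64,
)
# trainer = Trainer(model=model, args=training_args, train_dataset=train_dataset, ...)
```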
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23638/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23638/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23638",
"html_url": "https://github.com/huggingface/transformers/pull/23638",
"diff_url": "https://github.com/huggingface/transformers/pull/23638.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23638.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/23637
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23637/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23637/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23637/events
|
https://github.com/huggingface/transformers/pull/23637
| 1,718,528,945 |
PR_kwDOCUB6oc5Q9Q4e
| 23,637 |
Fix typo in a parameter name for open llama model
|
{
"login": "aaalexlit",
"id": 116374290,
"node_id": "U_kgDOBu-7Eg",
"avatar_url": "https://avatars.githubusercontent.com/u/116374290?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aaalexlit",
"html_url": "https://github.com/aaalexlit",
"followers_url": "https://api.github.com/users/aaalexlit/followers",
"following_url": "https://api.github.com/users/aaalexlit/following{/other_user}",
"gists_url": "https://api.github.com/users/aaalexlit/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aaalexlit/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aaalexlit/subscriptions",
"organizations_url": "https://api.github.com/users/aaalexlit/orgs",
"repos_url": "https://api.github.com/users/aaalexlit/repos",
"events_url": "https://api.github.com/users/aaalexlit/events{/privacy}",
"received_events_url": "https://api.github.com/users/aaalexlit/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Hi @aaalexlit, thanks for opening this PR! \r\n\r\nAs the model was part of the most recent release, we'll need to ensure that these changes are backwards compatible with the previous configurations. What I would suggest is popping the previous argument `use_memorry_efficient_attention` from the config kwargs during initialization, and using that value if passed in, otherwise defaulting to `use_memory_efficient_attention`. ",
"Thanks for pointing that out, @amyeroberts! Makes all the senses in the world.\r\nI hope I interpreted your recommendation correctly"
] | 1,684 | 1,684 | 1,684 |
CONTRIBUTOR
| null |
# What does this PR do?
Renames a parameter `use_memorry_efficient_attention` to `use_memory_efficient_attention`
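Per the review discussion above, the rename is made backwards compatible by accepting the old misspelled kwarg from previously saved configs. A simplified sketch of that pattern (class name shortened; not the exact merged code):
```python
# Simplified sketch of the backward-compatibility handling suggested in review:
# accept the old misspelled kwarg if a saved config still carries it, otherwise
# use the corrected name.
class OpenLlamaConfigSketch:
    def __init__(self, use_memory_efficient_attention=True, **kwargs):
        # Old configs may have been serialized with the typo'd key.
        if "use_memorry_efficient_attention" in kwargs:
            use_memory_efficient_attention = kwargs.pop("use_memorry_efficient_attention")
        self.use_memory_efficient_attention = use_memory_efficient_attention
```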
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23637/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23637/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23637",
"html_url": "https://github.com/huggingface/transformers/pull/23637",
"diff_url": "https://github.com/huggingface/transformers/pull/23637.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23637.patch",
"merged_at": 1684843079000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23630
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23630/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23630/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23630/events
|
https://github.com/huggingface/transformers/issues/23630
| 1,718,498,828 |
I_kwDOCUB6oc5mbjYM
| 23,630 |
causalLM Text generation with OPT models give weird results
|
{
"login": "SteffenBauer",
"id": 5973070,
"node_id": "MDQ6VXNlcjU5NzMwNzA=",
"avatar_url": "https://avatars.githubusercontent.com/u/5973070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SteffenBauer",
"html_url": "https://github.com/SteffenBauer",
"followers_url": "https://api.github.com/users/SteffenBauer/followers",
"following_url": "https://api.github.com/users/SteffenBauer/following{/other_user}",
"gists_url": "https://api.github.com/users/SteffenBauer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SteffenBauer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SteffenBauer/subscriptions",
"organizations_url": "https://api.github.com/users/SteffenBauer/orgs",
"repos_url": "https://api.github.com/users/SteffenBauer/repos",
"events_url": "https://api.github.com/users/SteffenBauer/events{/privacy}",
"received_events_url": "https://api.github.com/users/SteffenBauer/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @younesbelkada @gante ",
"hi @SteffenBauer \r\nthanks for the issue!\r\nI just checked on our daily CI tests that also checks the generations on OPT and there seem to be no issue on our side. What I can see is that you are using a version of torch that sounds a bit exotic `2.0.0a0+fe05266f.nv23.4`\r\nDoes this behavior happen with OPT only or with other models as well?",
"Hi @younesbelkada \r\nthanks for checking this! \r\n\r\nThe torch version is a special package provided by Nvidia, compiled with platform-dependent patches for the Jetson platform. (https://developer.nvidia.com/embedded/downloads)\r\n\r\nNvidia released yesterday an update to torch-2.0.0 for Jetson, but the problem still persists unchanged.\r\n\r\nAs it works on your environment, it strongly hints that the problem is Jetson-platform specific. I will investigate this further.\r\n\r\nCould also be connected with issue #23413, where an issue in torch might be the problem.\r\n\r\nI will close this now, as it doesn't seem to be an issue with the transformers library.\r\n",
"Thank you very much @SteffenBauer , please let us know how it goes",
"Hey @SteffenBauer 👋 \r\n\r\nI can confirm that things seem to be fine on our end, using the latest release (v4.30). Have a look at [this notebook](https://colab.research.google.com/drive/11iSgaa6j0S9jbIMg2C9iqUTfhHEtFebq?usp=sharing) :)",
"Hi @gante ,\r\n\r\nthanks for confirming, your colab notebook also works here fine with expected result. \r\nAs I copied the example code verbatim to a python shell on the Jetson Orin, the problem now is confirmed to be on the side of the platform.\r\nI hadn't time to investigate this further, no clue so far if it caused by the ARM64 architecture, or the Nvidia provided PyTorch library. As the Nvidia Jetson platform is an important tool for DL practitioners, this should indeed be looked into more deeply.\r\n\r\nWill update here once I have results.",
"@SteffenBauer can you set the device with PyTorch's `.to()`, using Jetson? If so, you could try running the model on CPU and GPU to determine whether the problem is exclusive to a particular device or at a software level.\r\n\r\nOpening an issue with Nvidia may help too :)",
"Hi @gante ,\r\nyou might have found something! There is indeed a difference between CPU and GPU.\r\n\r\n(by the way, I just upgraded the transformer library to 4.30.2 before doing this test)\r\n\r\nI use this code, directly copied from the notebook you supplied:\r\n\r\n```python\r\nfrom transformers import OPTForCausalLM, AutoTokenizer\r\nimport torch\r\n\r\ndevice = 'cpu' # 'cuda'\r\n\r\nmodel = OPTForCausalLM.from_pretrained(\"facebook/opt-1.3b\").to(device)\r\ntokenizer = AutoTokenizer.from_pretrained(\"facebook/opt-1.3b\")\r\nprompt = \"Hey, are you consciours? Can you talk to me?\"\r\n\r\ninputs = tokenizer(prompt, return_tensors=\"pt\")\r\ngenerate_ids = model.generate(inputs.input_ids.to(device), max_length=30)\r\ntokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]\r\n```\r\n`device = 'cpu'`:\r\n \"Hey, are you consciours? Can you talk to me? it it it it it it it it it it it it it it it\"\r\n\r\n`device = 'cuda'`:\r\n\"Hey, are you consciours? Can you talk to me?\\nI'm not a conscioure, but I can talk to\"\r\n",
"@SteffenBauer On my end, using an x86 CPU, I get the same outputs as your CUDA output above. This seems to point to a PyTorch+ARM64 issue 🤔 "
] | 1,684 | 1,686 | 1,685 |
NONE
| null |
### System Info
* Platform: NVIDIA Jetson Orin 32GB development kit
* CPU `ARMv8 Processor rev 1 (v8l)`
* Linux `5.10.104-tegra #1 SMP PREEMPT Sun Mar 19 07:55:28 PDT 2023 aarch64 aarch64 aarch64 GNU/Linux`
* Python 3.8.10
* transformers 4.29.2
* torch `2.0.0a0+fe05266f.nv23.4` (compiled package for Jetson platform provided by Nvidia)
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
In my first attempts to run causal LM text generation with Hugging Face transformer models, I get weird and almost certainly wrong results with the OPT model family.
I ran the official example from the Hugging Face OPT documentation:
https://huggingface.co/docs/transformers/model_doc/opt#transformers.OPTForCausalLM.forward.example
```
~$ python3
Python 3.8.10 (default, Mar 13 2023, 10:26:41)
[GCC 9.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from transformers import AutoTokenizer, OPTForCausalLM
>>> model = OPTForCausalLM.from_pretrained("facebook/opt-350m")
Downloading (…)lve/main/config.json: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████| 644/644 [00:00<00:00, 239kB/s]
Downloading pytorch_model.bin: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████| 663M/663M [00:51<00:00, 12.8MB/s]
Downloading (…)neration_config.json: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████| 137/137 [00:00<00:00, 58.7kB/s]
>>> tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m")
Downloading (…)okenizer_config.json: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████| 685/685 [00:00<00:00, 600kB/s]
Downloading (…)olve/main/vocab.json: 100%|████████████████████████████████████████████████████████████████████████████████████████████████| 899k/899k [00:00<00:00, 2.33MB/s]
Downloading (…)olve/main/merges.txt: 100%|████████████████████████████████████████████████████████████████████████████████████████████████| 456k/456k [00:00<00:00, 1.61MB/s]
Downloading (…)cial_tokens_map.json: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████| 441/441 [00:00<00:00, 324kB/s]
>>> prompt = "Hey, are you consciours? Can you talk to me?"
>>> inputs = tokenizer(prompt, return_tensors="pt")
>>> generate_ids = model.generate(inputs.input_ids, max_length=30)
>>> tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
'Hey, are you consciours? Can you talk to me?<s><s><s><s><s><s><s><s><s><s><s><s><s><s><s>'
```
For OPT-1.3b I get the following, repeatedly:
```
>>> model = OPTForCausalLM.from_pretrained("facebook/opt-1.3b")
>>> tokenizer = AutoTokenizer.from_pretrained("facebook/opt-1.3b")
>>> inputs = tokenizer(prompt, return_tensors="pt")
>>> generate_ids = model.generate(inputs.input_ids, max_length=30)
>>> tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
'Hey, are you consciours? Can you talk to me? it it it it it it it it it it it it it it it'
```
Any other prompt always gives the same kind of result: just `<s>` tokens or repeated `it`.
### Expected behavior
OPT causal LM text generation should return text that does not consist only of `<s>` tokens or repeated `it`, as in the given documentation example:
```
>>> tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
"Hey, are you consciours? Can you talk to me?\nI'm not consciours, but I can talk to you."
```
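Following the suggestion in the comments to compare devices, a single forward pass on CPU and GPU can show whether the divergence already appears in the logits (pointing at the platform's kernels) rather than in `generate()` itself. A small sketch using the same checkpoint and prompt as above:
```python
# Compare one forward pass on CPU vs. GPU: if the last-token logits already
# disagree, the problem is below the generation loop.
import torch
from transformers import AutoTokenizer, OPTForCausalLM

tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m")
inputs = tokenizer("Hey, are you consciours? Can you talk to me?", return_tensors="pt")

model_cpu = OPTForCausalLM.from_pretrained("facebook/opt-350m")
with torch.no_grad():
    logits_cpu = model_cpu(**inputs).logits[:, -1, :]

model_gpu = OPTForCausalLM.from_pretrained("facebook/opt-350m").to("cuda")
with torch.no_grad():
    logits_gpu = model_gpu(**{k: v.to("cuda") for k, v in inputs.items()}).logits[:, -1, :]

print("max abs diff:", (logits_cpu - logits_gpu.cpu()).abs().max().item())
print("same top-1 token:", logits_cpu.argmax().item() == logits_gpu.argmax().item())
```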
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23630/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23630/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23625
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23625/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23625/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23625/events
|
https://github.com/huggingface/transformers/pull/23625
| 1,718,429,213 |
PR_kwDOCUB6oc5Q89Oe
| 23,625 |
[wip: testing doc-builder]
|
{
"login": "mishig25",
"id": 11827707,
"node_id": "MDQ6VXNlcjExODI3NzA3",
"avatar_url": "https://avatars.githubusercontent.com/u/11827707?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mishig25",
"html_url": "https://github.com/mishig25",
"followers_url": "https://api.github.com/users/mishig25/followers",
"following_url": "https://api.github.com/users/mishig25/following{/other_user}",
"gists_url": "https://api.github.com/users/mishig25/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mishig25/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mishig25/subscriptions",
"organizations_url": "https://api.github.com/users/mishig25/orgs",
"repos_url": "https://api.github.com/users/mishig25/repos",
"events_url": "https://api.github.com/users/mishig25/events{/privacy}",
"received_events_url": "https://api.github.com/users/mishig25/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"@mishig25 Should be fixed by [this](https://github.com/huggingface/doc-builder/pull/373/commits/4895cfbf332994c5c5d73e83f990fe017184ecc5)",
"Rerunning the doc-build https://github.com/huggingface/transformers/actions/runs/5036878423 ",
"Re running the doc-build: https://github.com/huggingface/transformers/actions/runs/5036878423",
"Latest error message is:\r\n```\r\n<img> is a void element and cannot have children, or a closing tag\r\n```\r\nwhich appears to be an error by the person who made the docs? In the original version, did you attempt to fix those for the user?",
"Confirmed (https://huggingface.co/docs/transformers/v4.29.1/pt/index) - see trailing \"</img>\" after image:\r\n\r\n",
"I made a PR to fix the closing tag https://github.com/huggingface/transformers/pull/23646",
"Thanks a lot for [this PR](https://github.com/huggingface/transformers/pull/23625#issuecomment-1556623634).\r\n\r\nDespite the img tag was wrong before, previous doc-builds were not failing. Does it mean that a change in https://github.com/huggingface/doc-builder/pull/373 is making closing tag of img fail ?",
"Well yes since `<img></img>` is invalid, svelte complains.\r\n\r\nPreviously, the error was not being detected because </img> was being encoded as `</img>`",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23625). All of your documentation changes will be reflected on that endpoint."
] | 1,684 | 1,685 | 1,685 |
CONTRIBUTOR
| null |
testing https://github.com/huggingface/doc-builder/pull/373
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23625/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23625/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23625",
"html_url": "https://github.com/huggingface/transformers/pull/23625",
"diff_url": "https://github.com/huggingface/transformers/pull/23625.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23625.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/23621
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23621/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23621/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23621/events
|
https://github.com/huggingface/transformers/pull/23621
| 1,718,377,360 |
PR_kwDOCUB6oc5Q8zUv
| 23,621 |
🌐 [i18n-KO] Translated `tasks/monocular_depth_estimation.mdx` to Korean
|
{
"login": "HanNayeoniee",
"id": 33839093,
"node_id": "MDQ6VXNlcjMzODM5MDkz",
"avatar_url": "https://avatars.githubusercontent.com/u/33839093?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HanNayeoniee",
"html_url": "https://github.com/HanNayeoniee",
"followers_url": "https://api.github.com/users/HanNayeoniee/followers",
"following_url": "https://api.github.com/users/HanNayeoniee/following{/other_user}",
"gists_url": "https://api.github.com/users/HanNayeoniee/gists{/gist_id}",
"starred_url": "https://api.github.com/users/HanNayeoniee/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HanNayeoniee/subscriptions",
"organizations_url": "https://api.github.com/users/HanNayeoniee/orgs",
"repos_url": "https://api.github.com/users/HanNayeoniee/repos",
"events_url": "https://api.github.com/users/HanNayeoniee/events{/privacy}",
"received_events_url": "https://api.github.com/users/HanNayeoniee/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,684 | 1,684 | 1,684 |
CONTRIBUTOR
| null |
<!-- Please title the PR "🌐 [i18n-KO] Translated `<your_file>.mdx` to Korean" -->
# What does this PR do?
Translated the `tasks/monocular_depth_estimation.mdx` file of the documentation to Korean.
Thank you in advance for your review.
Part of https://github.com/huggingface/transformers/issues/20179
<!-- This leaves a record on the main issue! If you are practicing with the PseudoLab repo, please remove this. :smile: -->
## Before reviewing
- [x] Check for missing / redundant translations (번역 누락/중복 검사)
- [x] Grammar Check (맞춤법 검사)
- [x] Review or Add new terms to glossary (용어 확인 및 추가)
- [x] Check Inline TOC (e.g. `[[lowercased-header]]`)
- [x] Check live-preview for gotchas (live-preview로 정상작동 확인)
## Who can review? (Initial)
<!-- 1. Please reveal the comment below requesting a review from the PseudoLab team members only after all the checks above are complete! -->
Team PseudoLab, may you please review this PR? @0525hhgus, @KIHOON71, @sim-so, @gabrielwithappy, @HanNayeoniee, @wonhyeongseo, @jungnerd
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review? (Final)
<!-- 2. Please reveal the comment below requesting a review from the Hugging Face staff only after the review with the PseudoLab team members is finished! -->
@sgugger, @ArthurZucker, @eunseojo May you please review this PR?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23621/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23621/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23621",
"html_url": "https://github.com/huggingface/transformers/pull/23621",
"diff_url": "https://github.com/huggingface/transformers/pull/23621.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23621.patch",
"merged_at": 1684850080000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23593
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23593/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23593/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23593/events
|
https://github.com/huggingface/transformers/issues/23593
| 1,718,341,283 |
I_kwDOCUB6oc5ma86j
| 23,593 |
`run_mlm.py` doesn't log perplexity to `wandb`
|
{
"login": "david-waterworth",
"id": 5028974,
"node_id": "MDQ6VXNlcjUwMjg5NzQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/5028974?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/david-waterworth",
"html_url": "https://github.com/david-waterworth",
"followers_url": "https://api.github.com/users/david-waterworth/followers",
"following_url": "https://api.github.com/users/david-waterworth/following{/other_user}",
"gists_url": "https://api.github.com/users/david-waterworth/gists{/gist_id}",
"starred_url": "https://api.github.com/users/david-waterworth/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/david-waterworth/subscriptions",
"organizations_url": "https://api.github.com/users/david-waterworth/orgs",
"repos_url": "https://api.github.com/users/david-waterworth/repos",
"events_url": "https://api.github.com/users/david-waterworth/events{/privacy}",
"received_events_url": "https://api.github.com/users/david-waterworth/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @david-waterworth, thanks for raising this issue. \r\n\r\nThere's isn't an `eval_` prefix in front if the perplexity metric because it's not part of the key [when added to the metrics dictionary](https://github.com/huggingface/transformers/blob/2faa09530bc5d29756bddfec12037c066cc85a02/examples/pytorch/language-modeling/run_mlm.py#LL632C19-L632C19). Would you like to open a PR to update this? \r\n\r\nRegarding wandb logging, the integration logic can be [found here](https://github.com/huggingface/transformers/blob/867316670a909dd1a60ad69cdb0c962bdc6f0cd4/src/transformers/integrations.py#L663). These are community contributed and not actively maintained by hugging face. Feel free to open a PR if there's an issue you've spotted in the logging or ping the original contributors of the callback. ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,684 | 1,688 | 1,688 |
NONE
| null |
### System Info
When I run `run_mlm.py`, all of the metrics created with names such as `train_loss` or `eval_loss` are logged to `wandb` reformatted as `train/loss` or `eval/loss`. However, the eval perplexity is simply logged as `perplexity` and is not included in the wandb metrics.
https://github.com/huggingface/transformers/blob/3658488ff77ff8d45101293e749263acf437f4d5/examples/pytorch/language-modeling/run_mlm.py#LL620C1-L635C46
The following is the output from `run_mlm.py`; you can observe that only metrics with an `eval_` prefix are included. I couldn't find where the wandb logging integration lives in the code, or I'd dig further.
``` console
***** eval metrics *****
epoch = 10.0
eval_accuracy = 0.9946
eval_loss = 0.0248
eval_runtime = 0:10:43.84
eval_samples = 62679
eval_samples_per_second = 97.351
eval_steps_per_second = 3.043
perplexity = 1.0251
wandb: Waiting for W&B process to finish... (success).
wandb: / 0.130 MB of 0.130 MB uploaded (0.000 MB deduped)
wandb: Run history:
wandb: eval/accuracy ▁
wandb: eval/loss ▁
wandb: eval/runtime ▁
wandb: eval/samples_per_second ▁
wandb: eval/steps_per_second ▁
wandb: train/epoch ▁▂▂▃▃▄▄▅▅▆▆▇▇███
wandb: train/global_step ▁▂▂▃▃▄▄▅▅▆▆▇▇███
wandb: train/learning_rate █▇▇▆▆▅▅▄▄▃▃▂▂▁
wandb: train/loss █▇▆▅▅▄▃▃▂▂▂▂▁▁
wandb: train/total_flos ▁
wandb: train/train_loss ▁
wandb: train/train_runtime ▁
wandb: train/train_samples_per_second ▁
wandb: train/train_steps_per_second ▁
wandb:
wandb: Run summary:
wandb: eval/accuracy 0.99464
wandb: eval/loss 0.02479
wandb: eval/runtime 643.8475
wandb: eval/samples_per_second 97.351
wandb: eval/steps_per_second 3.043
wandb: train/epoch 10.0
wandb: train/global_step 16540
wandb: train/learning_rate 0.0
wandb: train/loss 0.0568
wandb: train/total_flos 3.677755155323775e+18
wandb: train/train_loss 0.02581
wandb: train/train_runtime 87644.5711
wandb: train/train_samples_per_second 144.98
wandb: train/train_steps_per_second 0.189
```
I'm also not getting per-epoch eval metrics logged to wandb but I may have omitted a cmd line parameter.
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Run train_mlm with wandb enabled
### Expected behavior
Log `eval/perplexity` to `wandb`
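A minimal sketch of one possible fix (hypothetical code, not a merged patch), assuming `trainer` is the `Trainer` instance built earlier in `run_mlm.py`: give the metric the `eval_` prefix suggested in the comments and push it through `trainer.log` so the reporting callbacks (including wandb) see it.
```python
import math

# Hypothetical patch sketch for the evaluation branch of run_mlm.py.
metrics = trainer.evaluate()
try:
    perplexity = math.exp(metrics["eval_loss"])
except OverflowError:
    perplexity = float("inf")

metrics["eval_perplexity"] = perplexity       # "eval_" prefix instead of the bare "perplexity" key
trainer.log({"eval_perplexity": perplexity})  # route it through the callbacks so wandb records eval/perplexity

trainer.log_metrics("eval", metrics)
trainer.save_metrics("eval", metrics)
```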
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23593/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23593/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23552
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23552/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23552/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23552/events
|
https://github.com/huggingface/transformers/issues/23552
| 1,718,308,941 |
I_kwDOCUB6oc5ma1BN
| 23,552 |
TextIteratorStreamer cannot be used with TextGenerationPipeline
|
{
"login": "grafail",
"id": 47496212,
"node_id": "MDQ6VXNlcjQ3NDk2MjEy",
"avatar_url": "https://avatars.githubusercontent.com/u/47496212?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/grafail",
"html_url": "https://github.com/grafail",
"followers_url": "https://api.github.com/users/grafail/followers",
"following_url": "https://api.github.com/users/grafail/following{/other_user}",
"gists_url": "https://api.github.com/users/grafail/gists{/gist_id}",
"starred_url": "https://api.github.com/users/grafail/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/grafail/subscriptions",
"organizations_url": "https://api.github.com/users/grafail/orgs",
"repos_url": "https://api.github.com/users/grafail/repos",
"events_url": "https://api.github.com/users/grafail/events{/privacy}",
"received_events_url": "https://api.github.com/users/grafail/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"I encountered the same problem, I submitted a PR for it.\r\n\r\n",
"I am having the same issue and hope that it will be resolved soon. Thanks!",
"@gante comes back wednesday.\r\n\r\nI kind of agree a shallow copy should be enough and not a deepcopy, but since the code was deliberate there might be reasons for it. If it's ok let's wait for joao to come back.",
"Hey everyone! Apologies for the long delay 🤗 \r\n\r\nI agree with @Narsil, we can make it a shallow copy (I err toward deep copies, as I've been bitten by unexpected side-effects in the past). I believe the only modifications the object sees are the ones [here](https://github.com/huggingface/transformers/blob/535542d38d7f19c6347ad684347737a38107f148/src/transformers/pipelines/text_generation.py#L246), for which a shallow copy is enough."
] | 1,684 | 1,686 | 1,686 |
NONE
| null |
### System Info
- `transformers` version: 4.29.2
- Platform: Linux-5.19.0-41-generic-x86_64-with-glibc2.35
- Python version: 3.8.13
- Huggingface_hub version: 0.14.1
- Safetensors version: not installed
- PyTorch version (GPU?): 2.0.0+cu117 (True)
- Tensorflow version (GPU?): 2.12.0 (True)
- Flax version (CPU?/GPU?/TPU?): 0.5.3 (cpu)
- Jax version: 0.4.10
- JaxLib version: 0.4.10
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@Narsil @gante
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
This issue occurs because the `TextIteratorStreamer` class contains a `Queue` field, which cannot be pickled, and the text generation pipeline runs a deepcopy.
https://github.com/huggingface/transformers/blob/3658488ff77ff8d45101293e749263acf437f4d5/src/transformers/pipelines/text_generation.py#L245
Code to reproduce issue:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextIteratorStreamer, pipeline
model = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")
streamer = TextIteratorStreamer(tokenizer)
pipe = pipeline(
"text-generation", model=model, tokenizer=tokenizer, streamer=streamer
)
pipe("test")
```
Trace:
```python
Traceback (most recent call last):
File "/home/raf/.pyenv/versions/3.8.13/lib/python3.8/threading.py", line 932, in _bootstrap_inner
self.run()
File "/home/raf/.pyenv/versions/3.8.13/lib/python3.8/threading.py", line 870, in run
self._target(*self._args, **self._kwargs)
File "/home/raf/.pyenv/versions/develop/lib/python3.8/site-packages/transformers/pipelines/text_generation.py", line 201, in __call__
return super().__call__(text_inputs, **kwargs)
File "/home/raf/.pyenv/versions/develop/lib/python3.8/site-packages/transformers/pipelines/base.py", line 1119, in __call__
return self.run_single(inputs, preprocess_params, forward_params, postprocess_params)
File "/home/raf/.pyenv/versions/develop/lib/python3.8/site-packages/transformers/pipelines/base.py", line 1126, in run_single
model_outputs = self.forward(model_inputs, **forward_params)
File "/home/raf/.pyenv/versions/develop/lib/python3.8/site-packages/transformers/pipelines/base.py", line 1025, in forward
model_outputs = self._forward(model_inputs, **forward_params)
File "/home/raf/.pyenv/versions/develop/lib/python3.8/site-packages/transformers/pipelines/text_generation.py", line 245, in _forward
generate_kwargs = copy.deepcopy(generate_kwargs)
File "/home/raf/.pyenv/versions/3.8.13/lib/python3.8/copy.py", line 146, in deepcopy
y = copier(x, memo)
File "/home/raf/.pyenv/versions/3.8.13/lib/python3.8/copy.py", line 230, in _deepcopy_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
File "/home/raf/.pyenv/versions/3.8.13/lib/python3.8/copy.py", line 172, in deepcopy
y = _reconstruct(x, memo, *rv)
File "/home/raf/.pyenv/versions/3.8.13/lib/python3.8/copy.py", line 270, in _reconstruct
state = deepcopy(state, memo)
File "/home/raf/.pyenv/versions/3.8.13/lib/python3.8/copy.py", line 146, in deepcopy
y = copier(x, memo)
File "/home/raf/.pyenv/versions/3.8.13/lib/python3.8/copy.py", line 230, in _deepcopy_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
File "/home/raf/.pyenv/versions/3.8.13/lib/python3.8/copy.py", line 172, in deepcopy
y = _reconstruct(x, memo, *rv)
File "/home/raf/.pyenv/versions/3.8.13/lib/python3.8/copy.py", line 270, in _reconstruct
state = deepcopy(state, memo)
File "/home/raf/.pyenv/versions/3.8.13/lib/python3.8/copy.py", line 146, in deepcopy
y = copier(x, memo)
File "/home/raf/.pyenv/versions/3.8.13/lib/python3.8/copy.py", line 230, in _deepcopy_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
File "/home/raf/.pyenv/versions/3.8.13/lib/python3.8/copy.py", line 161, in deepcopy
rv = reductor(4)
TypeError: cannot pickle '_thread.lock' object
Process finished with exit code 137 (interrupted by signal 9: SIGKILL)
```
### Expected behavior
Pipeline should run normally
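For reference, a self-contained sketch (using a hypothetical `FakeStreamer` stand-in, so no `transformers` import is needed) of why the deepcopy fails and of the shallow-copy alternative discussed in the comments:
```python
import copy
import queue


class FakeStreamer:
    """Stand-in for TextIteratorStreamer: it owns a Queue, which cannot be pickled."""

    def __init__(self):
        self.text_queue = queue.Queue()


generate_kwargs = {"max_new_tokens": 20, "streamer": FakeStreamer()}

try:
    copy.deepcopy(generate_kwargs)           # raises TypeError: cannot pickle '_thread.lock' object
except TypeError as e:
    print("deepcopy failed:", e)

shallow = copy.copy(generate_kwargs)         # copies the dict, keeps the same streamer object
shallow["max_new_tokens"] = 50               # mutating top-level keys no longer touches the caller's dict
print(generate_kwargs["max_new_tokens"])     # still 20
```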
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23552/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23552/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23541
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23541/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23541/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23541/events
|
https://github.com/huggingface/transformers/pull/23541
| 1,718,250,493 |
PR_kwDOCUB6oc5Q8bdB
| 23,541 |
Add type hints for PyTorch BERT.
|
{
"login": "coledie",
"id": 39167756,
"node_id": "MDQ6VXNlcjM5MTY3NzU2",
"avatar_url": "https://avatars.githubusercontent.com/u/39167756?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/coledie",
"html_url": "https://github.com/coledie",
"followers_url": "https://api.github.com/users/coledie/followers",
"following_url": "https://api.github.com/users/coledie/following{/other_user}",
"gists_url": "https://api.github.com/users/coledie/gists{/gist_id}",
"starred_url": "https://api.github.com/users/coledie/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/coledie/subscriptions",
"organizations_url": "https://api.github.com/users/coledie/orgs",
"repos_url": "https://api.github.com/users/coledie/repos",
"events_url": "https://api.github.com/users/coledie/events{/privacy}",
"received_events_url": "https://api.github.com/users/coledie/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23541). All of your documentation changes will be reflected on that endpoint.",
"Looks good! The last thing you'll need to do is `pip install transformers[quality]` followed by `make style` in the `transformers` directory. This runs our code formatting tools to make sure everything follows our style guidelines. Once you do that and commit any changes, the tests should pass!\r\n\r\nIf you run `make style` and you get an error, it may indicate some part of your code that has issues that our code style tools can't correct - if that happens, take a look and try to see what's wrong, and reply here if you can't figure it out!",
"It looks like I've got one last error that I cannot figure out, the relevant line of code appears several times in many files without issue otherwise. Could you help with this? ",
"@Rocketknight1 ",
"Ah, thanks for the ping! Investigating now",
"I've investigated and there's an issue in our copy checking code, specifically the `is_copy_consistent` function. This isn't your fault, and I'll need to file a PR to fix it!\r\n\r\n(For internal `transformers` reference): The issue is triggered when a function is copied from with a single-line header, and there is a change that causes its header to now be multi-line (e.g. adding type hints and causing `black` to wrap the line). The `is_copy_consistent` function [builds a replacement function](https://github.com/huggingface/transformers/blob/main/utils/check_copies.py#L238) from the first line of the target function followed by the subsequent lines of the original function, which creates a mangled header if the original header has changed to multi-line but the target has not:\r\n\r\n```python\r\ndef build_inputs_with_special_tokens(self, token_ids_0, token_ids_1=None):\r\n self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None\r\n ) -> List[int]:\r\n```\r\n\r\n@sgugger do you want me to make a PR to fix it?",
"Lol no, fixing this is really not a priority. The copies can be manually updated.\r\nThe only situation this can appear is this one, and it's rare enough that we can deal with it I think.",
"Understood! @coledie you might have to do some manual copying to make this work, in that case. Search the repository for the string `# Copied from transformers.models.bert.tokenization_bert_fast.BertTokenizerFast.build_inputs_with_special_tokens`. This will locate functions that are copied from the BERT function and that our repository tools keep in sync with it. If you manually copy your new `build_inputs_with_special_tokens` header over the existing headers in those functions and then rerun `make fixup` or `make fix-copies`, everything should work and the CI should pass.\r\n\r\nIf you have any issues, let me know and I can make the changes for you!",
"@Rocketknight1 Looks like it is working since test failures are unrelated?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,684 | 1,691 | 1,691 |
NONE
| null |
# What does this PR do?
Add type hints for PyTorch BERT.
Fixes #16059 for PyTorch BERT
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
- text models: @ArthurZucker and @younesbelkada
- type hints: @Rocketknight1
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23541/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23541/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23541",
"html_url": "https://github.com/huggingface/transformers/pull/23541",
"diff_url": "https://github.com/huggingface/transformers/pull/23541.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23541.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/23538
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23538/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23538/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23538/events
|
https://github.com/huggingface/transformers/pull/23538
| 1,718,227,010 |
PR_kwDOCUB6oc5Q8XBm
| 23,538 |
Fix tensor device while attention_mask is not None
|
{
"login": "zspo",
"id": 26846598,
"node_id": "MDQ6VXNlcjI2ODQ2NTk4",
"avatar_url": "https://avatars.githubusercontent.com/u/26846598?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zspo",
"html_url": "https://github.com/zspo",
"followers_url": "https://api.github.com/users/zspo/followers",
"following_url": "https://api.github.com/users/zspo/following{/other_user}",
"gists_url": "https://api.github.com/users/zspo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zspo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zspo/subscriptions",
"organizations_url": "https://api.github.com/users/zspo/orgs",
"repos_url": "https://api.github.com/users/zspo/repos",
"events_url": "https://api.github.com/users/zspo/events{/privacy}",
"received_events_url": "https://api.github.com/users/zspo/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,684 | 1,684 | 1,684 |
CONTRIBUTOR
| null |
# What does this PR do?
This PR fixes the device of the tensor created when `attention_mask` is not None, i.e. the tensor `torch.tensor(torch.finfo(attn_weights.dtype).min)`.
1. I don't set `os.environ["CUDA_VISIBLE_DEVICES"]` because I want to load other models in the same script.
2. I use `torch.cuda.set_device(6)` because the other GPU devices are already occupied.
3. The model is loaded on device `cuda:6`, but the new tensor is created on `cuda:0` when `attention_mask` is not None.
```
{
'input_ids': tensor([[ 1, 29871, 30919]], device='cuda:6'),
'attention_mask': tensor([[1, 1, 1]], device='cuda:6')
}
input_ids device: cuda:6
model device: cuda:6
File "/data1/env/miniconda3/envs/llmdev/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py", line 231, inforward
attn_weights = torch.max(attn_weights, torch.tensor(torch.finfo(attn_weights.dtype).min))
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:6 and cuda:0!
```
```
print('attn_weights device', torch.tensor(torch.finfo(attn_weights.dtype).min).device)
attn_weights device cuda:0
```
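A standalone sketch of the kind of change involved (plain CPU tensors here for illustration; in the real run `attn_weights` lives on `cuda:6`): the clamping constant is created on the same device (and dtype) as `attn_weights` instead of falling back to the default device.
```python
import torch

attn_weights = torch.randn(1, 8, 4, 4)  # stands in for the attention scores on cuda:6
min_value = torch.tensor(
    torch.finfo(attn_weights.dtype).min,
    device=attn_weights.device,  # follow attn_weights instead of defaulting to cuda:0
    dtype=attn_weights.dtype,
)
attn_weights = torch.max(attn_weights, min_value)
```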
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@ArthurZucker and @younesbelkada and @sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23538/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23538/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23538",
"html_url": "https://github.com/huggingface/transformers/pull/23538",
"diff_url": "https://github.com/huggingface/transformers/pull/23538.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23538.patch",
"merged_at": 1684762246000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23535
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23535/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23535/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23535/events
|
https://github.com/huggingface/transformers/pull/23535
| 1,718,223,200 |
PR_kwDOCUB6oc5Q8WUC
| 23,535 |
Bugfix: LLaMA layer norm incorrectly changes input type and consumes lots of memory
|
{
"login": "TimDettmers",
"id": 5260050,
"node_id": "MDQ6VXNlcjUyNjAwNTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/5260050?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TimDettmers",
"html_url": "https://github.com/TimDettmers",
"followers_url": "https://api.github.com/users/TimDettmers/followers",
"following_url": "https://api.github.com/users/TimDettmers/following{/other_user}",
"gists_url": "https://api.github.com/users/TimDettmers/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TimDettmers/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TimDettmers/subscriptions",
"organizations_url": "https://api.github.com/users/TimDettmers/orgs",
"repos_url": "https://api.github.com/users/TimDettmers/repos",
"events_url": "https://api.github.com/users/TimDettmers/events{/privacy}",
"received_events_url": "https://api.github.com/users/TimDettmers/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"> but for all remaining its `float32` this increases the overall memory-footprint of finetuning considerably.\r\n\r\n...how bad was this? like 30% worse?",
"Please merge ^_^",
"I am assuming this does not impact 8bit training as I noticed no change in memory.",
"@official-elinas did an experiment with transformers==4.28.1 vs the code added here, and was unable to reproduce any speed/memory gains\r\n\r\nhttps://wandb.ai/officialelinas/tests?workspace=user-officialelinas\r\n\r\n\r\n\r\n\r\n@TimDettmers what do you think we did wrong? are there any experiments you can share regarding this?\r\n",
"I think unless you've manually set RMSNorm layer's parameters to fp32 while the rest of your model is in fp16/bf16 you won't see any change.\r\n\r\nI'm curious if as a design we should instead allow to specify `dtype` in the layer arguments? `F.softmax` has a dtype argument that allows one to control such a thing more finely? https://pytorch.org/docs/stable/generated/torch.nn.functional.softmax.html"
] | 1,684 | 1,684 | 1,684 |
CONTRIBUTOR
| null |
# What does this PR do?
It is a common setup to run LLaMA in `bfloat16` or `float16` while the `RMSNorm` layers are in `float32`. The current implementation converts the input into `float32` but never converts it back to the original input type. This means that in the first layer the input type is `bfloat16`, but for all remaining layers it is `float32`, which increases the overall memory footprint of fine-tuning considerably.
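A rough sketch of the idea (not the exact diff of this PR): compute the RMS statistics in `float32`, then cast the normalized hidden states back to the caller's dtype before returning, so downstream layers keep seeing `bfloat16`/`float16` activations.
```python
import torch
import torch.nn as nn


class RMSNormSketch(nn.Module):
    def __init__(self, hidden_size: int, eps: float = 1e-6):
        super().__init__()
        self.weight = nn.Parameter(torch.ones(hidden_size))
        self.eps = eps

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        input_dtype = hidden_states.dtype                      # remember bfloat16/float16
        hidden_states = hidden_states.to(torch.float32)        # upcast only for the statistics
        variance = hidden_states.pow(2).mean(-1, keepdim=True)
        hidden_states = hidden_states * torch.rsqrt(variance + self.eps)
        return (self.weight * hidden_states).to(input_dtype)   # cast back before returning
```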
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@younesbelkada @sgugger
Please also see: https://github.com/huggingface/transformers/pull/23479
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23535/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23535/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23535",
"html_url": "https://github.com/huggingface/transformers/pull/23535",
"diff_url": "https://github.com/huggingface/transformers/pull/23535.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23535.patch",
"merged_at": 1684772439000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23534
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23534/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23534/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23534/events
|
https://github.com/huggingface/transformers/pull/23534
| 1,718,222,919 |
PR_kwDOCUB6oc5Q8WQ7
| 23,534 |
Fix tensor device while attention_mask is not None
|
{
"login": "zspo",
"id": 26846598,
"node_id": "MDQ6VXNlcjI2ODQ2NTk4",
"avatar_url": "https://avatars.githubusercontent.com/u/26846598?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zspo",
"html_url": "https://github.com/zspo",
"followers_url": "https://api.github.com/users/zspo/followers",
"following_url": "https://api.github.com/users/zspo/following{/other_user}",
"gists_url": "https://api.github.com/users/zspo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zspo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zspo/subscriptions",
"organizations_url": "https://api.github.com/users/zspo/orgs",
"repos_url": "https://api.github.com/users/zspo/repos",
"events_url": "https://api.github.com/users/zspo/events{/privacy}",
"received_events_url": "https://api.github.com/users/zspo/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23534). All of your documentation changes will be reflected on that endpoint."
] | 1,684 | 1,684 | 1,684 |
CONTRIBUTOR
| null |
# What does this PR do?
This PR fixes the device of the tensor created when `attention_mask` is not None, i.e. the tensor `torch.tensor(torch.finfo(attn_weights.dtype).min)`.
1. I don't set `os.environ["CUDA_VISIBLE_DEVICES"]` because I want to load other models in the same script.
2. I use `torch.cuda.set_device(6)` because the other GPU devices are already occupied.
3. The model is loaded on device `cuda:6`, but the new tensor is created on `cuda:0` when `attention_mask` is not None.
```
{
'input_ids': tensor([[...]], device='cuda:6'),
'attention_mask': tensor([[...]], device='cuda:6')
}
input_ids device: cuda:6
model device: cuda:6
print('attn_weights device', torch.tensor(torch.finfo(attn_weights.dtype).min).device)
attn_weights device cuda:0
```
```
File "/data1/env/miniconda3/envs/llmdev/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py", line 231, inforward
attn_weights = torch.max(attn_weights, torch.tensor(torch.finfo(attn_weights.dtype).min))
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:6 and cuda:0!
```
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@ArthurZucker and @younesbelkada
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23534/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23534/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23534",
"html_url": "https://github.com/huggingface/transformers/pull/23534",
"diff_url": "https://github.com/huggingface/transformers/pull/23534.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23534.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/23530
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23530/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23530/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23530/events
|
https://github.com/huggingface/transformers/issues/23530
| 1,718,194,319 |
I_kwDOCUB6oc5maZCP
| 23,530 |
Incorrect handling of EOS tokens in DataCollatorForLanguageModeling when pad_token is set to eos_token
|
{
"login": "PatrykNeubauer",
"id": 62596795,
"node_id": "MDQ6VXNlcjYyNTk2Nzk1",
"avatar_url": "https://avatars.githubusercontent.com/u/62596795?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PatrykNeubauer",
"html_url": "https://github.com/PatrykNeubauer",
"followers_url": "https://api.github.com/users/PatrykNeubauer/followers",
"following_url": "https://api.github.com/users/PatrykNeubauer/following{/other_user}",
"gists_url": "https://api.github.com/users/PatrykNeubauer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PatrykNeubauer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PatrykNeubauer/subscriptions",
"organizations_url": "https://api.github.com/users/PatrykNeubauer/orgs",
"repos_url": "https://api.github.com/users/PatrykNeubauer/repos",
"events_url": "https://api.github.com/users/PatrykNeubauer/events{/privacy}",
"received_events_url": "https://api.github.com/users/PatrykNeubauer/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"You can do the change on your local fork for your example (or you could use a different token ID for the EOS and padding), but this is the right behavior in `DataCollatorForLanguageModeling`: labels corresponding to the pad token ID should be ignored in the loss computation and are thus set to -100.",
"Makes sense, but then perhaps that part of the course - setting `pad_token ` to `eos_token` - might be misleading? ",
"Ran into a similar issue as @PatrykNeubauer. Although it might be the \"right behavior\", it is pretty easy to make a mistake if you want to have `<|endoftext|>` be a valid predicted token at the end of your text and also follow the suggestion of setting `pad_token = eos_token`.\r\n\r\nI think there might be a few fairly simple, valid approaches to solve this problem:\r\n1. Do what @PatrykNeubauer suggested about using the attention mask. Presumably, if someone passes a pad token with a non-zero attention mask they did this themselves on purpose. I believe this wouldn't change any behavior when following the standard workflow.\r\n2. If `labels` is already defined in the `examples` passed to `torch_call` use these labels for the non-padded `input_ids` in `examples`. Then for each padding token added to `input_ids` add -100 to the labels to pad in the same way.\r\n3. Add an `add_eos_token` argument to `DataCollatorForLanguageModeling` which will add an `eos_token` to the end of final padded output with the correct eos token label. This way we can keep padding the same way but still allow `eos_token` to be added easily and treated correctly. I think outside of this current issue it would also be a nice addition to have an easy way to automatically add the eos token.\r\n\r\n@sgugger as far as I can tell there isn't a great way to (1) do dynamic padding and (2) use a causal language model with an end-of-sequence token. Generally causal models don't have a padding token so we need to use another token. As pretty much every model has an end-of-sequence token it is a natural choice, but then we run into the above issue. I also tried using `DataCollatorWithPadding` but it seems to have an issue with including `labels`, which results in an error (I can give more details if you like).\r\n\r\nI think the best approach that currently works is what you suggested, setting pad to a token other than eos, but this seems sort of hacky as we can't assume some piece of text will be a single token for all causal language model tokenizers. This method is also not mentioned in any tutorial or documentation as far as I can tell which makes it seem like this isn't often used and perhaps is thought to be inelegant. ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,684 | 1,689 | 1,689 |
NONE
| null |
### System Info
Doesn't seem to be version specific, but:
- `transformers` version: 4.29.2
- Platform: Linux-5.15.107+-x86_64-with-glibc2.31
- Python version: 3.10.11
- Huggingface_hub version: 0.14.1
- Safetensors version: not installed
- PyTorch version (GPU?): 2.0.1+cu118 (False)
- Tensorflow version (GPU?): 2.12.0 (False)
- Flax version (CPU?/GPU?/TPU?): 0.6.9 (cpu)
- Jax version: 0.4.8
- JaxLib version: 0.4.7
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@sgugger I think?
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
When using `DataCollatorForLanguageModeling` for CLM with `pad_token` set to `eos_token`, as shown [here](https://huggingface.co/docs/transformers/tasks/language_modeling), all EOS tokens in the labels are overwritten with -100, instead of just the ones used for padding.
In colab [here](https://colab.research.google.com/drive/13JsslKctbc9JEWbsJEJ6xru4zm6QxmLt?usp=sharing).
1. Prepare sentences that should be included in a single batch with explicit EOS tokens. (here for GPT-2)
```python
sentences = ["Short sentence<|endoftext|>", "Now that's a longer sentence<|endoftext|>"]
```
2. Tokenize them.
```python
tokenizer = AutoTokenizer.from_pretrained('gpt2')
tokenized_sentences = [tokenizer(sentence) for sentence in sentences]
```
3. Collate them with `DataCollatorForLanguageModeling`.
```python
tokenizer.pad_token = tokenizer.eos_token # 50256
data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)
batch = data_collator(tokenized_sentences)
```
Batch is:
```python
{'input_ids': tensor([[16438, 6827, 50256, 50256, 50256, 50256, 50256],
[3844, 326, 338, 257, 2392, 6827, 50256]]),
'attention_mask': tensor([[1, 1, 1, 0, 0, 0, 0],
[1, 1, 1, 1, 1, 1, 1]]),
'labels': tensor([[16438, 6827, -100, -100, -100, -100, -100],
[3844, 326, 338, 257, 2392, 6827, -100]])}
```
Notice how even though `attention_mask` properly has `1` for the EOS tokens, they still got set to `-100`.
### Expected behavior
I'd expect that only the EOS tokens added as padding should be set to -100 in the labels, resulting in the following batch:
```python
{'input_ids': tensor([[16438, 6827, 50256, 50256, 50256, 50256, 50256],
[3844, 326, 338, 257, 2392, 6827, 50256]]),
'attention_mask': tensor([[1, 1, 1, 0, 0, 0, 0],
[1, 1, 1, 1, 1, 1, 1]]),
'labels': tensor([[16438, 6827, 50256, -100, -100, -100, -100],
[3844, 326, 338, 257, 2392, 6827, 50256]])}
```
(notice the `50256` in labels)
I wanted to fine-tune GPT-2 for a rather specific use-case with very short texts, so I added EOS to my samples, and it took me quite a bit of time to spot the issue, which caused the model to fail at generating short texts. After I spotted it, as a workaround I just set the labels manually and use the standard collator instead, which does what I wanted.
I feel like this is a simple change [here](https://github.com/huggingface/transformers/blob/v4.24.0/src/transformers/data/data_collator.py#L738), to use the attention mask instead of `[labels == self.tokenizer.pad_token_id]`.
I'd like to make a PR with that, but I just want to make sure that this is indeed a bug, and not expected behaviour.
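For reference, a minimal sketch of the manual workaround mentioned above (it assumes the `tokenizer` and `tokenized_sentences` from the reproduction; it is not a patch to `transformers` itself): pad with `tokenizer.pad`, then derive the labels from the attention mask so only padding positions are masked with -100.
```python
import torch

def collate_keep_eos(examples):
    batch = tokenizer.pad(examples, return_tensors="pt")
    labels = batch["input_ids"].clone()
    labels[batch["attention_mask"] == 0] = -100  # mask only the padding positions, not the real EOS
    batch["labels"] = labels
    return batch

batch = collate_keep_eos(tokenized_sentences)  # the real EOS (50256) keeps its label
```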
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23530/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23530/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23525
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23525/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23525/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23525/events
|
https://github.com/huggingface/transformers/issues/23525
| 1,718,178,287 |
I_kwDOCUB6oc5maVHv
| 23,525 |
In MLflowCallback, cannot reattach to an existing run
|
{
"login": "ykihong0",
"id": 23263289,
"node_id": "MDQ6VXNlcjIzMjYzMjg5",
"avatar_url": "https://avatars.githubusercontent.com/u/23263289?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ykihong0",
"html_url": "https://github.com/ykihong0",
"followers_url": "https://api.github.com/users/ykihong0/followers",
"following_url": "https://api.github.com/users/ykihong0/following{/other_user}",
"gists_url": "https://api.github.com/users/ykihong0/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ykihong0/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ykihong0/subscriptions",
"organizations_url": "https://api.github.com/users/ykihong0/orgs",
"repos_url": "https://api.github.com/users/ykihong0/repos",
"events_url": "https://api.github.com/users/ykihong0/events{/privacy}",
"received_events_url": "https://api.github.com/users/ykihong0/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @ykihong0, thanks for reporting this issue. \r\n\r\nThe integrations are community added and maintained by their authors - Pinging @orieg who I believe added the env var logic in #17130 :)",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,684 | 1,687 | 1,687 |
NONE
| null |
# In Documentation
[Doc](https://huggingface.co/docs/transformers/main_classes/callback#transformers.integrations.MLflowCallback) says we can reattach to an existing run by setting the **MLFLOW_RUN_ID** environment variable.
# In Code
But the [Code](https://github.com/huggingface/transformers/blob/118e9810687dd713b6be07af79e80eeb1d916908/src/transformers/integrations.py#L990) seems to create a new MLflow run named after **args.run_name**.
# ASIS
- code at https://github.com/huggingface/transformers/blob/118e9810687dd713b6be07af79e80eeb1d916908/src/transformers/integrations.py#L990
```
self._ml_flow.start_run(run_name=args.run_name, nested=self._nested_run)
```
# TOBE
```
self._ml_flow.start_run(run_id=self._run_id, nested=self._nested_run)
```
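For context, a minimal usage sketch of the reattach workflow the documentation describes (the run id and experiment name below are hypothetical; this assumes an MLflow tracking server is reachable and is not a patch to `transformers` itself):
```python
import os

os.environ["MLFLOW_RUN_ID"] = "0123456789abcdef0123456789abcdef"  # hypothetical existing run id
os.environ["MLFLOW_EXPERIMENT_NAME"] = "my-experiment"            # hypothetical experiment name

# trainer = Trainer(...)  # MLflowCallback is attached automatically when mlflow is installed
# trainer.train()         # with the TOBE change above, metrics would resume the existing run
```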
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23525/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23525/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23516
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23516/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23516/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23516/events
|
https://github.com/huggingface/transformers/issues/23516
| 1,718,171,308 |
I_kwDOCUB6oc5maTas
| 23,516 |
KeyphraseExtractionPipeline - Postprocessor - TypeError: postprocess() got an unexpected keyword argument 'all_outputs'
|
{
"login": "eboraks",
"id": 25820920,
"node_id": "MDQ6VXNlcjI1ODIwOTIw",
"avatar_url": "https://avatars.githubusercontent.com/u/25820920?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eboraks",
"html_url": "https://github.com/eboraks",
"followers_url": "https://api.github.com/users/eboraks/followers",
"following_url": "https://api.github.com/users/eboraks/following{/other_user}",
"gists_url": "https://api.github.com/users/eboraks/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eboraks/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eboraks/subscriptions",
"organizations_url": "https://api.github.com/users/eboraks/orgs",
"repos_url": "https://api.github.com/users/eboraks/repos",
"events_url": "https://api.github.com/users/eboraks/events{/privacy}",
"received_events_url": "https://api.github.com/users/eboraks/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hello @eboraks,\r\n\r\nSince the release of Hugging Face 4.28.0, the `postprocess()` method has changed to handle unlimited length of text. Therefore, `all_outputs` can only be used if `transformers>=4.28.0`. I recommend updating transformers since you are using `transformers==4.20.1`.\r\n\r\nIf you can't upgrade to at least that specific version, renaming `all_outputs` to `model_outputs` should solve your problem:\r\n```\r\ndef postprocess(self, model_outputs):\r\n results = super().postprocess(\r\n model_outputs=model_outputs,\r\n aggregation_strategy=AggregationStrategy.FIRST,\r\n )\r\n return np.unique([result.get(\"word\").strip() for result in results])\r\n```\r\n\r\nHave a good day",
"Thank you @luccailliau upgrading to transformers 4.28 solved the issue. "
] | 1,684 | 1,684 | 1,684 |
NONE
| null |
### System Info
transformers version: 4.20.1
Platform: (GNU/Linux 5.15.90.1-microsoft-standard-WSL2 x86_64
Python version: 3.9.16
PyTorch version (GPU?): 2.0.0+cu118
Tensorflow version (GPU?): not installed (NA)
Flax version (CPU?/GPU?/TPU?): not installed (NA)
Jax version: not installed
JaxLib version: not installed
Using GPU in script?: No
Using distributed or parallel set-up in script?: No
### Who can help?
I am trying to use a keyphrase extractor as described here - https://huggingface.co/ml6team/keyphrase-extraction-distilbert-inspec
and I am getting the following error. I am assuming the `all_outputs` parameter is deprecated; the question is, what should I replace it with?
@sgugger @luccailliau
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Copy and paste the code from [model](https://huggingface.co/ml6team/keyphrase-extraction-distilbert-inspec) into a Python file
Here is the code.
```
from transformers import (
TokenClassificationPipeline,
AutoModelForTokenClassification,
AutoTokenizer,
)
from transformers.pipelines import AggregationStrategy
import numpy as np
# Define keyphrase extraction pipeline
class KeyphraseExtractionPipeline(TokenClassificationPipeline):
def __init__(self, model, *args, **kwargs):
super().__init__(
model=AutoModelForTokenClassification.from_pretrained(model),
tokenizer=AutoTokenizer.from_pretrained(model),
*args,
**kwargs
)
def postprocess(self, all_outputs):
results = super().postprocess(
all_outputs=all_outputs,
aggregation_strategy=AggregationStrategy.FIRST,
)
return np.unique([result.get("word").strip() for result in results])
# Load pipeline
model_name = "ml6team/keyphrase-extraction-distilbert-inspec"
extractor = KeyphraseExtractionPipeline(model=model_name)
# Inference
text = """
Keyphrase extraction is a technique in text analysis where you extract the
important keyphrases from a document. Thanks to these keyphrases humans can
understand the content of a text very quickly and easily without reading it
completely. Keyphrase extraction was first done primarily by human annotators,
who read the text in detail and then wrote down the most important keyphrases.
The disadvantage is that if you work with a lot of documents, this process
can take a lot of time.
Here is where Artificial Intelligence comes in. Currently, classical machine
learning methods, that use statistical and linguistic features, are widely used
for the extraction process. Now with deep learning, it is possible to capture
the semantic meaning of a text even better than these classical methods.
Classical methods look at the frequency, occurrence and order of words
in the text, whereas these neural approaches can capture long-term
semantic dependencies and context of words in a text.
""".replace("\n", " ")
keyphrases = extractor(text)
print(keyphrases)
```
When running it, I am getting the following error
```
(hugface) eboraks@mittenwood:~/Projects/studynotes/notebooks$ python keyphrase.py
Traceback (most recent call last):
File "/home/eboraks/Projects/studynotes/notebooks/keyphrase.py", line 53, in <module>
keyphrases = extractor(text)
File "/home/eboraks/anaconda3/envs/hugface/lib/python3.9/site-packages/transformers/pipelines/token_classification.py", line 191, in __call__
return super().__call__(inputs, **kwargs)
File "/home/eboraks/anaconda3/envs/hugface/lib/python3.9/site-packages/transformers/pipelines/base.py", line 1043, in __call__
return self.run_single(inputs, preprocess_params, forward_params, postprocess_params)
File "/home/eboraks/anaconda3/envs/hugface/lib/python3.9/site-packages/transformers/pipelines/base.py", line 1051, in run_single
outputs = self.postprocess(model_outputs, **postprocess_params)
File "/home/eboraks/Projects/studynotes/notebooks/keyphrase.py", line 22, in postprocess
results = super().postprocess(
TypeError: postprocess() got an unexpected keyword argument 'all_outputs'
(hugface) eboraks@mittenwood:~/Projects/studynotes/notebooks$
```
### Expected behavior
# Output
['artificial intelligence' 'classical machine learning' 'deep learning'
'keyphrase extraction' 'linguistic features' 'statistical'
'text analysis']
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23516/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23516/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23485
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23485/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23485/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23485/events
|
https://github.com/huggingface/transformers/pull/23485
| 1,718,048,980 |
PR_kwDOCUB6oc5Q71dK
| 23,485 |
Fix `tests/repo_utils/test_get_test_info.py`
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23485). All of your documentation changes will be reflected on that endpoint.",
"Thanks so much for taking care of this @ydshieh 🔥 !"
] | 1,684 | 1,684 | 1,684 |
COLLABORATOR
| null |
# What does this PR do?
Three tests break on `main` after #23153 (see [here](https://app.circleci.com/pipelines/github/huggingface/transformers/64922/workflows/d53f3351-a1d1-4df5-9c8b-28c45fff4f09/jobs/805142)), but we can't blame @younesbelkada, as the tests are not triggered on PR CI, nor after being merged. The failures are only detected on the nightly CircleCI run.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23485/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23485/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23485",
"html_url": "https://github.com/huggingface/transformers/pull/23485",
"diff_url": "https://github.com/huggingface/transformers/pull/23485.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23485.patch",
"merged_at": 1684558391000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23484
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23484/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23484/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23484/events
|
https://github.com/huggingface/transformers/pull/23484
| 1,717,978,759 |
PR_kwDOCUB6oc5Q7mRb
| 23,484 |
Add LlamaIndex to awesome-transformers.md
|
{
"login": "ravi03071991",
"id": 12198101,
"node_id": "MDQ6VXNlcjEyMTk4MTAx",
"avatar_url": "https://avatars.githubusercontent.com/u/12198101?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ravi03071991",
"html_url": "https://github.com/ravi03071991",
"followers_url": "https://api.github.com/users/ravi03071991/followers",
"following_url": "https://api.github.com/users/ravi03071991/following{/other_user}",
"gists_url": "https://api.github.com/users/ravi03071991/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ravi03071991/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ravi03071991/subscriptions",
"organizations_url": "https://api.github.com/users/ravi03071991/orgs",
"repos_url": "https://api.github.com/users/ravi03071991/repos",
"events_url": "https://api.github.com/users/ravi03071991/events{/privacy}",
"received_events_url": "https://api.github.com/users/ravi03071991/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Interestingly, you have `transformers` as an optional dependency when the Python version is above 3.9. It seems to be used quite a bit for tokenizing/streaming/image captioning etc. \r\n\r\nIf you'd rather not have it as a main dependency, I'd love to hear why so that we may improve! We've been trying to keep the package on the lighter side of dependencies so that it wasn't too heavy to add.\r\n\r\nThank you!"
] | 1,684 | 1,685 | 1,685 |
CONTRIBUTOR
| null |
# What does this PR do?
This PR adds `LlamaIndex` to `awesome-transformers.md`. `LlamaIndex` is a project that provides a central interface to connect your LLMs with external data.
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23484/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23484/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23484",
"html_url": "https://github.com/huggingface/transformers/pull/23484",
"diff_url": "https://github.com/huggingface/transformers/pull/23484.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23484.patch",
"merged_at": 1685021711000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23483
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23483/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23483/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23483/events
|
https://github.com/huggingface/transformers/pull/23483
| 1,717,928,013 |
PR_kwDOCUB6oc5Q7cBS
| 23,483 |
changing the requirements to a cpu torch version that works
|
{
"login": "sshahrokhi",
"id": 7341711,
"node_id": "MDQ6VXNlcjczNDE3MTE=",
"avatar_url": "https://avatars.githubusercontent.com/u/7341711?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshahrokhi",
"html_url": "https://github.com/sshahrokhi",
"followers_url": "https://api.github.com/users/sshahrokhi/followers",
"following_url": "https://api.github.com/users/sshahrokhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sshahrokhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshahrokhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshahrokhi/subscriptions",
"organizations_url": "https://api.github.com/users/sshahrokhi/orgs",
"repos_url": "https://api.github.com/users/sshahrokhi/repos",
"events_url": "https://api.github.com/users/sshahrokhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshahrokhi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,684 | 1,684 | 1,684 |
CONTRIBUTOR
| null |
`requirements.txt` is changed because the versions would have produced errors. More info is in the issue below.
Fixes https://github.com/huggingface/transformers/issues/23418
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sanchit-gandhi
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23483/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23483/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23483",
"html_url": "https://github.com/huggingface/transformers/pull/23483",
"diff_url": "https://github.com/huggingface/transformers/pull/23483.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23483.patch",
"merged_at": 1684774735000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23482
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23482/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23482/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23482/events
|
https://github.com/huggingface/transformers/issues/23482
| 1,717,824,293 |
I_kwDOCUB6oc5mY-sl
| 23,482 |
[Bart] Bart model families’ embedding shape?
|
{
"login": "com3dian",
"id": 57277626,
"node_id": "MDQ6VXNlcjU3Mjc3NjI2",
"avatar_url": "https://avatars.githubusercontent.com/u/57277626?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/com3dian",
"html_url": "https://github.com/com3dian",
"followers_url": "https://api.github.com/users/com3dian/followers",
"following_url": "https://api.github.com/users/com3dian/following{/other_user}",
"gists_url": "https://api.github.com/users/com3dian/gists{/gist_id}",
"starred_url": "https://api.github.com/users/com3dian/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/com3dian/subscriptions",
"organizations_url": "https://api.github.com/users/com3dian/orgs",
"repos_url": "https://api.github.com/users/com3dian/repos",
"events_url": "https://api.github.com/users/com3dian/events{/privacy}",
"received_events_url": "https://api.github.com/users/com3dian/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @com3dian, thanks for raising this issue. \r\n\r\nThe reason the values are different is because the vocab size in the respective model configurations is different: \r\n* [50265 for `facebook/bart-large`](https://huggingface.co/facebook/bart-large/blob/cb48c1365bd826bd521f650dc2e0940aee54720c/config.json#L71)\r\n* [50264 for `facebook/bart-large-cnn`](https://huggingface.co/facebook/bart-large-cnn/blob/3d224934c6541b2b9147e023c2f6f6fe49bd27e1/config.json#L67)\r\n\r\nI'll let @ArthurZucker and @younesbelkada handle whether this is expected :) ",
"Thanks @amyeroberts !\r\n\r\nI found this interesting difference when I was fine-tuning the Bart model for both summarization and other language generation tasks, which will be combined into a complete pipeline. My intention is to utilize the [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn/) model specifically for summarization and the [facebook/bart-large](https://huggingface.co/facebook/bart-large/) model for the other tasks. It would be helpful to identify the missing token in the embedding layer of the [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn/) model to ensure alignment of the embedding sizes.",
"Hey!\r\nBoth tokenizers have the same length, 50265. The last token is `{\"id\":50264,\"special\":true,\"content\":\"<mask>\",\"single_word\":false,\"lstrip\":true,\"rstrip\":false,\"normalized\":true}`. The missing token is the `mask`.\r\nSince the model was finetuned for text-generation (summarization), I think this is expected.\r\n\r\nTell me if this does not answer your question!",
"Thanks @ArthurZucker, your reply helps a lot!"
] | 1,684 | 1,684 | 1,684 |
NONE
| null |
### System Info
- `transformers` version: 4.26.1
- Platform: Linux-5.15.0-71-generic-x86_64-with-glibc2.31
- Python version: 3.9.12
- Huggingface_hub version: 0.12.1
- PyTorch version (GPU?): 1.13.0+cu117 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@ArthurZucker @younesbelkada
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I'm implementing a fine-tuned Bart model for summarization, so I'm deciding between 'facebook/bart-large' and 'facebook/bart-large-cnn'. But when I looked into the layers of both models, I found that the shapes of their embedding layers differ. Is this a special trick?
Code to reproduce
```python
from transformers import BartTokenizer, BartModel, BartForConditionalGeneration
BARTmodel = BartModel.from_pretrained('facebook/bart-large')
CGmodel = BartForConditionalGeneration.from_pretrained('facebook/bart-large-cnn')
BARTmodel.shared
-----------
Embedding(50265, 1024, padding_idx=1)
```
```python
CGmodel.model.shared
-----------
Embedding(50264, 1024, padding_idx=1)
```
### Expected behavior
I expect the embedding layers of the two models to have equal dimensions.
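One way to align the two sizes, if that is all that is needed, is to resize the fine-tuned model's embeddings back to the tokenizer length. This is only a minimal sketch, assuming the missing row is the `<mask>` token as noted in the comments:
```python
from transformers import BartTokenizer, BartForConditionalGeneration

tokenizer = BartTokenizer.from_pretrained("facebook/bart-large-cnn")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large-cnn")

# The tokenizer still has 50265 entries (including <mask>), so resizing the
# embedding matrix to the tokenizer length re-adds the missing row.
model.resize_token_embeddings(len(tokenizer))
print(model.model.shared)  # Embedding(50265, 1024, padding_idx=1)
```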
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23482/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23482/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23481
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23481/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23481/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23481/events
|
https://github.com/huggingface/transformers/pull/23481
| 1,717,764,940 |
PR_kwDOCUB6oc5Q643s
| 23,481 |
add use_orig_params to fsdp with torch compile
|
{
"login": "ouhenio",
"id": 13739349,
"node_id": "MDQ6VXNlcjEzNzM5MzQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/13739349?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ouhenio",
"html_url": "https://github.com/ouhenio",
"followers_url": "https://api.github.com/users/ouhenio/followers",
"following_url": "https://api.github.com/users/ouhenio/following{/other_user}",
"gists_url": "https://api.github.com/users/ouhenio/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ouhenio/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ouhenio/subscriptions",
"organizations_url": "https://api.github.com/users/ouhenio/orgs",
"repos_url": "https://api.github.com/users/ouhenio/repos",
"events_url": "https://api.github.com/users/ouhenio/events{/privacy}",
"received_events_url": "https://api.github.com/users/ouhenio/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23481). All of your documentation changes will be reflected on that endpoint.",
"Hello, as the Accelerate now Powers Trainer, please use the accelerate launcher with Trainer for FSDP. That provides support for `use_orig_params` and hence this PR is no longer required.\r\n\r\nPlease do pip install git+https://github.com/huggingface/transformers and pip install git+https://github.com/huggingface/accelerate\r\n\r\nUse Accelerate launcher with Trainer. More info here: [Using Accelerate Launcher with Trainer](https://huggingface.co/docs/transformers/main/en/main_classes/trainer#using-accelerate-launcher-with-trainer). \r\n\r\nTherefore, closing this PR. Thank you for all the effort!"
] | 1,684 | 1,686 | 1,686 |
NONE
| null |
# What does this PR do?
The FSDP wrapper inside `trainer.py` needs to be initialized with `use_orig_params=True` for FSDP and `torch.compile` to work well together. Therefore, I added a check in the relevant FSDP section: if `torch_compile` is set to `True`, `use_orig_params=True` is passed to FSDP.
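A minimal sketch of that logic (a hypothetical standalone helper, not the actual `trainer.py` diff):
```python
import torch.nn as nn
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

def wrap_with_fsdp(model: nn.Module, torch_compile: bool) -> nn.Module:
    fsdp_kwargs = {}
    if torch_compile:
        # torch.compile traces the original (unflattened) parameters,
        # so FSDP needs to keep them accessible.
        fsdp_kwargs["use_orig_params"] = True
    return FSDP(model, **fsdp_kwargs)
```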
Fixes #23341
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger
@pacman100
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23481/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23481/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23481",
"html_url": "https://github.com/huggingface/transformers/pull/23481",
"diff_url": "https://github.com/huggingface/transformers/pull/23481.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23481.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/23480
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23480/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23480/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23480/events
|
https://github.com/huggingface/transformers/issues/23480
| 1,717,668,704 |
I_kwDOCUB6oc5mYYtg
| 23,480 |
SpeechT5 cannot read numbers
|
{
"login": "jeromemassot",
"id": 20254310,
"node_id": "MDQ6VXNlcjIwMjU0MzEw",
"avatar_url": "https://avatars.githubusercontent.com/u/20254310?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jeromemassot",
"html_url": "https://github.com/jeromemassot",
"followers_url": "https://api.github.com/users/jeromemassot/followers",
"following_url": "https://api.github.com/users/jeromemassot/following{/other_user}",
"gists_url": "https://api.github.com/users/jeromemassot/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jeromemassot/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jeromemassot/subscriptions",
"organizations_url": "https://api.github.com/users/jeromemassot/orgs",
"repos_url": "https://api.github.com/users/jeromemassot/repos",
"events_url": "https://api.github.com/users/jeromemassot/events{/privacy}",
"received_events_url": "https://api.github.com/users/jeromemassot/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2392046359,
"node_id": "MDU6TGFiZWwyMzkyMDQ2MzU5",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20Second%20Issue",
"name": "Good Second Issue",
"color": "dd935a",
"default": false,
"description": "Issues that are more difficult to do than \"Good First\" issues - give it a try if you want!"
},
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] |
closed
| false | null |
[] |
[
"The SpeechT5 tokenizer does not understand numerals. For this to work, the input text should be normalized to \"More than ten people ...\", with the number spelled out. My guess is that the agent doesn't do this.",
"Thanks for the reply. It should not be too difficult to ask the LLM to process the text in order to replace all numbers by their litteral equivalents. I will see the agent code to propose a fix.",
"Since the mapping from numbers -> words is deterministic, we could add this as a pre-processing step in the SpeechT5 tokenizer? E.g. the number \"150\" always gets mapped to \"one-hundred and fifty\". IMO this is a pretty easy way of guaranteeing that we have our text formatted as the model expects\r\n\r\nWe do the opposite in Whisper, where we normalise all spoken words to numbers (\"one-hundred and fifty\" -> \"150\"), see\r\nhttps://github.com/huggingface/transformers/blob/fe34486f129d47abc0dddb39a22b24cdbe52dec8/src/transformers/models/whisper/english_normalizer.py#L110",
"May be just asking in the prompt of the agent that a pre-processing of input text is needed and should be done by the LLM itself before to be sent to Speech T5. The OpenAI LLM used by the agent by default can do the job, pretty sure about it.",
"That would fix the issue of SpeechT5 in the context of using transformers agents, but not as a standalone model! E.g. if we wanted to use SpeechT5 according to the example docs or blog post: https://huggingface.co/blog/speecht5#text-to-speech, then passing a numeric value (e.g. \"150\") still wouldn't work\r\n\r\nMy thinking is that we can add a normalisation argument to the `processor`/`tokenizer` that handles this for us:\r\n```python\r\ninputs = processor(text=\"This is a number, 150\", return_tensors=\"pt\", normalize=True)\r\n```\r\n\r\nWhich under the hood converts the text from:\r\n```\r\nThis is a number, 150\r\n```\r\nto\r\n```\r\nThis is a number, one-hundred and fifty\r\n```\r\n\r\nOnce we have this, it's trivial to update the transformers agents pipeline to switch on normalisation by default",
"Yeap I agree that the agent-only solution is not the optimal one. The processor is certainly a better way to fix it.",
"Keeping this open since I think it would be a valuable addition (especially since SpeechT5 is being used in transformers agents) - would you like to have a go at adding such a pre-processing function @jeromemassot? Happy to help you with the integration!",
"@sanchit-gandhi you predicted well. I think that it is important to include this pre-processing to bypass the issue of numbers recognition. 🙏",
"Perfect - do you want to open a PR for this? We can reverse the logic that we use in the Whisper normaliser, where we go from written to numeric (see https://github.com/huggingface/transformers/issues/23480#issuecomment-1557526685)",
"I sadly won't have time to undertake this PR myself, but maintain that it's a worthwhile update to the library. If anyone in the community would like to pick-up this issue and submit a PR I'd be more than happy to guide you through the integration process and answer any questions. Much of what's needed to be done is outlined in this comment thread already!",
"Hi @sanchit-gandhi, if no one else has taken this up, I would love to fix this issue in a new PR!",
"Hey @heytanay - thanks for jumping on here, it's all yours! Feel free to open a PR and tag me - happy to assist with the integration! Think the details of how we can do this are more or less detailed in this thread, but let me know if you have any questions",
"Thanks, @sanchit-gandhi, I have started working on it. \r\n\r\nQuick question:\r\nDo we need to reverse the entire `EnglishNumberNormalizer` class from the [`models/whisper/english_normalizer.py`](https://github.com/huggingface/transformers/blob/fe34486f129d47abc0dddb39a22b24cdbe52dec8/src/transformers/models/whisper/english_normalizer.py#L94) that you referenced earlier, or can some cases be skipped?\r\n\r\n",
"Awesome - thanks for the update @heytanay! I don't think it's strictly necessary to include all the cases there - let's try and cover the main ones though so that the normaliser is fairly generalisable. In the PR description you can include a list of all the cases you kept, and the ones you discarded, and we can decide together whether we think it's exhaustive enough! Having had a quick scan, I think it's worth keeping at least all the number formatting from the normaliser: https://github.com/huggingface/transformers/blob/fe34486f129d47abc0dddb39a22b24cdbe52dec8/src/transformers/models/whisper/english_normalizer.py#L108-L167",
"I have created a draft PR @sanchit-gandhi: #25447"
] | 1,684 | 1,692 | 1,692 |
NONE
| null |
### System Info
transformers == 4.29.0
environment = Colab
Python == 3.10.11
tensorflow == 2.12.0
torch == 2.0.1+cu118
torchaudio == 2.0.2+cu118
### Who can help?
@sanchit-gandhi
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
1. Initialize a Transformers agent
2. Create a text that contains numbers, for example `text = "More than 10 people have been killed by Covid."`
3. Call the agent for text-to-speech (SpeechT5), for example `audio_translated = agent.run("Read out loud the text", text=text)`
4. Play the generated audio
The audio blanks all the numbers/digits.
I suspect SpeechT5 is behaving incorrectly, as the code generated by the agent seems to be correct.
Good luck :)
### Expected behavior
The audio file should contain the numbers/digits indicated in the text.
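A possible interim workaround is to spell numbers out before synthesis. This is a minimal sketch, assuming the third-party `num2words` package is available (it is not part of transformers):
```python
import re
from num2words import num2words

def spell_out_numbers(text: str) -> str:
    # Replace each run of digits with its spelled-out English form.
    return re.sub(r"\d+", lambda m: num2words(int(m.group())), text)

print(spell_out_numbers("More than 10 people have been killed by Covid."))
# -> "More than ten people have been killed by Covid."
```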
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23480/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23480/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23479
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23479/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23479/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23479/events
|
https://github.com/huggingface/transformers/pull/23479
| 1,717,654,647 |
PR_kwDOCUB6oc5Q6grG
| 23,479 |
4-bit QLoRA via bitsandbytes (4-bit base model + LoRA)
|
{
"login": "TimDettmers",
"id": 5260050,
"node_id": "MDQ6VXNlcjUyNjAwNTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/5260050?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TimDettmers",
"html_url": "https://github.com/TimDettmers",
"followers_url": "https://api.github.com/users/TimDettmers/followers",
"following_url": "https://api.github.com/users/TimDettmers/following{/other_user}",
"gists_url": "https://api.github.com/users/TimDettmers/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TimDettmers/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TimDettmers/subscriptions",
"organizations_url": "https://api.github.com/users/TimDettmers/orgs",
"repos_url": "https://api.github.com/users/TimDettmers/repos",
"events_url": "https://api.github.com/users/TimDettmers/events{/privacy}",
"received_events_url": "https://api.github.com/users/TimDettmers/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Amazing! \r\ncc @SunMarc for visibility as well! \r\nWill review asap 💪 ",
"_The documentation is not available anymore as the PR was closed or merged._",
"Thank you for the review! I created a separate PR for the LLaMA bug: https://github.com/huggingface/transformers/pull/23535\r\n\r\nThe optimizers are also needed for the 4-bit fine-tuning of LLaMA 30B/65B on a single 24/48 GB GPU: https://github.com/huggingface/transformers/pull/23217\r\n\r\nI reverted the names and removed code into other PRs. I added the missing test files. These, however are also renamed. Is the filename important for these?\r\n\r\nLet me know if you see any other issues. Thank you, Sylvain!",
"@TimDettmers Can the bitsandbytes team provide a corresponding bitsandbytes branch that corresponds to this PR? The \"closed_beta\" on bitsandbytes is the closest I found but it doesn't appear to be final/rc-quality and contains debug print logs such as \r\n\r\nhttps://github.com/TimDettmers/bitsandbytes/compare/main...closed_beta#diff-4d235c7e595546c6656c229dfa139298ce6602b356c2d0bafcb2352eb2cfae79R222\r\n\r\nWithout the proper branch/link to bitsandbytes changes, it is very hard to test this. Since this PR is public, the bnb branch should no longer be in closed beta. \r\n\r\nThank you.\r\n\r\n",
"I think the changes for QLoRA were recently merged in the main branch of bitsandbytes."
] | 1,684 | 1,684 | 1,684 |
CONTRIBUTOR
| null |
# What does this PR do?
This PR introduces 4-bit QLoRA to transformers. The main changes are for the bitsandbytes config. Additionally, we added one change to the LLaMA implementation to fix a bug where the data type changes if layer norms are in 32-bit and the rest is in bf16.
More information about QLoRA from our abstract:
>We develop QLoRA tuning, a method that finetunes by backpropagating gradients through a frozen 4-bit base model into low rank adapters (LoRA). With QLoRA tuning we can finetune 30B/65B parameter models on 24/48GB GPUs while preserving regular 16-bit full finetuning runtime and task performance. We achieve the memory efficiency and quantization precision through a combination of new methods: nested quantization to reduce the average memory footprint from 4.5 to 4.1 bits per parameter, paged optimizers to manage gradient checkpointing memory spikes, and a new data type, 4-bit NormalFloat (NF4), which is information theoretically and empirically optimal for normally distributed weights. To demonstrate the effectiveness and ease of use of QLoRA tuning we finetune more than 1,000 models to create a detailed dissection of instruction following performance across datasets (FLAN, Alpaca, Chip2, SuperNatural Instructions, Chip2, AnthropicHH), models types (LLaMA, T5), and model scales (125M to 65B). A discussion of the results is forthcoming in our paper.
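A short usage sketch of the 4-bit loading path (argument names follow the released bitsandbytes integration; the checkpoint is only an example):
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",             # 4-bit NormalFloat
    bnb_4bit_use_double_quant=True,        # nested quantization
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-350m",
    quantization_config=bnb_config,
    device_map="auto",
)
```
The frozen 4-bit base model can then be combined with LoRA adapters (e.g. via PEFT) for fine-tuning.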
## Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@sgugger @younesbelkada @sourabh112
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23479/reactions",
"total_count": 33,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 22,
"rocket": 7,
"eyes": 4
}
|
https://api.github.com/repos/huggingface/transformers/issues/23479/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23479",
"html_url": "https://github.com/huggingface/transformers/pull/23479",
"diff_url": "https://github.com/huggingface/transformers/pull/23479.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23479.patch",
"merged_at": 1684925565000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23478
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23478/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23478/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23478/events
|
https://github.com/huggingface/transformers/pull/23478
| 1,717,545,896 |
PR_kwDOCUB6oc5Q6I1l
| 23,478 |
Fix DeepSpeed stuff in the nightly CI
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,684 | 1,684 | 1,684 |
COLLABORATOR
| null |
# What does this PR do?
Similar to #23463, but for the nightly CI (i.e. the nightly version of torch + deepspeed instead of the stable release).
This should be the last piece needed to make all the CI workflows run (🤞)
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23478/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23478/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23478",
"html_url": "https://github.com/huggingface/transformers/pull/23478",
"diff_url": "https://github.com/huggingface/transformers/pull/23478.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23478.patch",
"merged_at": 1684521116000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23477
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23477/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23477/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23477/events
|
https://github.com/huggingface/transformers/pull/23477
| 1,717,516,182 |
PR_kwDOCUB6oc5Q6CcA
| 23,477 |
Better TF docstring types
|
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"The tests seem to indicate we use those type annotations for something (auto-detecting the labels would be my guess).",
"We use them in one of the TF tests, in a way that causes issues with the `from __future__` import. I think there's an easy workaround for the affected test, though - working on it!",
"@sgugger issues have been resolved. What happened was two of the tests were trying to figure out which models were trainable using the type annotations if they were there, which was kind of flaky anyway and broke when we did this. I refactored the relevant code, which enabled some tests that were previously being skipped, and that surfaced a couple of small issues in the tests. The models are all fine, though, and everything passes now!"
] | 1,684 | 1,684 | 1,684 |
MEMBER
| null |
cc @sgugger - this is the PR based on the conversation we had on Slack! I just scanned our TF files and replaced some `Optional[]` and `Union[]` patterns with `|` instead. The doc builder now writes much cleaner docstrings, instead of the `tensorflow.python.framework.ops.Tensor` stuff it was writing before.
Some tests are failing because the test files also need `from __future__ import annotations` I think - will make sure everything passes before merging this.
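For illustration, the kind of change this amounts to (a toy layer, not a file from the PR):
```python
from __future__ import annotations  # annotations become lazy strings, so `X | None` parses on older Pythons

import tensorflow as tf

class DummyLayer(tf.keras.layers.Layer):
    # before: def call(self, input_ids: Optional[tf.Tensor] = None) -> Optional[tf.Tensor]:
    def call(self, input_ids: tf.Tensor | None = None) -> tf.Tensor | None:
        return input_ids
```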
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23477/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23477/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23477",
"html_url": "https://github.com/huggingface/transformers/pull/23477",
"diff_url": "https://github.com/huggingface/transformers/pull/23477.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23477.patch",
"merged_at": 1684932773000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23476
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23476/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23476/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23476/events
|
https://github.com/huggingface/transformers/issues/23476
| 1,717,492,999 |
I_kwDOCUB6oc5mXt0H
| 23,476 |
Flyte Callback
|
{
"login": "peridotml",
"id": 106936600,
"node_id": "U_kgDOBl-5GA",
"avatar_url": "https://avatars.githubusercontent.com/u/106936600?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/peridotml",
"html_url": "https://github.com/peridotml",
"followers_url": "https://api.github.com/users/peridotml/followers",
"following_url": "https://api.github.com/users/peridotml/following{/other_user}",
"gists_url": "https://api.github.com/users/peridotml/gists{/gist_id}",
"starred_url": "https://api.github.com/users/peridotml/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/peridotml/subscriptions",
"organizations_url": "https://api.github.com/users/peridotml/orgs",
"repos_url": "https://api.github.com/users/peridotml/repos",
"events_url": "https://api.github.com/users/peridotml/events{/privacy}",
"received_events_url": "https://api.github.com/users/peridotml/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi there! You can definitely open a PR. `integrations.py` is open to the community and callbacks defined there are maintained by their authors and not by us. So as long as you're fine getting pinged by users who get a problem with this callback in the future, please share your new callback :-) "
] | 1,684 | 1,686 | 1,686 |
CONTRIBUTOR
| null |
### Feature request
Hi! I am an OSS contributor to [Flyte](https://flyte.org/), a workflow orchestration tool. I am working on a Hugging Face plugin for Flyte to improve the integration between the two.
I am working on a `FlyteCallback` that will automatically integrate with Flyte's checkpointing, visualizations, and eventually logging!
I was hoping we could add it to `integrations.py`, similar to other ML tools, but I wanted to check with you all before making a PR.
### Motivation
This would help Flyte users working with Hugging Face, as well as Hugging Face users who end up using Flyte for orchestration.
### Your contribution
I would clean up and extend the callback in this [gist](https://gist.github.com/peridotml/68f376f0f4fd1926fb0746daaeea09f8) and create a PR.
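For reference, the rough shape of the callback (a skeleton only; the actual logic lives in the gist above):
```python
from transformers import TrainerCallback

class FlyteCallback(TrainerCallback):
    def on_save(self, args, state, control, **kwargs):
        # hook Flyte checkpointing into Trainer checkpoint saves here
        pass

    def on_log(self, args, state, control, logs=None, **kwargs):
        # forward training metrics/visualizations to Flyte here
        pass
```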
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23476/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23476/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23475
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23475/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23475/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23475/events
|
https://github.com/huggingface/transformers/pull/23475
| 1,717,460,734 |
PR_kwDOCUB6oc5Q52K9
| 23,475 |
Fix: Change tensors to integers for torch.dynamo and torch.compile compatibility
|
{
"login": "loevlie",
"id": 59749099,
"node_id": "MDQ6VXNlcjU5NzQ5MDk5",
"avatar_url": "https://avatars.githubusercontent.com/u/59749099?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/loevlie",
"html_url": "https://github.com/loevlie",
"followers_url": "https://api.github.com/users/loevlie/followers",
"following_url": "https://api.github.com/users/loevlie/following{/other_user}",
"gists_url": "https://api.github.com/users/loevlie/gists{/gist_id}",
"starred_url": "https://api.github.com/users/loevlie/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/loevlie/subscriptions",
"organizations_url": "https://api.github.com/users/loevlie/orgs",
"repos_url": "https://api.github.com/users/loevlie/repos",
"events_url": "https://api.github.com/users/loevlie/events{/privacy}",
"received_events_url": "https://api.github.com/users/loevlie/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"You undid your changes, you will need to make them in the file that mas2former copies from 😅 ",
"Oh bummer! Thank you for your patience. I will try to fix it. "
] | 1,684 | 1,684 | 1,684 |
CONTRIBUTOR
| null |
# What does this PR do?
Fixes errors when trying to use PyTorch 2.0 torch.compile().
The two fixes address the following issue:
`torch.split()` expects a list of integers specifying how to split the tensor along a given dimension, but it received a list of scalar tensors instead. This mismatch was causing the TorchRuntimeError.
In the first instance, the tensor `value_spatial_shapes` was passed to the function as a list of scalar tensors; converting its elements to integers with the `.item()` method resolved the error.
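A toy illustration of the pattern (shapes invented for the example):
```python
import torch

values = torch.randn(1, 20, 8)
value_spatial_shapes = [torch.tensor(12), torch.tensor(8)]  # scalar tensors, as in the failing case

# torch.split(values, value_spatial_shapes, dim=1) is what broke under torch.compile;
# converting to plain Python ints keeps the split sizes traceable.
split_sizes = [int(s.item()) for s in value_spatial_shapes]
chunks = torch.split(values, split_sizes, dim=1)
print([tuple(c.shape) for c in chunks])  # [(1, 12, 8), (1, 8, 8)]
```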
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@amyeroberts @sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23475/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23475/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23475",
"html_url": "https://github.com/huggingface/transformers/pull/23475",
"diff_url": "https://github.com/huggingface/transformers/pull/23475.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23475.patch",
"merged_at": 1684515011000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23474
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23474/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23474/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23474/events
|
https://github.com/huggingface/transformers/issues/23474
| 1,717,358,437 |
I_kwDOCUB6oc5mXM9l
| 23,474 |
graphormer.collating_graphormer.preprocess_item TypeError: Cannot cast array data from dtype('int64') to dtype('int32') according to the rule 'safe'
|
{
"login": "YueZhengMeng",
"id": 57550650,
"node_id": "MDQ6VXNlcjU3NTUwNjUw",
"avatar_url": "https://avatars.githubusercontent.com/u/57550650?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/YueZhengMeng",
"html_url": "https://github.com/YueZhengMeng",
"followers_url": "https://api.github.com/users/YueZhengMeng/followers",
"following_url": "https://api.github.com/users/YueZhengMeng/following{/other_user}",
"gists_url": "https://api.github.com/users/YueZhengMeng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/YueZhengMeng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/YueZhengMeng/subscriptions",
"organizations_url": "https://api.github.com/users/YueZhengMeng/orgs",
"repos_url": "https://api.github.com/users/YueZhengMeng/repos",
"events_url": "https://api.github.com/users/YueZhengMeng/events{/privacy}",
"received_events_url": "https://api.github.com/users/YueZhengMeng/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @clefourrier ",
"Hi @YueZhengMeng, thank you for reporting this!\r\nI'll check it this week. Btw, using int64s everywhere will make the memory explode considerably faster for large graphs, if you need to change it manually in the mean time, it's likely you should use int32s instead.",
"I have not been able to reproduce your bug, but I'm not on a Windows machine.\r\nCould you provide me with a snippet of code and the complete trace of your error?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,684 | 1,688 | 1,688 |
NONE
| null |
### System Info
- `transformers` version: 4.28.1
- Platform: Windows-10-10.0.19045-SP0
- Python version: 3.9.15
- Huggingface_hub version: 0.10.1
- Safetensors version: not installed
- PyTorch version (GPU?): 1.13.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
`transformers.models.graphormer.collating_graphormer.preprocess_item` causes `TypeError: Cannot cast array data from dtype('int64') to dtype('int32') according to the rule 'safe'`
when running `data_processed = dataset.map(preprocess_item, batched=False)`; the dataset is ogb-molhiv.
This happens because the data type in this function is np.int64, but in transformers/models/graphormer/algos_graphormer.pyx it is np.int32 or np.long.
I fixed it by replacing "long" on lines 88 and 89 of algos_graphormer.pyx with np.int64, and replacing all 32s in algos_graphormer.pyx with 64.
But I don't think this is an appropriate approach.
### Expected behavior
A way to fix this issue without modifying the library code.
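One untested idea for a workaround that avoids touching the library (the column names are assumed from ogb-molhiv and may need adjusting):
```python
import numpy as np
from transformers.models.graphormer.collating_graphormer import preprocess_item

def preprocess_item_int32(item):
    # Cast the integer graph arrays to int32 before the Cython code sees them.
    for key in ("edge_index", "edge_attr", "node_feat"):
        if key in item:
            item[key] = np.asarray(item[key], dtype=np.int32)
    return preprocess_item(item)

# data_processed = dataset.map(preprocess_item_int32, batched=False)
```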
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23474/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23474/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23473
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23473/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23473/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23473/events
|
https://github.com/huggingface/transformers/pull/23473
| 1,717,328,728 |
PR_kwDOCUB6oc5Q5ZQJ
| 23,473 |
Use config to set name and description if not present
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,684 | 1,684 | 1,684 |
COLLABORATOR
| null |
# What does this PR do?
This PR makes sure we read the name and description of a downloaded tool from its config and set them on the tool class if they were not set by the user. We also perform a consistency check and override the values with those from the tool config when they differ, as the tool config should be the source of truth.
Fixes #23469
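A usage sketch of the behaviour after this change (the repo id is chosen only for illustration):
```python
from transformers import load_tool

tool = load_tool("huggingface-tools/text-to-image")
# name and description now come from the downloaded tool's config when the
# Tool subclass does not set them itself.
print(tool.name, tool.description)
```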
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23473/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23473/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23473",
"html_url": "https://github.com/huggingface/transformers/pull/23473",
"diff_url": "https://github.com/huggingface/transformers/pull/23473.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23473.patch",
"merged_at": 1684506975000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23472
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23472/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23472/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23472/events
|
https://github.com/huggingface/transformers/issues/23472
| 1,717,323,034 |
I_kwDOCUB6oc5mXEUa
| 23,472 |
TypeError: 'type' object is not subscriptable
|
{
"login": "flckv",
"id": 103381497,
"node_id": "U_kgDOBil5-Q",
"avatar_url": "https://avatars.githubusercontent.com/u/103381497?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/flckv",
"html_url": "https://github.com/flckv",
"followers_url": "https://api.github.com/users/flckv/followers",
"following_url": "https://api.github.com/users/flckv/following{/other_user}",
"gists_url": "https://api.github.com/users/flckv/gists{/gist_id}",
"starred_url": "https://api.github.com/users/flckv/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/flckv/subscriptions",
"organizations_url": "https://api.github.com/users/flckv/orgs",
"repos_url": "https://api.github.com/users/flckv/repos",
"events_url": "https://api.github.com/users/flckv/events{/privacy}",
"received_events_url": "https://api.github.com/users/flckv/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @flckv, thanks for raising this error.\r\n\r\nI'm unable to reproduce this error when I run locally on the main branch. Could you share the running environment being used: run `transformers-cli env` in the terminal and copy-paste the output?",
"Hi @amyeroberts, thanks for the quick reply.\r\n\r\n\r\n### The output of transformers-cli env: \r\n\r\n```\r\n- `transformers` version: 4.26.1\r\n- Platform: Linux-5.4.204-ql-generic-12.0-19-x86_64-with-glibc2.31\r\n- Python version: 3.9.7\r\n- Huggingface_hub version: 0.10.1\r\n- PyTorch version (GPU?): 1.11.0 (True)\r\n- Tensorflow version (GPU?): not installed (NA)\r\n- Flax version (CPU?/GPU?/TPU?): not installed (NA)\r\n- Jax version: not installed\r\n- JaxLib version: not installed\r\n- Using GPU in script?: <fill in>\r\n- Using distributed or parallel set-up in script?: <fill in>\r\n\r\n```\r\n\r\n\r\nI am running on a cluster with resources: \r\n```\r\n#SBATCH --job-name=ol # Job name\r\n#SBATCH --output=/home/flck/output_.%A.txt # Standard output and error log\r\n#SBATCH --nodes=1 # Run all processes on a single node \r\n#SBATCH --ntasks=1 # Run on a single CPU\r\n#SBATCH --mem=64G # Total RAM to be used\r\n#SBATCH --cpus-per-task=8 # Number of CPU cores\r\n#SBATCH --gres=gpu:3 # Number of GPUs (per node)\r\n#SBATCH -p gpu # Use the gpu partition\r\n#SBATCH --time=12:00:00 # Specify the time needed for your experiment\r\n#SBATCH --qos=gpu-8 # To enable the use of up to 8 GPUs\r\n```\r\n\r\nin my .sh file that I run on this cluster has these commands to reproduce the demo : \r\n\r\ntransformers-cli env\r\n`accelerate launch wav2vec/run_wav2vec2_pretraining_no_trainer.py --cache_dir=\"/dev/shm/\" --dataset_name=\"librispeech_asr\" --dataset_config_names clean clean --dataset_split_names validation test --model_name_or_path=\"patrickvonplaten/wav2vec2-base-v2\" --output_dir=\"./wav2vec2-pretrained-demo\" --max_train_steps=\"20000\" --num_warmup_steps=\"32000\" --gradient_accumulation_steps=\"8\" --learning_rate=\"0.005\" --weight_decay=\"0.01\" --max_duration_in_seconds=\"20.0\" --min_duration_in_seconds=\"2.0\" --logging_steps=\"1\" --saving_steps=\"10000\" --per_device_train_batch_size=\"8\" --per_device_eval_batch_size=\"8\" --adam_beta1=\"0.9\" --adam_beta2=\"0.98\" --adam_epsilon=\"1e-06\" --gradient_checkpointing --mask_time_prob=\"0.65\" --mask_time_length=\"10\"`\r\ntransformers-cli env\r\n\r\n\r\n\r\nIs this what you are asking for? \r\n\r\n\r\n-------------------------------------------------------------------------------------------------\r\n-------------------------------------------------------------------------------------------------\r\n-------------------------------------------------------------------------------------------------\r\n\r\n\r\n\r\n\r\n\r\n\r\n### My guess\r\nI think the error is coming from the fact that the dataset preprocessing [(line 473)](https://github.com/huggingface/transformers/blob/3658488ff77ff8d45101293e749263acf437f4d5/examples/pytorch/speech-pretraining/run_wav2vec2_pretraining_no_trainer.py#L473) requires argument `\"args.audio_column_name\" `that is not specified in the demo command https://github.com/huggingface/transformers/blob/main/examples/pytorch/speech-pretraining/README.md#demo. \r\n\r\n### 1. I tried specifying --audio_column_name= []\r\n```\r\nI got this error:\r\n\r\n _-- schema metadata --\r\nhuggingface: '{\"info\": {\"features\": {\"id\": {\"dtype\": \"string\", \"_type\": \"' + 163\r\nto\r\n{'id': Value(dtype='string', id=None), 'audio': Audio(sampling_rate=16000, mono=True, decode=True, id=None), 'duration_ms': Value(dtype='int32', id=None), 'text': Value(dtype='string', id=None), '[]': Audio(sampling_rate=16000, mono=True, decode=True, id=None)}\r\nbecause column names don't match_\r\n```\r\n\r\n### 2. 
I tried specifying --audio_column_name=[\"audio\", \"duration_ms\", \"text\"]\r\n`error: \"run_wav2vec2_pretraining_no_trainer.py: error: unrecognized arguments: duration_ms, text]\"\r\n`\r\n\r\n### 3. I tried specifying --audio_column_name=[\"audio\"], which is the default setting\r\nsame issue as in 1. \r\n\r\n```\r\nline 478, in main\r\n raw_datasets = raw_datasets.cast_column(\r\nraise ValueError(f\"Couldn't cast\\n{table.schema}\\nto\\n{features}\\nbecause column names don't match\")\r\nValueError: Couldn't cast\r\nid: string\r\naudio: struct<bytes: binary, path: string>\r\n child 0, bytes: binary\r\n child 1, path: string\r\nduration_ms: int32\r\ntext: string\r\n-- schema metadata --\r\nhuggingface: '{\"info\": {\"features\": {\"id\": {\"dtype\": \"string\", \"_type\": \"' + 163\r\nto\r\n{'id': Value(dtype='string', id=None), 'audio': Audio(sampling_rate=16000, mono=True, decode=True, id=None), 'duration_ms': Value(dtype='int32', id=None), 'text': Value(dtype='string', id=None), '[audio]': Audio(sampling_rate=16000, mono=True, decode=True, id=None)}\r\nbecause column names don't match\r\n```\r\n\r\n\r\n\r\n\r\n\r\nAny ideas? @amyeroberts @sanchit-gandhi @pacman100 @sgugger\r\n\r\n\r\n\r\n",
"here is a more detailed output log content:\r\n\r\n```\r\nThe following values were not passed to `accelerate launch` and had defaults used instead:\r\n\t`--num_processes` was set to a value of `3`\r\n\t\tMore than one GPU was found, enabling multi-GPU training.\r\n\t\tIf this was unintended please pass in `--num_processes=1`.\r\n\t`--num_machines` was set to a value of `1`\r\n\t`--mixed_precision` was set to a value of `'no'`\r\n\t`--dynamo_backend` was set to a value of `'no'`\r\nTo avoid this warning pass in values for each of the problematic parameters or run `accelerate config`.\r\nwandb: Currently logged in as: flckv. Use `wandb login --relogin` to force relogin\r\nwandb: wandb version 0.15.3 is available! To upgrade, please run:\r\nwandb: $ pip install wandb --upgrade\r\nwandb: Tracking run with wandb version 0.15.2\r\nwandb: Run data is saved locally in /home/flckv/wandb/run-20230519_175326-yuyk0qvn\r\nwandb: Run `wandb offline` to turn off syncing.\r\nwandb: Syncing run rural-morning-6\r\nwandb: ⭐️ View project at https://wandb.ai/flckv/wav2vec2-pretrained-demo\r\nwandb: 🚀 View run at https://wandb.ai/flckv/wav2vec2-pretrained-demo/runs/yuyk0qvn\r\nDownloading and preparing dataset librispeech_asr/clean to /dev/shm/librispeech_asr/clean/2.1.0/cff5df6e7955c80a67f80e27e7e655de71c689e2d2364bece785b972acb37fe7...\r\n\r\nDownloading data files: 0%| | 0/4 [00:00<?, ?it/s]\r\nDownloading data files: 100%|██████████| 4/4 [00:00<00:00, 9828.48it/s]\r\n\r\nExtracting data files: 0%| | 0/4 [00:00<?, ?it/s]\r\nExtracting data files: 100%|██████████| 4/4 [00:00<00:00, 2225.39it/s]\r\n\r\n\r\nGenerating train.100 split: 100%|██████████| 28539/28539 [00:17<00:00, 1834.73 examples/s]\r\n \r\n\r\nGenerating train.360 split: 100%|██████████| 104014/104014 [01:00<00:00, 1634.61 examples/s]\r\n \r\n\r\nGenerating validation split: 100%|██████████| 2703/2703 [00:01<00:00, 2341.79 examples/s]\r\n \r\n\r\nGenerating test split: 93%|█████████▎| 2434/2620 [00:01<00:00, 2338.24 examples/s]\r\n \r\nDataset librispeech_asr downloaded and prepared to /dev/shm/librispeech_asr/clean/2.1.0/cff5df6e7955c80a67f80e27e7e655de71c689e2d2364bece785b972acb37fe7. Subsequent calls will reuse this data.\r\nFound cached dataset librispeech_asr (/dev/shm/librispeech_asr/clean/2.1.0/cff5df6e7955c80a67f80e27e7e655de71c689e2d2364bece785b972acb37fe7)\r\nFound cached dataset librispeech_asr (/dev/shm/librispeech_asr/clean/2.1.0/cff5df6e7955c80a67f80e27e7e655de71c689e2d2364bece785b972acb37fe7)\r\n\r\nDownloading: 0%| | 0.00/214 [00:00<?, ?B/s]\r\nDownloading: 100%|██████████| 214/214 [00:00<00:00, 171kB/s]\r\nloading configuration file preprocessor_config.json from cache at /home/flckv/.cache/huggingface/hub/models--patrickvonplaten--wav2vec2-base-v2/snapshots/9371f1849947b4613f451680a8e96d907617ce86/preprocessor_config.json\r\nFeature extractor Wav2Vec2FeatureExtractor {\r\n \"do_normalize\": true,\r\n \"feature_extractor_type\": \"Wav2Vec2FeatureExtractor\",\r\n \"feature_size\": 1,\r\n \"padding_side\": \"right\",\r\n \"padding_value\": 0.0,\r\n \"return_attention_mask\": true,\r\n \"sampling_rate\": 16000\r\n}\r\n\r\n\r\nMap: 0%| | 0/5270 [00:00<?, ? examples/s]\r\nMap: 0%| | 0/5270 [00:01<?, ? 
examples/s]\r\n \r\n\r\n> \r\n> Traceback (most recent call last):\r\n> File \"/home/flckv/wav2vec/run_wav2vec2_pretraining_no_trainer.py\", line 783, in <module>\r\n> main()\r\n> File \"/home/flckv/wav2vec/run_wav2vec2_pretraining_no_trainer.py\", line 510, in main\r\n> vectorized_datasets = raw_datasets.map(\r\n> File \"/home/flckv/.conda/envs/vcheckworthy/lib/python3.9/site-packages/datasets/dataset_dict.py\", line 852, in map\r\n> {\r\n> File \"/home/flckv/.conda/envs/vcheckworthy/lib/python3.9/site-packages/datasets/dataset_dict.py\", line 853, in <dictcomp>\r\n> k: dataset.map(\r\n> File \"/home/flckv/.conda/envs/vcheckworthy/lib/python3.9/site-packages/datasets/arrow_dataset.py\", line 563, in wrapper\r\n> out: Union[\"Dataset\", \"DatasetDict\"] = func(self, *args, **kwargs)\r\n> File \"/home/flckv/.conda/envs/vcheckworthy/lib/python3.9/site-packages/datasets/arrow_dataset.py\", line 528, in wrapper\r\n> out: Union[\"Dataset\", \"DatasetDict\"] = func(self, *args, **kwargs)\r\n> File \"/home/flckv/.conda/envs/vcheckworthy/lib/python3.9/site-packages/datasets/arrow_dataset.py\", line 2953, in map\r\n> for rank, done, content in Dataset._map_single(**dataset_kwargs):\r\n> File \"/home/flckv/.conda/envs/vcheckworthy/lib/python3.9/site-packages/datasets/arrow_dataset.py\", line 3307, in _map_single\r\n> example = apply_function_on_filtered_inputs(example, i, offset=offset)\r\n> File \"/home/flckv/.conda/envs/vcheckworthy/lib/python3.9/site-packages/datasets/arrow_dataset.py\", line 3210, in apply_function_on_filtered_inputs\r\n> processed_inputs = function(*fn_args, *additional_args, **fn_kwargs)\r\n> File \"/home/flckv/wav2vec/run_wav2vec2_pretraining_no_trainer.py\", line 493, in prepare_dataset\r\n> sample = batch[args.audio_column_name]\r\n> File \"/home/flckv/.conda/envs/vcheckworthy/lib/python3.9/site-packages/datasets/formatting/formatting.py\", line 282, in __getitem__\r\n> value = self.format(key)\r\n> File \"/home/flckv/.conda/envs/vcheckworthy/lib/python3.9/site-packages/datasets/formatting/formatting.py\", line 380, in format\r\n> return self.formatter.format_column(self.pa_table.select([key]))[0]\r\n> File \"/home/flckv/.conda/envs/vcheckworthy/lib/python3.9/site-packages/datasets/formatting/formatting.py\", line 447, in format_column\r\n> column = self.python_features_decoder.decode_column(column, pa_table.column_names[0])\r\n> File \"/home/flckv/.conda/envs/vcheckworthy/lib/python3.9/site-packages/datasets/formatting/formatting.py\", line 228, in decode_column\r\n> return self.features.decode_column(column, column_name) if self.features else column\r\n> File \"/home/flckv/.conda/envs/vcheckworthy/lib/python3.9/site-packages/datasets/features/features.py\", line 1866, in decode_column\r\n> [decode_nested_example(self[column_name], value) if value is not None else None for value in column]\r\n> File \"/home/flckv/.conda/envs/vcheckworthy/lib/python3.9/site-packages/datasets/features/features.py\", line 1866, in <listcomp>\r\n> [decode_nested_example(self[column_name], value) if value is not None else None for value in column]\r\n> File \"/home/flckv/.conda/envs/vcheckworthy/lib/python3.9/site-packages/datasets/features/features.py\", line 1308, in decode_nested_example\r\n> return schema.decode_example(obj, token_per_repo_id=token_per_repo_id)\r\n> File \"/home/flckv/.conda/envs/vcheckworthy/lib/python3.9/site-packages/datasets/features/audio.py\", line 164, in decode_example\r\n> array, sampling_rate = self._decode_non_mp3_file_like(file)\r\n> File 
\"/home/flckv/.conda/envs/vcheckworthy/lib/python3.9/site-packages/datasets/features/audio.py\", line 290, in _decode_non_mp3_file_like\r\n> array = librosa.to_mono(array)\r\n> File \"/home/flckv/.conda/envs/vcheckworthy/lib/python3.9/site-packages/lazy_loader/__init__.py\", line 77, in __getattr__\r\n> attr = getattr(submod, name)\r\n> File \"/home/flckv/.conda/envs/vcheckworthy/lib/python3.9/site-packages/lazy_loader/__init__.py\", line 76, in __getattr__\r\n> submod = importlib.import_module(submod_path)\r\n> File \"/home/flckv/.conda/envs/vcheckworthy/lib/python3.9/importlib/__init__.py\", line 127, in import_module\r\n> return _bootstrap._gcd_import(name[level:], package, level)\r\n> File \"<frozen importlib._bootstrap>\", line 1030, in _gcd_import\r\n> File \"<frozen importlib._bootstrap>\", line 1007, in _find_and_load\r\n> File \"<frozen importlib._bootstrap>\", line 986, in _find_and_load_unlocked\r\n> File \"<frozen importlib._bootstrap>\", line 680, in _load_unlocked\r\n> File \"<frozen importlib._bootstrap_external>\", line 850, in exec_module\r\n> File \"<frozen importlib._bootstrap>\", line 228, in _call_with_frames_removed\r\n> File \"/home/flckv/.conda/envs/vcheckworthy/lib/python3.9/site-packages/librosa/core/audio.py\", line 19, in <module>\r\n> from .convert import frames_to_samples, time_to_samples\r\n> File \"/home/flckv/.conda/envs/vcheckworthy/lib/python3.9/site-packages/librosa/core/convert.py\", line 7, in <module>\r\n> from . import notation\r\n> File \"/home/flckv/.conda/envs/vcheckworthy/lib/python3.9/site-packages/librosa/core/notation.py\", line 8, in <module>\r\n> from .intervals import INTERVALS\r\n> File \"/home/flckv/.conda/envs/vcheckworthy/lib/python3.9/site-packages/librosa/core/intervals.py\", line 10, in <module>\r\n> from numpy.typing import ArrayLike\r\n> File \"/home/flckv/.conda/envs/vcheckworthy/lib/python3.9/site-packages/numpy/typing/__init__.py\", line 158, in <module>\r\n> from numpy._typing import (\r\n> File \"/home/flckv/.conda/envs/vcheckworthy/lib/python3.9/site-packages/numpy/_typing/__init__.py\", line 164, in <module>\r\n> from ._dtype_like import (\r\n> File \"/home/flckv/.conda/envs/vcheckworthy/lib/python3.9/site-packages/numpy/_typing/_dtype_like.py\", line 17, in <module>\r\n> from ._generic_alias import _DType as DType\r\n> File \"/home/flckv/.conda/envs/vcheckworthy/lib/python3.9/site-packages/numpy/_typing/_generic_alias.py\", line 241, in <module>\r\n> _DType = np.dtype[ScalarType]\r\n> TypeError: 'type' object is not subscriptable\r\n> wandb: Waiting for W&B process to finish... (failed 1). 
Press Control-C to abort syncing.\r\n> wandb: 🚀 View run rural-morning-6 at: https://wandb.ai/flckv/wav2vec2-pretrained-demo/runs/yuyk0qvn\r\n> wandb: Synced 5 W&B file(s), 0 media file(s), 0 artifact file(s) and 0 other file(s)\r\n> wandb: Find logs at: ./wandb/run-20230519_175326-yuyk0qvn/logs\r\n> WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 3914267 closing signal SIGTERM\r\n> WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 3914268 closing signal SIGTERM\r\n> ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 3914266) of binary: /home/flckv/.conda/envs/vcheckworthy/bin/python\r\n> Traceback (most recent call last):\r\n> File \"/home/flckv/.conda/envs/vcheckworthy/bin/accelerate\", line 8, in <module>\r\n> sys.exit(main())\r\n> File \"/home/flckv/.conda/envs/vcheckworthy/lib/python3.9/site-packages/accelerate/commands/accelerate_cli.py\", line 45, in main\r\n> args.func(args)\r\n> File \"/home/flckv/.conda/envs/vcheckworthy/lib/python3.9/site-packages/accelerate/commands/launch.py\", line 909, in launch_command\r\n> multi_gpu_launcher(args)\r\n> File \"/home/flckv/.conda/envs/vcheckworthy/lib/python3.9/site-packages/accelerate/commands/launch.py\", line 604, in multi_gpu_launcher\r\n> distrib_run.run(args)\r\n> File \"/home/flckv/.conda/envs/vcheckworthy/lib/python3.9/site-packages/torch/distributed/run.py\", line 715, in run\r\n> elastic_launch(\r\n> File \"/home/flckv/.conda/envs/vcheckworthy/lib/python3.9/site-packages/torch/distributed/launcher/api.py\", line 131, in __call__\r\n> return launch_agent(self._config, self._entrypoint, list(args))\r\n> File \"/home/flckv/.conda/envs/vcheckworthy/lib/python3.9/site-packages/torch/distributed/launcher/api.py\", line 245, in launch_agent\r\n> raise ChildFailedError(\r\n> torch.distributed.elastic.multiprocessing.errors.ChildFailedError: \r\n> \r\n> wav2vec/run_wav2vec2_pretraining_no_trainer.py FAILED\r\n> ------------------------------------------------------------\r\n> Failures:\r\n> <NO_OTHER_FAILURES>\r\n> ------------------------------------------------------------\r\n> Root Cause (first observed failure):\r\n> [0]:\r\n> time : 2023-05-19_17:55:10\r\n> host : gpu-08\r\n> rank : 0 (local_rank: 0)\r\n> exitcode : 1 (pid: 3914266)\r\n> error_file: <N/A>\r\n> traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html\r\n> \r\n> /var/lib/slurm-llnl/slurmd/job151161/slurm_script: line 42: EOL: command not found\r\n> \r\n```",
"Hey @flckv! Could you try first updating all your packages to the latest versions?\r\n\r\n```\r\npip install --upgrade pip\r\npip install --upgrade soundfile librosa datasets accelerate numpy transformers\r\n```\r\n\r\nThe error looks like it's happening when we decode the soundfile (i.e. as we read the soundfile with librosa) - there was recently a big change to how we load audio samples with datasets that might fix this for you https://github.com/huggingface/datasets/pull/5573",
"@sanchit-gandhi Thanks, but now the command is not working:\r\n\r\n`accelerate launch wav2vec/run_wav2vec2_pretraining_no_trainer.py --cache_dir=\"/dev/shm/\" --dataset_name=\"librispeech_asr\" --dataset_config_names test --dataset_split_names test --model_name_or_path=\"patrickvonplaten/wav2vec2-base-v2\" --output_dir=\"./wav2vec2-pretrained-demo\" --max_train_steps=\"20000\" --num_warmup_steps=\"32000\" --gradient_accumulation_steps=\"8\" --learning_rate=\"0.005\" --weight_decay=\"0.01\" --max_duration_in_seconds=\"20.0\" --min_duration_in_seconds=\"2.0\" --logging_steps=\"1\" --saving_steps=\"10000\" --per_device_train_batch_size=\"8\" --per_device_eval_batch_size=\"8\" --adam_beta1=\"0.9\" --adam_beta2=\"0.98\" --adam_epsilon=\"1e-06\" --gradient_checkpointing --mask_time_prob=\"0.65\" --mask_time_length=\"10\"\r\n`\r\n\r\n\r\nERROR:\r\n\r\n\r\n\r\nTraceback (most recent call last):\r\n File \"/home/flck/wav2vec/run_wav2vec2_pretraining_no_trainer.py\", line 785, in <module>\r\n main()\r\n File \"/home/flck/wav2vec/run_wav2vec2_pretraining_no_trainer.py\", line 513, in main\r\n prepare_dataset(raw_datasets[\"train\"]), # loading the audio `raise KeyError(f\"Column {key} not in the dataset. Current columns in the dataset: {columns}\") KeyError: \"Column args.audio_column_name not in the dataset. Current columns in the dataset: ['id', 'audio', 'duration_ms', 'text']\"`\r\n File \"/home/flck/wav2vec/run_wav2vec2_pretraining_no_trainer.py\", line 493, in prepare_dataset\r\n`sample = batch['args.audio_column_name']`\r\n File \"/home/flck/.conda/envs/vcheckworthy/lib/python3.9/site-packages/datasets/arrow_dataset.py\", line 2778, in __getitem__\r\n return self._getitem(key)\r\n File \"/home/flck/.conda/envs/vcheckworthy/lib/python3.9/site-packages/datasets/arrow_dataset.py\", line 2762, in _getitem\r\n pa_subtable = query_table(self._data, key, indices=self._indices if self._indices is not None else None)\r\n File \"/home/flck/.conda/envs/vcheckworthy/lib/python3.9/site-packages/datasets/formatting/formatting.py\", line 575, in query_table\r\n _check_valid_column_key(key, table.column_names)\r\n File \"/home/flck/.conda/envs/vcheckworthy/lib/python3.9/site-packages/datasets/formatting/formatting.py\", line 515, in _check_valid_column_key\r\n\r\n```\r\nraise KeyError(f\"Column {key} not in the dataset. Current columns in the dataset: {columns}\")\r\nKeyError: \"Column args.audio_column_name not in the dataset. 
Current columns in the dataset: ['id', 'audio', 'duration_ms', 'text']\"\r\n```\r\n\r\n\r\nTraceback (most recent call last):\r\n File \"/home/flck/.conda/envs/vcheckworthy/bin/accelerate\", line 8, in <module>\r\n sys.exit(main())\r\n File \"/home/flck/.conda/envs/vcheckworthy/lib/python3.9/site-packages/accelerate/commands/accelerate_cli.py\", line 45, in main\r\n args.func(args)\r\n File \"/home/flck/.conda/envs/vcheckworthy/lib/python3.9/site-packages/accelerate/commands/launch.py\", line 918, in launch_command\r\n simple_launcher(args)\r\n File \"/home/flck/.conda/envs/vcheckworthy/lib/python3.9/site-packages/accelerate/commands/launch.py\", line 580, in simple_launcher\r\n raise subprocess.CalledProcessError(returncode=process.returncode, cmd=cmd)\r\nsubprocess.CalledProcessError: Command '['/home/flck/.conda/envs/vcheckworthy/bin/python', 'wav2vec/run_wav2vec2_pretraining_no_trainer.py', '--cache_dir=/dev/shm/', '--dataset_name=librispeech_asr', '--dataset_config_names', 'test', '--dataset_split_names', 'test', '--model_name_or_path=patrickvonplaten/wav2vec2-base-v2', '--output_dir=./wav2vec2-pretrained-demo', '--max_train_steps=20000', '--num_warmup_steps=32000', '--gradient_accumulation_steps=8', '--learning_rate=0.005', '--weight_decay=0.01', '--max_duration_in_seconds=20.0', '--min_duration_in_seconds=2.0', '--logging_steps=1', '--saving_steps=10000', '--per_device_train_batch_size=8', '--per_device_eval_batch_size=8', '--adam_beta1=0.9', '--adam_beta2=0.98', '--adam_epsilon=1e-06', '--gradient_checkpointing', '--mask_time_prob=0.65', '--mask_time_length=10']' returned non-zero exit status 1.\r\n\r\n/var/lib/slurm-llnl/slurmd/job153086/slurm_script: line 45: EOL: command not found\r\n\r\n\r\n---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\r\n\r\n\r\nWHEN I specify this in the command args \r\n\r\naccelerate launch wav2vec/run_wav2vec2_pretraining_no_trainer.py --cache_dir=\"/dev/shm/\" --dataset_name=\"librispeech_asr\" --dataset_config_names test --dataset_split_names test --model_name_or_path=\"patrickvonplaten/wav2vec2-base-v2\" --output_dir=\"./wav2vec2-pretrained-demo\"` --audio_column_name=[\"id\", \"audio\", \"duration_ms\", \"text\"] `--max_train_steps=\"20000\" --num_warmup_steps=\"32000\" --gradient_accumulation_steps=\"8\" --learning_rate=\"0.005\" --weight_decay=\"0.01\" --max_duration_in_seconds=\"20.0\" --min_duration_in_seconds=\"2.0\" --logging_steps=\"1\" --saving_steps=\"10000\" --per_device_train_batch_size=\"8\" --per_device_eval_batch_size=\"8\" --adam_beta1=\"0.9\" --adam_beta2=\"0.98\" --adam_epsilon=\"1e-06\" --gradient_checkpointing --mask_time_prob=\"0.65\" --mask_time_length=\"10\"\r\n\r\n\r\nthen the error is: \r\n\r\n\r\n`run_wav2vec2_pretraining_no_trainer.py: error: unrecognized arguments: audio, duration_ms, 
text]`\r\n\r\n\r\n\r\n\r\n\r\n---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\r\n\r\n\r\n\r\n\r\nWHEN I only add \"id\": \r\naccelerate launch wav2vec/run_wav2vec2_pretraining_no_trainer.py --cache_dir=\"/dev/shm/\" --dataset_name=\"librispeech_asr\" --dataset_config_names test --dataset_split_names test --model_name_or_path=\"patrickvonplaten/wav2vec2-base-v2\" --output_dir=\"./wav2vec2-pretrained-demo\"` --audio_column_name=[\"id\"] `--max_train_steps=\"20000\" --num_warmup_steps=\"32000\" --gradient_accumulation_steps=\"8\" --learning_rate=\"0.005\" --weight_decay=\"0.01\" --max_duration_in_seconds=\"20.0\" --min_duration_in_seconds=\"2.0\" --logging_steps=\"1\" --saving_steps=\"10000\" --per_device_train_batch_size=\"8\" --per_device_eval_batch_size=\"8\" --adam_beta1=\"0.9\" --adam_beta2=\"0.98\" --adam_epsilon=\"1e-06\" --gradient_checkpointing --mask_time_prob=\"0.65\" --mask_time_length=\"10\"\r\n\r\n\r\n\r\n\r\nTraceback (most recent call last):\r\n File \"/home/flck/wav2vec/run_wav2vec2_pretraining_no_trainer.py\", line 785, in <module>\r\n main()\r\n File \"/home/flck/wav2vec/run_wav2vec2_pretraining_no_trainer.py\", line 478, in main\r\n raw_datasets = raw_datasets.cast_column(\r\n File \"/home/flck/.conda/envs/vcheckworthy/lib/python3.9/site-packages/datasets/dataset_dict.py\", line 310, in cast_column\r\n return DatasetDict({k: dataset.cast_column(column=column, feature=feature) for k, dataset in self.items()})\r\n File \"/home/flck/.conda/envs/vcheckworthy/lib/python3.9/site-packages/datasets/dataset_dict.py\", line 310, in <dictcomp>\r\n return DatasetDict({k: dataset.cast_column(column=column, feature=feature) for k, dataset in self.items()})\r\n File \"/home/flck/.conda/envs/vcheckworthy/lib/python3.9/site-packages/datasets/fingerprint.py\", line 511, in wrapper\r\n out = func(dataset, *args, **kwargs)\r\n File \"/home/flck.conda/envs/vcheckworthy/lib/python3.9/site-packages/datasets/arrow_dataset.py\", line 2082, in cast_column\r\n dataset._data = dataset._data.cast(dataset.features.arrow_schema)\r\n File \"/home/flck/.conda/envs/vcheckworthy/lib/python3.9/site-packages/datasets/table.py\", line 1152, in cast\r\n return MemoryMappedTable(table_cast(self.table, *args, **kwargs), self.path, replays)\r\n File \"/home/flck/.conda/envs/vcheckworthy/lib/python3.9/site-packages/datasets/table.py\", line 2290, in table_cast\r\n return cast_table_to_schema(table, schema)\r\n File \"/home/flck/.conda/envs/vcheckworthy/lib/python3.9/site-packages/datasets/table.py\", line 2248, in cast_table_to_schema\r\n raise ValueError(f\"Couldn't cast\\n{table.schema}\\nto\\n{features}\\nbecause column names don't match\")\r\n```\r\nValueError: Couldn't cast\r\nid: string\r\naudio: struct<bytes: binary, path: string>\r\n child 0, bytes: binary\r\n child 1, path: string\r\nduration_ms: 
int32\r\ntext: string\r\n```\r\n-- schema metadata --\r\nhuggingface: '{\"info\": {\"features\": {\"id\": {\"dtype\": \"string\", \"_type\": \"' + 163\r\nto\r\n`{'id': Value(dtype='string', id=None), 'audio': Audio(sampling_rate=16000, mono=True, decode=True, id=None), 'duration_ms': Value(dtype='int32', id=None), 'text': Value(dtype='string', id=None), '[id]': Audio(sampling_rate=16000, mono=True, decode=True, id=None)}`\r\nbecause column names don't match\r\nwandb: Waiting for W&B process to finish... (failed 1). Press Control-C to abort syncing.\r\nwandb: - 0.028 MB of 0.028 MB uploaded (0.000 MB deduped)\r\nwandb: \\ 0.028 MB of 0.032 MB uploaded (0.000 MB deduped)\r\nwandb: | 0.035 MB of 0.035 MB uploaded (0.000 MB deduped)\r\nwandb: 🚀 View run helpful-voice-29 at: https://wandb.ai/flck/wav2vec2-pretrained-demo/runs/tnmnebg6\r\nwandb: Synced 5 W&B file(s), 0 media file(s), 0 artifact file(s) and 0 other file(s)\r\nwandb: Find logs at: ./wandb/run-20230523_133136-tnmnebg6/logs\r\nTraceback (most recent call last):\r\n File \"/home/flck/.conda/envs/vcheckworthy/bin/accelerate\", line 8, in <module>\r\n sys.exit(main())\r\n File \"/home/flck.conda/envs/vcheckworthy/lib/python3.9/site-packages/accelerate/commands/accelerate_cli.py\", line 45, in main\r\n args.func(args)\r\n File \"/homeflck/.conda/envs/vcheckworthy/lib/python3.9/site-packages/accelerate/commands/launch.py\", line 918, in launch_command\r\n simple_launcher(args)\r\n File \"/home/flck.conda/envs/vcheckworthy/lib/python3.9/site-packages/accelerate/commands/launch.py\", line 580, in simple_launcher\r\n raise subprocess.CalledProcessError(returncode=process.returncode, cmd=cmd)\r\nsubprocess.CalledProcessError: Command '['/home/flck/.conda/envs/vcheckworthy/bin/python', 'wav2vec/run_wav2vec2_pretraining_no_trainer.py', '--cache_dir=/dev/shm/', '--dataset_name=librispeech_asr', '--dataset_config_names', 'test', '--dataset_split_names', 'test', '--model_name_or_path=patrickvonplaten/wav2vec2-base-v2', '--output_dir=./wav2vec2-pretrained-demo', '--audio_column_name=[id]', '--max_train_steps=20000', '--num_warmup_steps=32000', '--gradient_accumulation_steps=8', '--learning_rate=0.005', '--weight_decay=0.01', '--max_duration_in_seconds=20.0', '--min_duration_in_seconds=2.0', '--logging_steps=1', '--saving_steps=10000', '--per_device_train_batch_size=8', '--per_device_eval_batch_size=8', '--adam_beta1=0.9', '--adam_beta2=0.98', '--adam_epsilon=1e-06', '--gradient_checkpointing', '--mask_time_prob=0.65', '--mask_time_length=10']' returned non-zero exit status 1.\r\n\r\nCopy-and-paste the text below in your GitHub issue and FILL OUT the two last points.\r\n\r\n- `transformers` version: 4.29.2\r\n- Platform: Linux-5.4.204-ql-generic-12.0-19-x86_64-with-glibc2.31\r\n- Python version: 3.9.7\r\n- Huggingface_hub version: 0.14.1\r\n- Safetensors version: not installed\r\n- PyTorch version (GPU?): 1.11.0 (False)\r\n- Tensorflow version (GPU?): not installed (NA)\r\n- Flax version (CPU?/GPU?/TPU?): not installed (NA)\r\n- Jax version: not installed\r\n- JaxLib version: not installed\r\n- Using GPU in script?: <fill in>\r\n- Using distributed or parallel set-up in script?: <fill in>\r\n\r\n/var/lib/slurm-llnl/slurmd/job153092/slurm_script: line 50: EOL: `command not found`\r\n/var/lib/slurm-llnl/slurmd/job153092/slurm_script: line 53: /home/flck/wav2vec/run_wav2vec2_pretraining_no_trainer.py: `Permission denied`\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n",
"Hey @flckv - great! Glad updating to the latest packages fixed the previous error. Can you try setting:\r\n```\r\n--audio_column_name=\"audio\"\r\n```\r\nHere we just need to pick-out the correct column name for the audio inputs (which in this case is `\"audio\"`)",
"hey @sanchit-gandhi yes it is great! the column name is still not interpreted : \r\n\r\n\r\n\r\nI added what you said:\r\n\r\n\r\naccelerate launch wav2vec/run_wav2vec2_pretraining_no_trainer.py --cache_dir=\"/dev/shm/\" --dataset_name=\"librispeech_asr\" --dataset_config_names test --dataset_split_names test --model_name_or_path=\"patrickvonplaten/wav2vec2-base-v2\" --output_dir=\"./wav2vec2-pretrained-demo\" `--audio_column_name=\"audio\" `--max_train_steps=\"20000\" --num_warmup_steps=\"32000\" --gradient_accumulation_steps=\"8\" --learning_rate=\"0.005\" --weight_decay=\"0.01\" --max_duration_in_seconds=\"20.0\" --min_duration_in_seconds=\"2.0\" --logging_steps=\"1\" --saving_steps=\"10000\" --per_device_train_batch_size=\"8\" --per_device_eval_batch_size=\"8\" --adam_beta1=\"0.9\" --adam_beta2=\"0.98\" --adam_epsilon=\"1e-06\" --gradient_checkpointing --mask_time_prob=\"0.65\" --mask_time_length=\"10\"\r\n\r\n\r\n\r\ntried also: \r\n\r\n--audio_column_name='audio'\r\n--audio_column_name=['audio']\r\n--audio_column_name=[\"audio\"]\r\n\r\n---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\r\nBUT :\r\n\r\n---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\r\n\r\n\r\nTraceback (most recent call last):\r\n File \"/home/flck/wav2vec/run_wav2vec2_pretraining_no_trainer.py\", line 785, in <module>\r\n main()\r\n File \"/home/flck/wav2vec/run_wav2vec2_pretraining_no_trainer.py\", line 513, in main\r\n prepare_dataset(raw_datasets[\"train\"]), ` raise KeyError(f\"Column {key} not in the dataset. Current columns in the dataset: {columns}\") KeyError: \"Column args.audio_column_name not in the dataset. 
Current columns in the dataset: ['id', 'audio', 'duration_ms', 'text']\"`\r\n File \"/home/flck/wav2vec/run_wav2vec2_pretraining_no_trainer.py\", line 493, in prepare_dataset\r\n ` sample = batch['args.audio_column_name'] `\r\n File \"/home/flck/.conda/envs/vcheckworthy/lib/python3.9/site-packages/datasets/arrow_dataset.py\", line 2778, in __getitem__\r\n return self._getitem(key)\r\n File \"/home/flck/.conda/envs/vcheckworthy/lib/python3.9/site-packages/datasets/arrow_dataset.py\", line 2762, in _getitem\r\n pa_subtable = query_table(self._data, key, indices=self._indices if self._indices is not None else None)\r\n File \"/home/flcks/.conda/envs/vcheckworthy/lib/python3.9/site-packages/datasets/formatting/formatting.py\", line 575, in query_table\r\n _check_valid_column_key(key, table.column_names)\r\n File \"/home/flck/.conda/envs/vcheckworthy/lib/python3.9/site-packages/datasets/formatting/formatting.py\", line 515, in _check_valid_column_key\r\n raise KeyError(f\"Column {key} not in the dataset. Current columns in the dataset: {columns}\")\r\n`KeyError: \"Column args.audio_column_name not in the dataset. Current columns in the dataset: ['id', 'audio', 'duration_ms', 'text']\"`\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n",
"Can you double check you haven't changed the parser args for `audio_column_name`?\r\nhttps://github.com/huggingface/transformers/blob/50a56bedb6ec8a4f9ba455c184d187cfee2e9c81/examples/pytorch/speech-pretraining/run_wav2vec2_pretraining_no_trainer.py#L112-L117\r\n\r\nI can't see the check that is erroring out for you on the example script. Your error is occurring on line 513. If I check line 513 in the example, I get something completely different to the audio column name check: https://github.com/huggingface/transformers/blob/3d7baef1141e22520901310593c106b15493e6a9/examples/pytorch/speech-pretraining/run_wav2vec2_pretraining_no_trainer.py#L513\r\n\r\nCould you make sure you are using the latest version of the script? You can just copy it from main.",
"@sanchit-gandhi thanks, you were right. It works now."
] | 1,684 | 1,685 | 1,685 |
NONE
| null |
### System Info
**Pre-training wav2vec demo**
Running the demo from https://github.com/huggingface/transformers/blob/main/examples/pytorch/speech-pretraining/README.md gives the error:
```
File "./run_wav2vec2_pretraining_no_trainer.py", line 783, in <module>
    main()
File "./run_wav2vec2_pretraining_no_trainer.py", line 510, in main
    vectorized_datasets = raw_datasets.map(
TypeError: 'type' object is not subscriptable
```
### Who can help?
@sanchit-gandhi
@pacman100
@sgugger
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Just reproducing the demo example with the provided script and dataset:
https://github.com/huggingface/transformers/blob/main/examples/pytorch/speech-pretraining/README.md#demo
### Expected behavior
The output should be a pre-trained wav2vec model on the LibriSpeech dataset.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23472/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23472/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23471
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23471/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23471/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23471/events
|
https://github.com/huggingface/transformers/pull/23471
| 1,717,231,489 |
PR_kwDOCUB6oc5Q5D1T
| 23,471 |
Fix PretrainedConfig `min_length` docstring
|
{
"login": "joaoareis",
"id": 34096208,
"node_id": "MDQ6VXNlcjM0MDk2MjA4",
"avatar_url": "https://avatars.githubusercontent.com/u/34096208?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joaoareis",
"html_url": "https://github.com/joaoareis",
"followers_url": "https://api.github.com/users/joaoareis/followers",
"following_url": "https://api.github.com/users/joaoareis/following{/other_user}",
"gists_url": "https://api.github.com/users/joaoareis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/joaoareis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joaoareis/subscriptions",
"organizations_url": "https://api.github.com/users/joaoareis/orgs",
"repos_url": "https://api.github.com/users/joaoareis/repos",
"events_url": "https://api.github.com/users/joaoareis/events{/privacy}",
"received_events_url": "https://api.github.com/users/joaoareis/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,684 | 1,684 | 1,684 |
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Currently, the default value for the `min_length` parameter in `PretrainedConfig` is set to 0. However, the docstring says that the default value is 10. This PR fixes the docstring to reflect the correct default value.
Default value:
https://github.com/huggingface/transformers/blob/8aa8513f715faaa84cef1abd57ea4ded96c80e44/src/transformers/configuration_utils.py#L286
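As a quick sanity check (a minimal sketch added for illustration, assuming a default-constructed config; it is not part of the PR itself), the runtime default can be verified directly:
```python
# Hypothetical verification snippet: confirm the runtime default of `min_length`.
from transformers import PretrainedConfig

config = PretrainedConfig()
print(config.min_length)  # prints 0, not the 10 claimed by the old docstring
```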
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23471/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23471/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23471",
"html_url": "https://github.com/huggingface/transformers/pull/23471",
"diff_url": "https://github.com/huggingface/transformers/pull/23471.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23471.patch",
"merged_at": 1684514915000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23470
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23470/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23470/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23470/events
|
https://github.com/huggingface/transformers/issues/23470
| 1,717,132,413 |
I_kwDOCUB6oc5mWVx9
| 23,470 |
An ability to pass a function to tokenizer to transform prompt
|
{
"login": "nikitastaf1996",
"id": 61453511,
"node_id": "MDQ6VXNlcjYxNDUzNTEx",
"avatar_url": "https://avatars.githubusercontent.com/u/61453511?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nikitastaf1996",
"html_url": "https://github.com/nikitastaf1996",
"followers_url": "https://api.github.com/users/nikitastaf1996/followers",
"following_url": "https://api.github.com/users/nikitastaf1996/following{/other_user}",
"gists_url": "https://api.github.com/users/nikitastaf1996/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nikitastaf1996/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nikitastaf1996/subscriptions",
"organizations_url": "https://api.github.com/users/nikitastaf1996/orgs",
"repos_url": "https://api.github.com/users/nikitastaf1996/repos",
"events_url": "https://api.github.com/users/nikitastaf1996/events{/privacy}",
"received_events_url": "https://api.github.com/users/nikitastaf1996/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @nikitastaf1996, thanks for raising this issue. \r\n\r\nSo that we can best understand the proposed feature - could you explain a bit more about the utility this would unlock? At the moment, it seems that a user could transform their prompts as needed before passing to the pipeline e.g.:\r\n\r\n```python\r\ndef transform(prompt: str) -> str:\r\n ...\r\n\r\nprompt = transform(original_prompt_string)\r\npipeline = TextGenerationPipeline(model=model, tokenizer=tokenizer, max_new_tokens=256, temperature=0.1)\r\noutputs = pipeline(prompt)\r\n```\r\n\r\ncc @Narsil @gante ",
"I will give you code. You will decide if it nessesary to implement. After some time has passed i am not so sure it is nesserally at all. It might be niche nuisance exclusive to me,\r\nCurrent code i use:\r\n```\r\n#@title Custom llm\r\nfrom langchain.llms.base import LLM\r\nfrom typing import Optional, List, Mapping, Any\r\nfrom transformers import TextGenerationPipeline\r\nclass CustomLLM(LLM):\r\n \r\n @property\r\n def _llm_type(self) -> str:\r\n return \"custom\"\r\n \r\n def _call(self, prompt: str, stop: Optional[List[str]] = None) -> str:\r\n pipeline = TextGenerationPipeline(model=model, tokenizer=tokenizer,max_new_tokens=256,temperature=0.1)\r\n prompt = f\"\"\"User:{prompt}\r\n Assistant:\"\"\"\r\n prompt_len = len(prompt)\r\n result = pipeline(prompt)[0][\"generated_text\"][prompt_len:]\r\n if stop is not None:\r\n for stop_string in stop:\r\n index = result.find(stop_string) # find the index of the substring\r\n if index != -1:\r\n result = result[:index + len(stop_string)]\r\n break\r\n return result\r\n```\r\n\r\n\r\n```\r\n#@title Langchain Agent\r\nfrom langchain.agents import load_tools,initialize_agent,AgentType\r\n\r\nllm = CustomLLM()\r\ntools = load_tools([\"terminal\",\"python_repl\"])\r\nagent = initialize_agent(tools, llm, agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION,verbose=True,handle_parsing_errors=True)\r\nagent.run(\"Please install ffmpeg\")\r\n```\r\n\r\nWhat i would like to see:\r\n```\r\nfrom langchain.agents import load_tools,initialize_agent,AgentType\r\nfrom langchain.llms import HuggingFacePipeline\r\nfrom transformers import pipeline\r\n\r\npipe = pipeline(\"text-generation\", model=model, tokenizer=tokenizer,max_new_tokens=256,temperature=0.1,transform=TRANSFORMFUNCTION)\r\nllm = HuggingFacePipeline(pipeline=pipe)\r\ntools = load_tools([\"terminal\",\"python_repl\"])\r\nagent = initialize_agent(tools, llm, agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION,verbose=True,handle_parsing_errors=True)\r\nagent.run(\"Please install ffmpeg\")\r\n```",
"I think your custom LLM is perfectly fine imo.\r\n\r\nYou have other ways to define it actually. Introducting `transform` function would bloat everything up IMHO and what you really need to do is just send a preprompted text to the pipelines like what you are doing. The `stop` part can also be implemented with `generate` arguments I think. (`stop_sequence`)\r\n\r\nhttps://github.com/huggingface/transformers/blob/main/src/transformers/pipelines/text_generation.py#L138",
"The it's finished. "
] | 1,684 | 1,684 | 1,684 |
NONE
| null |
### Feature request
Models often work best with a specific prompt template. And sometimes [prefix](https://huggingface.co/docs/transformers/v4.29.1/en/main_classes/configuration#transformers.PretrainedConfig.prefix) is not enough.
An ability to pass a function that transforms the prompt would be excellent.
Example
```
def transform_prompt(prompt: str) -> str:
    return f"""
### Instruction: {prompt}
### Assistant:
"""

pipeline = TextGenerationPipeline(model=model, tokenizer=tokenizer, max_new_tokens=256, temperature=0.1, transform=transform_prompt)
```
Providing a text template is also an option, but I believe a function would be a more holistic approach.
### Motivation
I am a little frustrated with the need to define a CustomLLM in langchain in order to accommodate the model's prompt template. And I believe that's not the only use case for it. Many models require a special prompt template.
### Your contribution
I am sorry, but no. I am not a Python programmer.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23470/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23470/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23469
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23469/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23469/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23469/events
|
https://github.com/huggingface/transformers/issues/23469
| 1,717,107,612 |
I_kwDOCUB6oc5mWPuc
| 23,469 |
[Agents and Tools] Custom tool not showing up in the toolbox list
|
{
"login": "sayakpaul",
"id": 22957388,
"node_id": "MDQ6VXNlcjIyOTU3Mzg4",
"avatar_url": "https://avatars.githubusercontent.com/u/22957388?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sayakpaul",
"html_url": "https://github.com/sayakpaul",
"followers_url": "https://api.github.com/users/sayakpaul/followers",
"following_url": "https://api.github.com/users/sayakpaul/following{/other_user}",
"gists_url": "https://api.github.com/users/sayakpaul/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sayakpaul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sayakpaul/subscriptions",
"organizations_url": "https://api.github.com/users/sayakpaul/orgs",
"repos_url": "https://api.github.com/users/sayakpaul/repos",
"events_url": "https://api.github.com/users/sayakpaul/events{/privacy}",
"received_events_url": "https://api.github.com/users/sayakpaul/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @sgugger @LysandreJik ",
"Thanks for the report! I can confirm that `load_tool` does not properly set the name as described in the `tool_config.json`. One workaround is to implement it properly in the class of your tool (setting the `name` attribute like you did for the description) but the tool config should probably override the non-defined attribute. Will work on a fix this morning!",
"Thank you!\r\n\r\nFor the custom tool, I referred to https://huggingface.co/spaces/huggingface-tools/text-to-image/blob/main/text_to_image.py#L14 and saw it didn't also assign the `name` member. ",
"Yes but this one is in the default tools, so is loaded differently. Bug should be fixed soon in any case.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,684 | 1,687 | 1,687 |
MEMBER
| null |
I have coded an inpainter tool:
https://huggingface.co/spaces/sayakpaul/inpainting-tool/
Then I load the tool as follows:
```py
from transformers import load_tool
inpainter = load_tool("sayakpaul/inpainting-tool")
```
When running `print(f"Description: '{inpainter.description}'")`, it shows the output as expected:
> Description: 'This is a tool that inpaints some parts of an image StableDiffusionInpaintPipeline according to a prompt. It takes three inputs: `image`, which should be the original image which will be inpainted, `mask_image`, which should be used to determine which parts of the original image (stored in the `image` variable) should be inpainted, and `prompt`, which should be the prompt to use to guide the inpainting process. It returns the inpainted image.'
Then, I try to add this tool to the list of existing tools:
```py
from transformers import HfAgent
tools = [inpainter]
agent = HfAgent(
    "https://api-inference.huggingface.co/models/bigcode/starcoder",
    additional_tools=tools
)
```
However, the tool is not added to the toolkit (it just leaves a bullet point):
```py
print("\n".join([f"- {a}" for a in agent.toolbox.keys()]))
```
```
- document_qa
- image_captioner
- image_qa
- image_segmenter
- transcriber
- summarizer
- text_classifier
- text_qa
- text_reader
- translator
- image_transformer
- text_downloader
- image_generator
- video_generator
-
```
As a result, when I run:
```py
image = agent.run(
    "Inpaint the image: 'a cute dinosaur'",
    image=orig_image,
    mask_image=mask_image,
    return_code=True
)
```
this is the code that gets generated:
```
==Explanation from the agent==
I will use the following tools: `image_transformer` to transform the image, then `image_segmenter` to create a mask, then `image_transformer` to inpaint the image.
==Code generated by the agent==
prompt = "a cute dinosaur"
image = image_transformer(image, prompt)
mask = image_segmenter(image, prompt)
inpainted_image = image_transformer(image, prompt, mask)
```
As we can see, there's no mention of the `image_inpainter` here.
Is there anything I am missing out on?
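One stop-gap I considered (a hedged sketch, not verified; it assumes the agent's toolbox is keyed on each tool's `name` attribute, which the loaded tool does not seem to set) is to assign the name explicitly before building the agent:
```python
# Hypothetical workaround: give the loaded tool an explicit name before registering it.
inpainter.name = "image_inpainter"  # assumed key the agent would use in its toolbox

agent = HfAgent(
    "https://api-inference.huggingface.co/models/bigcode/starcoder",
    additional_tools=[inpainter],
)
print("image_inpainter" in agent.toolbox)  # expected to be True if the name is picked up
```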
Here's my Colab Notebook for reproduction: https://colab.research.google.com/drive/1BuNz2-7ePeaRaeI7yNc3kqDOzfdXOsUE?usp=sharing
I followed this guide during the process:
https://huggingface.co/docs/transformers/custom_tools#using-custom-tools
`transformers-cli env` gives:
```bash
- `transformers` version: 4.29.2
- Platform: Linux-5.15.107+-x86_64-with-glibc2.31
- Python version: 3.10.11
- Huggingface_hub version: 0.14.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.1+cu118 (True)
- Tensorflow version (GPU?): 2.12.0 (True)
- Flax version (CPU?/GPU?/TPU?): 0.6.9 (gpu)
- Jax version: 0.4.8
- JaxLib version: 0.4.7
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23469/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23469/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23468
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23468/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23468/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23468/events
|
https://github.com/huggingface/transformers/pull/23468
| 1,717,063,092 |
PR_kwDOCUB6oc5Q4e9r
| 23,468 |
[`RWKV`] Rwkv fix for 8bit inference
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"^ for the snippet in the description, could you add what's printed out please? :) ",
"_The documentation is not available anymore as the PR was closed or merged._",
"Waiting For This PR to Merge...\r\n\r\nMerge this PR ASAP!",
"@amyeroberts Please Review this PR Fast!",
"Thanks! Just updated the comment! ",
"Thanks You So Much @younesbelkada @amyeroberts For Your Work...\r\n\r\nHope it Works without getting into another problem now :)"
] | 1,684 | 1,684 | 1,684 |
CONTRIBUTOR
| null |
# What does this PR do?
Fixes #23467
The RWKV architecture scales down some linear layer weights by a certain factor at various stages for inference and training.
In the case of 8bit models, this leads to an error because the weight matrix is now an `int8` matrix. Therefore, to apply the scaling, one needs to scale the quantization statistics stored inside the `SCB` attribute instead.
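A minimal sketch of the idea (illustrative only, not the exact diff in this PR; it assumes the quantized weight exposes its statistics through an `SCB` attribute, as bitsandbytes `Int8Params` does once the model is dispatched to GPU):
```python
# Illustrative sketch, not the actual patch: rescale the SCB statistics for 8bit weights,
# and fall back to scaling the weight itself for full/half precision layers.
def rescale_linear_weight(linear, block_id, rescale_every):
    factor = 2 ** int(block_id // rescale_every)
    if hasattr(linear.weight, "SCB"):
        linear.weight.SCB.div_(factor)  # float quantization statistics can be divided in place
    else:
        linear.weight.div_(factor)  # fp32/fp16 weights can be scaled directly
```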
cc @amyeroberts
```python
from transformers import AutoTokenizer, AutoModelForCausalLM, GenerationConfig
import torch
model_id = "RWKV/rwkv-4-1b5-pile"
model = AutoModelForCausalLM.from_pretrained(model_id, load_in_8bit=True, device_map={"":0})
tokenizer = AutoTokenizer.from_pretrained(model_id)
generation_config = GenerationConfig(max_new_tokens=20, pad_token_id=tokenizer.eos_token_id)
question = "Hello my name is"
inputs = tokenizer(question, return_tensors="pt").to(0)
output_int8 = model.generate((inputs["input_ids"]), generation_config=generation_config)
print(tokenizer.decode(output_int8[0], skip_special_tokens=True))
>>> Hello my name is John and I am a student at the University of Texas at Austin. I am a member of the
model_fp16 = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map={"":1})
output_fp16 = model_fp16.generate((inputs["input_ids"]), generation_config=generation_config)
print(tokenizer.decode(output_fp16[0], skip_special_tokens=True))
>>> Hello my name is John and I am a student at the University of South Florida. I am a member of the Alpha
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23468/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23468/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23468",
"html_url": "https://github.com/huggingface/transformers/pull/23468",
"diff_url": "https://github.com/huggingface/transformers/pull/23468.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23468.patch",
"merged_at": 1684505545000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23467
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23467/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23467/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23467/events
|
https://github.com/huggingface/transformers/issues/23467
| 1,716,980,493 |
I_kwDOCUB6oc5mVwsN
| 23,467 |
RuntimeError: result type Float can't be cast to the desired output type Char
|
{
"login": "TheFaheem",
"id": 104909089,
"node_id": "U_kgDOBkDJIQ",
"avatar_url": "https://avatars.githubusercontent.com/u/104909089?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TheFaheem",
"html_url": "https://github.com/TheFaheem",
"followers_url": "https://api.github.com/users/TheFaheem/followers",
"following_url": "https://api.github.com/users/TheFaheem/following{/other_user}",
"gists_url": "https://api.github.com/users/TheFaheem/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TheFaheem/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TheFaheem/subscriptions",
"organizations_url": "https://api.github.com/users/TheFaheem/orgs",
"repos_url": "https://api.github.com/users/TheFaheem/repos",
"events_url": "https://api.github.com/users/TheFaheem/events{/privacy}",
"received_events_url": "https://api.github.com/users/TheFaheem/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @younesbelkada regarding 8bit loading ",
"Hi @TheFaheem \r\nThanks for the issue, it should be fixed in #23468",
"> Hi @TheFaheem Thanks for the issue, it should be fixed in #23468\r\n\r\nYo! Thanks Man. I Thought This Issue Takes Days To Get to Your Eyes.\r\n\r\nThanks For Your Lightning Speed Fix.\r\n\r\nWaiting For That PR to Get Merged...",
"> cc @younesbelkada regarding 8bit loading\r\n\r\nReview That PR ASAP!",
"@TheFaheem We understand that you want the issue resolved as soon as possible, but many of us working on the repo are busy and have many pieces of work to attend to. Spamming messages here and on the PR won't get the PR merged quicker, and isn't sustainable behaviour: if everyone does this then we're unable to meaningfully address notifications. ",
"I Apologise for my Impatience and for interrupting you. Thanks for Your Work For the Community!"
] | 1,684 | 1,684 | 1,684 |
NONE
| null |
### System Info
Colab Configuration:
- `transformers` version: 4.30.0.dev0
- Platform: Linux-5.15.107+-x86_64-with-glibc2.31
- Python version: 3.10.11
- Huggingface_hub version: 0.14.1
- Safetensors version: not installed
- PyTorch version (GPU?): 2.0.1+cu118 (True)
- Tensorflow version (GPU?): 2.12.0 (True)
- Flax version (CPU?/GPU?/TPU?): 0.6.9 (gpu)
- Jax version: 0.4.8
- JaxLib version: 0.4.7
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
@ArthurZucker @gante @sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I Ran The Official Code Example:
```
from transformers import AutoTokenizer, AutoModelForCausalLM, GenerationConfig
import torch
model_id = "RWKV/rwkv-raven-1b5"
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model.eval()
if torch.__version__ >= "2":
    torch.compile(model)
generation_config = GenerationConfig(max_new_tokens=1000, temperature=0.7, top_k=35, top_p=0.90, pad_token_id= tokenizer.eos_token_id)
question = "Write me a Poem About NLP"
prompt = f"### Instruction: {question}\n### Response:"
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate((inputs["input_ids"]), generation_config=generation_config)
print(output)
```
It Works Fine!
I ran the same code with some additional args in the `from_pretrained()` call when initialising the model:
```
from transformers import AutoTokenizer, AutoModelForCausalLM, GenerationConfig
import torch
model_id = "RWKV/rwkv-raven-1b5"
model = AutoModelForCausalLM.from_pretrained(model_id, low_cpu_mem_usage=True, load_in_8bit=True, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_id)
model.eval()
if torch.__version__ >= "2":
    torch.compile(model)
generation_config = GenerationConfig(max_new_tokens=1000, temperature=0.7, top_k=35, top_p=0.90, pad_token_id= tokenizer.eos_token_id)
question = "Tell me How RWKV RNNs are Parallelizable"
prompt = f"### Instruction: {question}\n### Response:"
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate((inputs["input_ids"]), generation_config=generation_config)
print(output)
```
But When I Ran This Code, I Got The Following Error:
```
/usr/local/lib/python3.10/dist-packages/transformers/generation/utils.py:1448: UserWarning: You are calling .generate() with the `input_ids` being on a device type different than your model's device. `input_ids` is on cpu, whereas the model is on cuda. You may experience unexpected behaviors or slower generation. Please make sure that you have put `input_ids` to the correct device by calling for example input_ids = input_ids.to('cuda') before running `.generate()`.
warnings.warn(
╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ in <cell line: 7>:7 │
│ │
│ /usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py:115 in decorate_context │
│ │
│ 112 │ @functools.wraps(func) │
│ 113 │ def decorate_context(*args, **kwargs): │
│ 114 │ │ with ctx_factory(): │
│ ❱ 115 │ │ │ return func(*args, **kwargs) │
│ 116 │ │
│ 117 │ return decorate_context │
│ 118 │
│ │
│ /usr/local/lib/python3.10/dist-packages/transformers/generation/utils.py:1518 in generate │
│ │
│ 1515 │ │ │ │ ) │
│ 1516 │ │ │ │
│ 1517 │ │ │ # 11. run greedy search │
│ ❱ 1518 │ │ │ return self.greedy_search( │
│ 1519 │ │ │ │ input_ids, │
│ 1520 │ │ │ │ logits_processor=logits_processor, │
│ 1521 │ │ │ │ stopping_criteria=stopping_criteria, │
│ │
│ /usr/local/lib/python3.10/dist-packages/transformers/generation/utils.py:2335 in greedy_search │
│ │
│ 2332 │ │ │ model_inputs = self.prepare_inputs_for_generation(input_ids, **model_kwargs) │
│ 2333 │ │ │ │
│ 2334 │ │ │ # forward pass to get next token │
│ ❱ 2335 │ │ │ outputs = self( │
│ 2336 │ │ │ │ **model_inputs, │
│ 2337 │ │ │ │ return_dict=True, │
│ 2338 │ │ │ │ output_attentions=output_attentions, │
│ │
│ /usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py:1501 in _call_impl │
│ │
│ 1498 │ │ if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks │
│ 1499 │ │ │ │ or _global_backward_pre_hooks or _global_backward_hooks │
│ 1500 │ │ │ │ or _global_forward_hooks or _global_forward_pre_hooks): │
│ ❱ 1501 │ │ │ return forward_call(*args, **kwargs) │
│ 1502 │ │ # Do not call functions when jit is used │
│ 1503 │ │ full_backward_hooks, non_full_backward_hooks = [], [] │
│ 1504 │ │ backward_pre_hooks = [] │
│ │
│ /usr/local/lib/python3.10/dist-packages/accelerate/hooks.py:165 in new_forward │
│ │
│ 162 │ │ │ with torch.no_grad(): │
│ 163 │ │ │ │ output = old_forward(*args, **kwargs) │
│ 164 │ │ else: │
│ ❱ 165 │ │ │ output = old_forward(*args, **kwargs) │
│ 166 │ │ return module._hf_hook.post_forward(module, output) │
│ 167 │ │
│ 168 │ module.forward = new_forward │
│ │
│ /usr/local/lib/python3.10/dist-packages/transformers/models/rwkv/modeling_rwkv.py:780 in forward │
│ │
│ 777 │ │ """ │
│ 778 │ │ return_dict = return_dict if return_dict is not None else self.config.use_return │
│ 779 │ │ │
│ ❱ 780 │ │ rwkv_outputs = self.rwkv( │
│ 781 │ │ │ input_ids, │
│ 782 │ │ │ inputs_embeds=inputs_embeds, │
│ 783 │ │ │ state=state, │
│ │
│ /usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py:1501 in _call_impl │
│ │
│ 1498 │ │ if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks │
│ 1499 │ │ │ │ or _global_backward_pre_hooks or _global_backward_hooks │
│ 1500 │ │ │ │ or _global_forward_hooks or _global_forward_pre_hooks): │
│ ❱ 1501 │ │ │ return forward_call(*args, **kwargs) │
│ 1502 │ │ # Do not call functions when jit is used │
│ 1503 │ │ full_backward_hooks, non_full_backward_hooks = [], [] │
│ 1504 │ │ backward_pre_hooks = [] │
│ │
│ /usr/local/lib/python3.10/dist-packages/accelerate/hooks.py:165 in new_forward │
│ │
│ 162 │ │ │ with torch.no_grad(): │
│ 163 │ │ │ │ output = old_forward(*args, **kwargs) │
│ 164 │ │ else: │
│ ❱ 165 │ │ │ output = old_forward(*args, **kwargs) │
│ 166 │ │ return module._hf_hook.post_forward(module, output) │
│ 167 │ │
│ 168 │ module.forward = new_forward │
│ │
│ /usr/local/lib/python3.10/dist-packages/transformers/models/rwkv/modeling_rwkv.py:645 in forward │
│ │
│ 642 │ │ return_dict = return_dict if return_dict is not None else self.config.use_return │
│ 643 │ │ │
│ 644 │ │ if self.training == self.layers_are_rescaled: │
│ ❱ 645 │ │ │ self._rescale_layers() │
│ 646 │ │ │
│ 647 │ │ if input_ids is not None and inputs_embeds is not None: │
│ 648 │ │ │ raise ValueError("You cannot specify both input_ids and inputs_embeds at the │
│ │
│ /usr/local/lib/python3.10/dist-packages/transformers/models/rwkv/modeling_rwkv.py:712 in │
│ _rescale_layers │
│ │
│ 709 │ │ │ │ │ │ block.attention.output.weight.mul_(2 ** int(block_id // self.con │
│ 710 │ │ │ │ │ │ block.feed_forward.value.weight.mul_(2 ** int(block_id // self.c │
│ 711 │ │ │ │ │ else: │
│ ❱ 712 │ │ │ │ │ │ block.attention.output.weight.div_(2 ** int(block_id // self.con │
│ 713 │ │ │ │ │ │ block.feed_forward.value.weight.div_(2 ** int(block_id // self.c │
│ 714 │ │ │
│ 715 │ │ self.layers_are_rescaled = not self.training │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
RuntimeError: result type Float can't be cast to the desired output type Char
```
I tried many ways to address this, but nothing worked.
However, when I initialize the model with:
```model = AutoModelForCausalLM.from_pretrained(model_id)```
...i.e. without loading it in 8-bit and without the other args, it works fine.
So I suspect there is a bug in the RWKV modeling code that prevents generation when the model is loaded in 8-bit with the args shown in the code snippets above.
Please correct me if I'm wrong, or fix it as soon as possible.
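For reference, here is a minimal, hypothetical sketch of the failure mode shown in the traceback above. The attribute names (`blocks`, `rescale_every`) are assumptions based on what the traceback prints, and the int8 guard only illustrates where the in-place division breaks; it is not the actual fix.
```python
# Hypothetical illustration (not the library's fix): the rescaling loop divides block
# weights in place, which works for float weights but fails for int8 ("Char") weights
# created by load_in_8bit=True.
import torch

def rescale_blocks(blocks, rescale_every):
    # `blocks` and `rescale_every` mirror the names visible in the traceback above.
    if rescale_every == 0:
        return
    with torch.no_grad():
        for block_id, block in enumerate(blocks):
            factor = 2 ** int(block_id // rescale_every)
            for weight in (block.attention.output.weight, block.feed_forward.value.weight):
                if weight.dtype == torch.int8:
                    # weight.div_(factor) would raise:
                    # RuntimeError: result type Float can't be cast to the desired output type Char
                    continue
                weight.div_(factor)
```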
Who Can Help?
@ArthurZucker @gante @sgugger
### Expected behavior
I expected it to generate text as it did before!
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23467/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
}
|
https://api.github.com/repos/huggingface/transformers/issues/23467/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23466
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23466/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23466/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23466/events
|
https://github.com/huggingface/transformers/issues/23466
| 1,716,975,879 |
I_kwDOCUB6oc5mVvkH
| 23,466 |
RuntimeError: result type Float can't be cast to the desired output type Char
|
{
"login": "TheFaheem",
"id": 104909089,
"node_id": "U_kgDOBkDJIQ",
"avatar_url": "https://avatars.githubusercontent.com/u/104909089?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TheFaheem",
"html_url": "https://github.com/TheFaheem",
"followers_url": "https://api.github.com/users/TheFaheem/followers",
"following_url": "https://api.github.com/users/TheFaheem/following{/other_user}",
"gists_url": "https://api.github.com/users/TheFaheem/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TheFaheem/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TheFaheem/subscriptions",
"organizations_url": "https://api.github.com/users/TheFaheem/orgs",
"repos_url": "https://api.github.com/users/TheFaheem/repos",
"events_url": "https://api.github.com/users/TheFaheem/events{/privacy}",
"received_events_url": "https://api.github.com/users/TheFaheem/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Closing as it's exact copy of #23467 ",
"Sorry Happened By mistake"
] | 1,684 | 1,684 | 1,684 |
NONE
| null |
### System Info
My System Info:
- `transformers` version: 4.30.0.dev0
- Platform: Linux-5.15.107+-x86_64-with-glibc2.31
- Python version: 3.10.11
- Huggingface_hub version: 0.14.1
- Safetensors version: not installed
- PyTorch version (GPU?): 2.0.1+cu118 (True)
- Tensorflow version (GPU?): 2.12.0 (True)
- Flax version (CPU?/GPU?/TPU?): 0.6.9 (gpu)
- Jax version: 0.4.8
- JaxLib version: 0.4.7
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I ran the official code example:
```
from transformers import AutoTokenizer, AutoModelForCausalLM, GenerationConfig
import torch
model_id = "RWKV/rwkv-raven-1b5"
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model.eval()
if torch.__version__ >= "2":
    torch.compile(model)
generation_config = GenerationConfig(max_new_tokens=1000, temperature=0.7, top_k=35, top_p=0.90, pad_token_id= tokenizer.eos_token_id)
question = "Write me a Poem About NLP"
prompt = f"### Instruction: {question}\n### Response:"
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate((inputs["input_ids"]), generation_config=generation_config)
print(output)
```
It works fine!
I then ran the same code with some additional args passed to from_pretrained() when initializing the model:
```
from transformers import AutoTokenizer, AutoModelForCausalLM, GenerationConfig
import torch
model_id = "RWKV/rwkv-raven-1b5"
model = AutoModelForCausalLM.from_pretrained(model_id, low_cpu_mem_usage=True, load_in_8bit=True, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_id)
model.eval()
if torch.__version__ >= "2":
    torch.compile(model)
generation_config = GenerationConfig(max_new_tokens=1000, temperature=0.7, top_k=35, top_p=0.90, pad_token_id= tokenizer.eos_token_id)
question = "Tell me How RWKV RNNs are Parallelizable"
prompt = f"### Instruction: {question}\n### Response:"
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate((inputs["input_ids"]), generation_config=generation_config)
print(output)
```
But when I ran this code, I got the following error:
```
/usr/local/lib/python3.10/dist-packages/transformers/generation/utils.py:1448: UserWarning: You are calling .generate() with the `input_ids` being on a device type different than your model's device. `input_ids` is on cpu, whereas the model is on cuda. You may experience unexpected behaviors or slower generation. Please make sure that you have put `input_ids` to the correct device by calling for example input_ids = input_ids.to('cuda') before running `.generate()`.
warnings.warn(
╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ in <cell line: 7>:7 │
│ │
│ /usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py:115 in decorate_context │
│ │
│ 112 │ @functools.wraps(func) │
│ 113 │ def decorate_context(*args, **kwargs): │
│ 114 │ │ with ctx_factory(): │
│ ❱ 115 │ │ │ return func(*args, **kwargs) │
│ 116 │ │
│ 117 │ return decorate_context │
│ 118 │
│ │
│ /usr/local/lib/python3.10/dist-packages/transformers/generation/utils.py:1518 in generate │
│ │
│ 1515 │ │ │ │ ) │
│ 1516 │ │ │ │
│ 1517 │ │ │ # 11. run greedy search │
│ ❱ 1518 │ │ │ return self.greedy_search( │
│ 1519 │ │ │ │ input_ids, │
│ 1520 │ │ │ │ logits_processor=logits_processor, │
│ 1521 │ │ │ │ stopping_criteria=stopping_criteria, │
│ │
│ /usr/local/lib/python3.10/dist-packages/transformers/generation/utils.py:2335 in greedy_search │
│ │
│ 2332 │ │ │ model_inputs = self.prepare_inputs_for_generation(input_ids, **model_kwargs) │
│ 2333 │ │ │ │
│ 2334 │ │ │ # forward pass to get next token │
│ ❱ 2335 │ │ │ outputs = self( │
│ 2336 │ │ │ │ **model_inputs, │
│ 2337 │ │ │ │ return_dict=True, │
│ 2338 │ │ │ │ output_attentions=output_attentions, │
│ │
│ /usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py:1501 in _call_impl │
│ │
│ 1498 │ │ if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks │
│ 1499 │ │ │ │ or _global_backward_pre_hooks or _global_backward_hooks │
│ 1500 │ │ │ │ or _global_forward_hooks or _global_forward_pre_hooks): │
│ ❱ 1501 │ │ │ return forward_call(*args, **kwargs) │
│ 1502 │ │ # Do not call functions when jit is used │
│ 1503 │ │ full_backward_hooks, non_full_backward_hooks = [], [] │
│ 1504 │ │ backward_pre_hooks = [] │
│ │
│ /usr/local/lib/python3.10/dist-packages/accelerate/hooks.py:165 in new_forward │
│ │
│ 162 │ │ │ with torch.no_grad(): │
│ 163 │ │ │ │ output = old_forward(*args, **kwargs) │
│ 164 │ │ else: │
│ ❱ 165 │ │ │ output = old_forward(*args, **kwargs) │
│ 166 │ │ return module._hf_hook.post_forward(module, output) │
│ 167 │ │
│ 168 │ module.forward = new_forward │
│ │
│ /usr/local/lib/python3.10/dist-packages/transformers/models/rwkv/modeling_rwkv.py:780 in forward │
│ │
│ 777 │ │ """ │
│ 778 │ │ return_dict = return_dict if return_dict is not None else self.config.use_return │
│ 779 │ │ │
│ ❱ 780 │ │ rwkv_outputs = self.rwkv( │
│ 781 │ │ │ input_ids, │
│ 782 │ │ │ inputs_embeds=inputs_embeds, │
│ 783 │ │ │ state=state, │
│ │
│ /usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py:1501 in _call_impl │
│ │
│ 1498 │ │ if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks │
│ 1499 │ │ │ │ or _global_backward_pre_hooks or _global_backward_hooks │
│ 1500 │ │ │ │ or _global_forward_hooks or _global_forward_pre_hooks): │
│ ❱ 1501 │ │ │ return forward_call(*args, **kwargs) │
│ 1502 │ │ # Do not call functions when jit is used │
│ 1503 │ │ full_backward_hooks, non_full_backward_hooks = [], [] │
│ 1504 │ │ backward_pre_hooks = [] │
│ │
│ /usr/local/lib/python3.10/dist-packages/accelerate/hooks.py:165 in new_forward │
│ │
│ 162 │ │ │ with torch.no_grad(): │
│ 163 │ │ │ │ output = old_forward(*args, **kwargs) │
│ 164 │ │ else: │
│ ❱ 165 │ │ │ output = old_forward(*args, **kwargs) │
│ 166 │ │ return module._hf_hook.post_forward(module, output) │
│ 167 │ │
│ 168 │ module.forward = new_forward │
│ │
│ /usr/local/lib/python3.10/dist-packages/transformers/models/rwkv/modeling_rwkv.py:645 in forward │
│ │
│ 642 │ │ return_dict = return_dict if return_dict is not None else self.config.use_return │
│ 643 │ │ │
│ 644 │ │ if self.training == self.layers_are_rescaled: │
│ ❱ 645 │ │ │ self._rescale_layers() │
│ 646 │ │ │
│ 647 │ │ if input_ids is not None and inputs_embeds is not None: │
│ 648 │ │ │ raise ValueError("You cannot specify both input_ids and inputs_embeds at the │
│ │
│ /usr/local/lib/python3.10/dist-packages/transformers/models/rwkv/modeling_rwkv.py:712 in │
│ _rescale_layers │
│ │
│ 709 │ │ │ │ │ │ block.attention.output.weight.mul_(2 ** int(block_id // self.con │
│ 710 │ │ │ │ │ │ block.feed_forward.value.weight.mul_(2 ** int(block_id // self.c │
│ 711 │ │ │ │ │ else: │
│ ❱ 712 │ │ │ │ │ │ block.attention.output.weight.div_(2 ** int(block_id // self.con │
│ 713 │ │ │ │ │ │ block.feed_forward.value.weight.div_(2 ** int(block_id // self.c │
│ 714 │ │ │
│ 715 │ │ self.layers_are_rescaled = not self.training │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
RuntimeError: result type Float can't be cast to the desired output type Char
```
I tried many ways to address this, but nothing worked.
However, when I initialize the model with:
```model = AutoModelForCausalLM.from_pretrained(model_id)```
...i.e. without loading it in 8-bit and without the other args, it works fine.
So I suspect there is a bug in the RWKV modeling code that prevents generation when the model is loaded in 8-bit with the args shown in the code snippets above.
Please correct me if I'm wrong, or fix it as soon as possible.
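Separately from the crash, the warning at the top of the log points out that `input_ids` is on CPU while the model is on GPU. A small, hedged tweak to the script (assuming a CUDA device is available) would be:
```python
# Move the tokenized inputs onto the model's device, as the warning suggests.
# Note: this only addresses the device-mismatch warning, not the int8 rescaling error above.
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(inputs["input_ids"], generation_config=generation_config)
```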
Who Can Help?
@ArthurZucker @gante @sgugger
### Expected behavior
I expected it to generate text as it did before!
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23466/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23466/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23465
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23465/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23465/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23465/events
|
https://github.com/huggingface/transformers/pull/23465
| 1,716,956,904 |
PR_kwDOCUB6oc5Q4HoA
| 23,465 |
Fix confusing `transformers` installation in CI
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23465). All of your documentation changes will be reflected on that endpoint."
] | 1,684 | 1,684 | 1,684 |
COLLABORATOR
| null |
# What does this PR do?
As mentioned in [this comment](https://github.com/huggingface/transformers/pull/23277#issuecomment-1544080975), the `transformers` used in CI runs is not the same as the one installed during the docker image build.
Furthermore, for the past CI, we don't update the docker image daily (there is no need to update the pinned third-party-package environment), but the `transformers` code under test is the latest one. We get
```bash
E ImportError: cannot import name 'HfDoctestModule' from 'transformers.testing_utils'
```
because the installed `transformers` is the one from the docker image build (months ago), which did not have `HfDoctestModule` at the time.
To avoid such confusing failures in the future, it's better to install `transformers` again in editable mode in the workflow files.
I'm not changing the daily CI workflow file in this PR yet - I'd rather wait until this PR shows that the change actually fixes the issue.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23465/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23465/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23465",
"html_url": "https://github.com/huggingface/transformers/pull/23465",
"diff_url": "https://github.com/huggingface/transformers/pull/23465.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23465.patch",
"merged_at": 1684527019000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23464
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23464/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23464/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23464/events
|
https://github.com/huggingface/transformers/issues/23464
| 1,716,902,604 |
I_kwDOCUB6oc5mVdrM
| 23,464 |
Module image_processing_videomae can't be found
|
{
"login": "TheOrange-cmd",
"id": 35196043,
"node_id": "MDQ6VXNlcjM1MTk2MDQz",
"avatar_url": "https://avatars.githubusercontent.com/u/35196043?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TheOrange-cmd",
"html_url": "https://github.com/TheOrange-cmd",
"followers_url": "https://api.github.com/users/TheOrange-cmd/followers",
"following_url": "https://api.github.com/users/TheOrange-cmd/following{/other_user}",
"gists_url": "https://api.github.com/users/TheOrange-cmd/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TheOrange-cmd/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TheOrange-cmd/subscriptions",
"organizations_url": "https://api.github.com/users/TheOrange-cmd/orgs",
"repos_url": "https://api.github.com/users/TheOrange-cmd/repos",
"events_url": "https://api.github.com/users/TheOrange-cmd/events{/privacy}",
"received_events_url": "https://api.github.com/users/TheOrange-cmd/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @TheOrange-cmd, thanks for reporting this issue. \r\n\r\nIs seems the error is arising with an import of the PIL library. Could you run `pip list | grep Pillow` and share which version of `Pillow` is installed? ",
"Thank you for the quick response. I get some errors with grep; I see it's meant for Linux? I tried installing it but it still doesn't work. If I just run pip list or pip freeze or conda list Pillow I get version 9.4.0 for each command. Conda specifies I have build py38hd77b12b_0. ",
"Hi @TheOrange-cmd, \r\n\r\nYes, sorry, `grep` is a linux command - sometimes I'm so used to writing it out I forget to check about the OS. \r\n\r\nThanks for giving the Pillow version info. It's the same version as I'm running locally, so 9.4.0 should work. Could you try upgrading the running version of `transformers` to the most recent release - 4.29.2? ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"I had this same problem, with the most recent version of ```transformers```. I solved it by starting a new environment."
] | 1,684 | 1,692 | 1,687 |
NONE
| null |
### System Info
- `transformers` version: 4.22.0
- Platform: Windows-10-10.0.19041-SP0
- Python version: 3.8.5
- Huggingface_hub version: 0.14.1
- PyTorch version (GPU?): 1.12.1 (False)
- Tensorflow version (GPU?): 2.10.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
When following the tutorial for the TimeSFormer, I run into an issue importing the relevant modules. This is the tutorial:
https://huggingface.co/docs/transformers/main/tasks/video_classification
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```
from transformers import VideoMAEImageProcessor, VideoMAEForVideoClassification
model_ckpt = "MCG-NJU/videomae-base"
image_processor = VideoMAEImageProcessor.from_pretrained(model_ckpt)
model = VideoMAEForVideoClassification.from_pretrained(
    model_ckpt,
    label2id=label2id,
    id2label=id2label,
    ignore_mismatched_sizes=True,  # provide this in case you're planning to fine-tune an already fine-tuned checkpoint
)
```
The error:
```
ImportError Traceback (most recent call last)
[d:\Fall_Detection\BAP_fall_detection\.conda\lib\site-packages\transformers\utils\import_utils.py](file:///D:/Fall_Detection/BAP_fall_detection/.conda/lib/site-packages/transformers/utils/import_utils.py) in _get_module(self, module_name)
1171 try:
-> 1172 return importlib.import_module("." + module_name, self.__name__)
1173 except Exception as e:
[d:\Fall_Detection\BAP_fall_detection\.conda\lib\importlib\__init__.py](file:///D:/Fall_Detection/BAP_fall_detection/.conda/lib/importlib/__init__.py) in import_module(name, package)
126 level += 1
--> 127 return _bootstrap._gcd_import(name[level:], package, level)
128
[d:\Fall_Detection\BAP_fall_detection\.conda\lib\importlib\_bootstrap.py](file:///D:/Fall_Detection/BAP_fall_detection/.conda/lib/importlib/_bootstrap.py) in _gcd_import(name, package, level)
[d:\Fall_Detection\BAP_fall_detection\.conda\lib\importlib\_bootstrap.py](file:///D:/Fall_Detection/BAP_fall_detection/.conda/lib/importlib/_bootstrap.py) in _find_and_load(name, import_)
[d:\Fall_Detection\BAP_fall_detection\.conda\lib\importlib\_bootstrap.py](file:///D:/Fall_Detection/BAP_fall_detection/.conda/lib/importlib/_bootstrap.py) in _find_and_load_unlocked(name, import_)
[d:\Fall_Detection\BAP_fall_detection\.conda\lib\importlib\_bootstrap.py](file:///D:/Fall_Detection/BAP_fall_detection/.conda/lib/importlib/_bootstrap.py) in _load_unlocked(spec)
[d:\Fall_Detection\BAP_fall_detection\.conda\lib\importlib\_bootstrap_external.py](file:///D:/Fall_Detection/BAP_fall_detection/.conda/lib/importlib/_bootstrap_external.py) in exec_module(self, module)
[d:\Fall_Detection\BAP_fall_detection\.conda\lib\importlib\_bootstrap.py](file:///D:/Fall_Detection/BAP_fall_detection/.conda/lib/importlib/_bootstrap.py) in _call_with_frames_removed(f, *args, **kwds)
[d:\Fall_Detection\BAP_fall_detection\.conda\lib\site-packages\transformers\models\videomae\image_processing_videomae.py](file:///D:/Fall_Detection/BAP_fall_detection/.conda/lib/site-packages/transformers/models/videomae/image_processing_videomae.py) in
21 from ...image_processing_utils import BaseImageProcessor, BatchFeature, get_size_dict
---> 22 from ...image_transforms import (
23 center_crop,
[d:\Fall_Detection\BAP_fall_detection\.conda\lib\site-packages\transformers\image_transforms.py](file:///D:/Fall_Detection/BAP_fall_detection/.conda/lib/site-packages/transformers/image_transforms.py) in
20
---> 21 from .image_utils import (
22 ChannelDimension,
[d:\Fall_Detection\BAP_fall_detection\.conda\lib\site-packages\transformers\image_utils.py](file:///D:/Fall_Detection/BAP_fall_detection/.conda/lib/site-packages/transformers/image_utils.py) in
43 if is_vision_available():
---> 44 import PIL.Image
45 import PIL.ImageOps
[d:\Fall_Detection\BAP_fall_detection\.conda\lib\site-packages\PIL\Image.py](file:///D:/Fall_Detection/BAP_fall_detection/.conda/lib/site-packages/PIL/Image.py) in
99 # and should be considered private and subject to change.
--> 100 from . import _imaging as core
101
ImportError: DLL load failed: The specified module could not be found.
The above exception was the direct cause of the following exception:
RuntimeError Traceback (most recent call last)
[~\AppData\Local\Temp\ipykernel_10824\1632751044.py](https://file+.vscode-resource.vscode-cdn.net/d%3A/Fall_Detection/BAP_fall_detection/~/AppData/Local/Temp/ipykernel_10824/1632751044.py) in
----> 1 from transformers import VideoMAEImageProcessor, VideoMAEForVideoClassification
2
3 model_ckpt = "MCG-NJU/videomae-base"
4 image_processor = VideoMAEImageProcessor.from_pretrained(model_ckpt)
5 model = VideoMAEForVideoClassification.from_pretrained(
[d:\Fall_Detection\BAP_fall_detection\.conda\lib\importlib\_bootstrap.py](file:///D:/Fall_Detection/BAP_fall_detection/.conda/lib/importlib/_bootstrap.py) in _handle_fromlist(module, fromlist, import_, recursive)
[d:\Fall_Detection\BAP_fall_detection\.conda\lib\site-packages\transformers\utils\import_utils.py](file:///D:/Fall_Detection/BAP_fall_detection/.conda/lib/site-packages/transformers/utils/import_utils.py) in __getattr__(self, name)
1161 elif name in self._class_to_module.keys():
1162 module = self._get_module(self._class_to_module[name])
-> 1163 value = getattr(module, name)
1164 else:
1165 raise AttributeError(f"module {self.__name__} has no attribute {name}")
[d:\Fall_Detection\BAP_fall_detection\.conda\lib\site-packages\transformers\utils\import_utils.py](file:///D:/Fall_Detection/BAP_fall_detection/.conda/lib/site-packages/transformers/utils/import_utils.py) in __getattr__(self, name)
1160 value = self._get_module(name)
1161 elif name in self._class_to_module.keys():
-> 1162 module = self._get_module(self._class_to_module[name])
1163 value = getattr(module, name)
1164 else:
[d:\Fall_Detection\BAP_fall_detection\.conda\lib\site-packages\transformers\utils\import_utils.py](file:///D:/Fall_Detection/BAP_fall_detection/.conda/lib/site-packages/transformers/utils/import_utils.py) in _get_module(self, module_name)
1175 f"Failed to import {self.__name__}.{module_name} because of the following error (look up to see its"
1176 f" traceback):\n{e}"
-> 1177 ) from e
1178
1179 def __reduce__(self):
RuntimeError: Failed to import transformers.models.videomae.image_processing_videomae because of the following error (look up to see its traceback):
DLL load failed: The specified module could not be found.
```
When I run transformers-cli env:
```
WARNING:tensorflow:From D:\Program Files\Anaconda\lib\site-packages\transformers\commands\env.py:52: is_gpu_available (from tensorflow.python.framework.test_util) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.config.list_physical_devices('GPU')` instead.
2023-05-19 10:53:04.292349: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
```
And the version list I gave above.
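As a quick way to check whether the problem lies in Pillow itself rather than in `transformers` (an assumption based on the traceback ending inside `PIL/Image.py`), one can try importing Pillow directly:
```python
# If this raises the same "DLL load failed" error, the broken piece is the Pillow
# install (its compiled _imaging extension), not transformers or the VideoMAE classes.
import PIL
import PIL.Image

print(PIL.__version__)
```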
### Expected behavior
According to the tutorial, the output should be:
Some weights of the model checkpoint at MCG-NJU/videomae-base were not used when initializing VideoMAEForVideoClassification: [..., 'decoder.decoder_layers.1.attention.output.dense.bias', 'decoder.decoder_layers.2.attention.attention.key.weight']
- This IS expected if you are initializing VideoMAEForVideoClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing VideoMAEForVideoClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of VideoMAEForVideoClassification were not initialized from the model checkpoint at MCG-NJU/videomae-base and are newly initialized: ['classifier.bias', 'classifier.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23464/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23464/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23463
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23463/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23463/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23463/events
|
https://github.com/huggingface/transformers/pull/23463
| 1,716,865,674 |
PR_kwDOCUB6oc5Q3z7f
| 23,463 |
Fix `transformers`' DeepSpeed CI job
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Going to merge. cc @stas00 so you know now we are using CUDA 118 on CI (as you mentioned/requested once before)",
"super! thank you for the heads up, @ydshieh!"
] | 1,684 | 1,684 | 1,684 |
COLLABORATOR
| null |
# What does this PR do?
The new (cu118) base docker image comes with `transformer-engine` pre-installed (which wasn't the case in the previous base image). This causes the DeepSpeed CI job to fail right from the start with
```bash
E ImportError: /usr/local/lib/python3.8/dist-packages/transformer_engine_extensions.cpython-38-x86_64-linux-gnu.so: undefined symbol: _ZN3c106detail23torchInternalAssertFailEPKcS2_jS2_RKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE
```
This PR uninstalls `transformer-engine` so @ydshieh won't be breaking bad.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23463/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23463/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23463",
"html_url": "https://github.com/huggingface/transformers/pull/23463",
"diff_url": "https://github.com/huggingface/transformers/pull/23463.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23463.patch",
"merged_at": 1684511407000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23462
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23462/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23462/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23462/events
|
https://github.com/huggingface/transformers/issues/23462
| 1,716,834,890 |
I_kwDOCUB6oc5mVNJK
| 23,462 |
CLIPTextModel gives different results for batched vs unbatched
|
{
"login": "ethansmith2000",
"id": 98723285,
"node_id": "U_kgDOBeJl1Q",
"avatar_url": "https://avatars.githubusercontent.com/u/98723285?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ethansmith2000",
"html_url": "https://github.com/ethansmith2000",
"followers_url": "https://api.github.com/users/ethansmith2000/followers",
"following_url": "https://api.github.com/users/ethansmith2000/following{/other_user}",
"gists_url": "https://api.github.com/users/ethansmith2000/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ethansmith2000/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ethansmith2000/subscriptions",
"organizations_url": "https://api.github.com/users/ethansmith2000/orgs",
"repos_url": "https://api.github.com/users/ethansmith2000/repos",
"events_url": "https://api.github.com/users/ethansmith2000/events{/privacy}",
"received_events_url": "https://api.github.com/users/ethansmith2000/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"hi @ethansmith2000 \r\nHmm I have also experienced few issue like that in the past with text models in half-precision (float16 mainly), I think that it is expected to have a few numerical differences between batched vs unbatched. Can you try different values for `atol` and `rtol`?\r\n\r\n```python\r\nprint(torch.allclose(outputs, batch_outputs, atol=1e-3, rtol=1e-3))\r\n```\r\nI think the highest \"acceptable\" threshold is something around 1e-3 or slightly above (4e-3)",
"Thanks for getting back to me @younesbelkada, if this is expected behavior, then no worries! Would you know if there are any resources that may explain why this is the case?",
"No worries!\r\nFrom now my observations are purely empirical, and based on my personal experience - would you mind trying your experiment with `float32` as well as `blfloat16` ? The `bfloat16` data format has been introduced so that half precision models can have the same training dynamics to avoid overflow issues, so maybe the results will be slightly better. \r\nIf you want to read more about float16/bfloat16 and you are not familiar with it, I would recommend reading [this datatype section](https://huggingface.co/blog/hf-bitsandbytes-integration#common-data-types-used-in-machine-learning) written by @stas00 ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,684 | 1,687 | 1,687 |
NONE
| null |
### System Info
- `transformers` version: 4.28.1
- Platform: Linux-5.15.0-52-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.13.4
- Safetensors version: 0.3.0
- PyTorch version (GPU?): 1.13.1+cu116 (True)
- Tensorflow version (GPU?): 2.11.1 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: N/A
- Using distributed or parallel set-up in script?: N/A
### Who can help?
```
num = 10
prompts = ['hey']*num
inputs = pipe.tokenizer(prompts, return_tensors='pt', padding='max_length', max_length=77).input_ids.to('cuda') # [10,77]
outputs = pipe.text_encoder(inputs).last_hidden_state
batch_outputs = torch.cat([pipe.text_encoder(inputs[i:i+1]).last_hidden_state for i in range(num)],dim=0)
print(torch.all(torch.isclose(outputs, batch_outputs)).item())
```
>> False
I can't figure out the reason for this. Setting num=1 yields True, but for any value >1 it returns False. I've reviewed some of the functions around the CLIP attention and MLP, but nothing stood out at a glance.
<img width="935" alt="Screen Shot 2023-05-19 at 4 08 34 AM" src="https://github.com/huggingface/transformers/assets/98723285/a7f717f9-83c3-4c22-a8b9-5cc5afa4b1d7">
<img width="672" alt="Screen Shot 2023-05-19 at 4 09 03 AM" src="https://github.com/huggingface/transformers/assets/98723285/f0762b27-0739-4399-bdee-96335bbe7d58">
@younesbelkada
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
ran the following code in a notebook
```
import torch
import diffusers
pipe = diffusers.StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5").to("cuda",torch.float16)
num = 10
prompts = ['hey']*num
inputs = pipe.tokenizer(prompts, return_tensors='pt', padding='max_length', max_length=77).input_ids.to('cuda') # [10,77]
outputs = pipe.text_encoder(inputs).last_hidden_state
batch_outputs = torch.cat([pipe.text_encoder(inputs[i:i+1]).last_hidden_state for i in range(num)],dim=0)
print(torch.all(torch.isclose(outputs, batch_outputs)).item())
```
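As a follow-up check along the lines suggested in the comments above, one can repeat the comparison in float32 and with an explicit tolerance; the thresholds below are illustrative, not authoritative:
```python
# Re-run the comparison in full precision with a looser tolerance; small
# batched-vs-unbatched differences are expected in float16.
pipe_fp32 = diffusers.StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5").to("cuda")
outputs_fp32 = pipe_fp32.text_encoder(inputs).last_hidden_state
batch_outputs_fp32 = torch.cat(
    [pipe_fp32.text_encoder(inputs[i : i + 1]).last_hidden_state for i in range(num)], dim=0
)
print(torch.allclose(outputs_fp32, batch_outputs_fp32, atol=1e-3, rtol=1e-3))
```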
### Expected behavior
The batched and unbatched values should not differ, I believe.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23462/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23462/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23461
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23461/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23461/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23461/events
|
https://github.com/huggingface/transformers/issues/23461
| 1,716,821,980 |
I_kwDOCUB6oc5mVJ_c
| 23,461 |
[BUG] `current_segment_id` should increment from largest segment id.
|
{
"login": "jimmysue",
"id": 29350169,
"node_id": "MDQ6VXNlcjI5MzUwMTY5",
"avatar_url": "https://avatars.githubusercontent.com/u/29350169?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jimmysue",
"html_url": "https://github.com/jimmysue",
"followers_url": "https://api.github.com/users/jimmysue/followers",
"following_url": "https://api.github.com/users/jimmysue/following{/other_user}",
"gists_url": "https://api.github.com/users/jimmysue/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jimmysue/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jimmysue/subscriptions",
"organizations_url": "https://api.github.com/users/jimmysue/orgs",
"repos_url": "https://api.github.com/users/jimmysue/repos",
"events_url": "https://api.github.com/users/jimmysue/events{/privacy}",
"received_events_url": "https://api.github.com/users/jimmysue/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false |
{
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
}
] |
[
"Hi @jimmysue, thanks for raising this issue! \r\n\r\nFor the segment ids, they increment from 0 -> total number of segments for each image. This is expected for segmentation like panoptic, as we want each individual instance to have its own segment ID assigned e.g. segment 0 -> car 0, segment 1 -> car 1, segment 3 -> sky etc. \r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,684 | 1,693 | 1,693 |
NONE
| null |
The code below increments the segment id from the previous iteration's segment id, which is wrong. The correct segment id should be incremented from the largest segment id assigned so far.
https://github.com/huggingface/transformers/blob/a7920065f2cfd2549b838f9a30afd7c265fcdd88/src/transformers/models/mask2former/image_processing_mask2former.py#L234-L237
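A minimal sketch of what is being proposed (hypothetical variable names, not the library's implementation): derive the next id from the largest segment id assigned so far rather than from the previous iteration's value.
```python
# `segments` is assumed to be the list of segment dicts built so far, each carrying an "id" key.
current_segment_id = max((segment["id"] for segment in segments), default=-1) + 1
```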
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23461/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23461/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23460
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23460/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23460/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23460/events
|
https://github.com/huggingface/transformers/pull/23460
| 1,716,748,309 |
PR_kwDOCUB6oc5Q3axe
| 23,460 |
Add InstructBLIP
|
{
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thank you for your contribution! I noticed a potential problem with this open PR. It seems that the InstructBLIP processor is missing the QformerTokenizer compared to the BLIP2Processor. ",
"Thanks for your review. Updates:\r\n\r\n- all `autocast` logic was removed, turns out the implementation returns the same exact logits as the original implementation when also using `float32` for the original implementation. However, we may need to think about supporting various dtypes of building blocks of a model, cause if you'd do `from_pretrained(\"...\", dtype=torch.float16\")`, that would break for the Flan-T5 checkpoints, which require `bfloat16`. It would be nice to provide the possibility to load the vision encoder in `float16` and the language model in `bfloat16`.\r\n- The `InstructBlipProcessor` is a bit different than other processors in the sense that it consists of 1 image processor and 2 tokenizers (one for the language model, one for the Q-Former). I've included logic to save the Q-Former tokenizer files in a separate folder on the hub as can be seen [here](https://huggingface.co/nielsr/instructblip-vicuna-7b/tree/main), and had to overwrite the `from_pretrained` and `save_pretrained` methods to make this work. I know that this logic may need to be addressed in a separate PR.",
"Will the converted weights be hosted on the model hub like blip-2?",
"All checkpoints are transferred: https://huggingface.co/models?other=instructblip.\r\n\r\nFeel free to merge the PR.\r\n\r\nThe only thing left is uploading fast tokenizer files for the Vicuna-based checkpoints, but that can only be done once https://github.com/huggingface/transformers/issues/23889 is fixed. Currently the fast tokenizer is created on-the-fly based on the slow tokenizer files when loading from the hub.\r\n\r\nUpdate: that's now also done, so it's entirely ready",
"@amyeroberts Could you have a final look and merge if you are happy?",
"> There's InstructBlipTextModelTester but no InstructBlipTextModelTest \r\n\r\nIn general, I would say yes to have 1-1 correspondence. But I don't want to make it strict if it doesn't really bring anything valuable.\r\n\r\nThe pipeline testing script would be easier if we have such correspondence, but since I was able to manage BLIP2 already, and this test file here is similar to BLIP2, I think it's fine.\r\n\r\n> and some tests for InstructBlipModel are skipped because they're run in individual model tests.\r\n\r\nIt's same as CLIP test file, so it's OK :-)\r\n\r\n",
"@ydshieh Thanks for reviewing & info about the tests! \r\n\r\n> >and some tests for InstructBlipModel are skipped because they're run in individual model tests.\r\n> It's same as CLIP test file, so it's OK :-)\r\n\r\nAh, sorry, I wasn't clear. What I meant was: if tests are skipped with the reason of being already tested in individual model tests, don't we need the modular tests classes implemented i.e. `InstructBlipTextModelTest`? ",
"> Ah, sorry, I wasn't clear. What I meant was: if tests are skipped with the reason of being already tested in individual model tests, don't we need the modular tests classes implemented i.e. InstructBlipTextModelTest?\r\n\r\nI agree (was thinking the same but my mind is lost in my reply).\r\n\r\n@NielsRogge I will let you to express why there is no text model test class :-), which is same as in BLIP2.\r\n\r\nWell, after looking a bit, the text part is not a fixed model class\r\n```\r\n if config.use_decoder_only_language_model:\r\n language_model = AutoModelForCausalLM.from_config(config.text_config)\r\n else:\r\n language_model = AutoModelForSeq2SeqLM.from_config(config.text_config)\r\n```\r\nI think that's the main reason why we don't have the test for that part.\r\n",
"Hi, will this land soon? I would love to try out this model. Thanks!",
"Thanks @amyeroberts for your review, there was a bug with `LlamaTokenizerFast` that has now been fixed, now the absolute tolerance is much lower (1e-4 and 1e-5).\r\n\r\nI've removed `InstructBlipModel` from this PR as that was copied from `Blip2Model` using the CookieCutter template. The latter was added in this PR: #21817. However I'm not sure why the latter got approved, cause it's not really in lign with the design of the library, meaning that `xxxModel` are models not including any head on top and not accepting a `labels` argument. However `Blip2Model` seems like an entire copy of `Blip2ForConditionalGeneration`, which seems odd to me.",
"Do the prompt need further packaging when inference? For example, BLIP2 use \"Question: {prompt}? Answer: \" as prompt. And which type of prompt be used in InstructBLIP? Or we only use question to ask the model?",
"@NielsRogge It appears in the current diff that there a some changes unrelated to this PR? Could you rebase to sync up with `main`? Could you also respond to the questions in the PR review instead of just marking as resolved? ",
"Well 💚 ",
"Merge it now as 🟢 is approved.",
"Hi @zdxff there's no specific prompt being used for InstructBLIP. You can just ask it questions like \"What is unusual about this image?\"",
"Will work on the 8bit / 4bit integration ASAP !\r\n\r\nEDIT: here you go https://github.com/huggingface/transformers/pull/24488 "
] | 1,684 | 1,687 | 1,687 |
CONTRIBUTOR
| null |
# What does this PR do?
This PR adds [InstructBLIP](https://github.com/salesforce/LAVIS/tree/main/projects/instructblip), a visual instruction tuned version of [BLIP-2](https://huggingface.co/docs/transformers/main/model_doc/blip-2).
It's a bit like an open-source multimodal GPT-4, leveraging Flan-T5 and Vicuna pre-trained checkpoints.
Basic usage is as follows:
```
from transformers import InstructBlipProcessor, InstructBlipForConditionalGeneration
import torch
from PIL import Image
import requests
model = InstructBlipForConditionalGeneration.from_pretrained("...")
processor = InstructBlipProcessor.from_pretrained("...")
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)
url = "https://raw.githubusercontent.com/salesforce/LAVIS/main/docs/_static/Confusing-Pictures.jpg"
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")
prompt = "What is unusual about this image?"
inputs = processor(images=image, text=prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    do_sample=False,
    num_beams=1,
    max_length=256,
    min_length=1,
    top_p=0.9,
    repetition_penalty=1.5,
    length_penalty=1.0,
    temperature=1,
)
generated_text = processor.batch_decode(outputs, skip_special_tokens=True)[0].strip()
print(generated_text)
```
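On the dtype point in the to-do list below, a hedged loading example (the checkpoint id stays elided as above, and the `torch_dtype` choice is only illustrative; per the discussion, the Flan-T5-based checkpoints reportedly need `bfloat16`, while loading submodules in different dtypes is still an open question):
```python
# Load the whole model in bfloat16; mixing dtypes per submodule (e.g. float16 vision
# encoder + bfloat16 language model) is not shown here and is still being discussed.
model = InstructBlipForConditionalGeneration.from_pretrained("...", torch_dtype=torch.bfloat16).to(device)
```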
To do:
- [x] discuss whether to integrate the `QFormerTokenizer` into the processor
- [x] integration tests
- [x] figure out the best way to handle the various dtypes of the vision encoder and language model
Nice to haves:
- [ ] doc tests
- [ ] int8 support (cc @younesbelkada)
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23460/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23460/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23460",
"html_url": "https://github.com/huggingface/transformers/pull/23460",
"diff_url": "https://github.com/huggingface/transformers/pull/23460.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23460.patch",
"merged_at": 1687771438000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23459
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23459/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23459/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23459/events
|
https://github.com/huggingface/transformers/issues/23459
| 1,716,733,008 |
I_kwDOCUB6oc5mU0RQ
| 23,459 |
”never_split“ not working on BertTokenizer
|
{
"login": "lllyyyqqq",
"id": 48902561,
"node_id": "MDQ6VXNlcjQ4OTAyNTYx",
"avatar_url": "https://avatars.githubusercontent.com/u/48902561?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lllyyyqqq",
"html_url": "https://github.com/lllyyyqqq",
"followers_url": "https://api.github.com/users/lllyyyqqq/followers",
"following_url": "https://api.github.com/users/lllyyyqqq/following{/other_user}",
"gists_url": "https://api.github.com/users/lllyyyqqq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lllyyyqqq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lllyyyqqq/subscriptions",
"organizations_url": "https://api.github.com/users/lllyyyqqq/orgs",
"repos_url": "https://api.github.com/users/lllyyyqqq/repos",
"events_url": "https://api.github.com/users/lllyyyqqq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lllyyyqqq/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1834056635,
"node_id": "MDU6TGFiZWwxODM0MDU2NjM1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Tokenization",
"name": "Core: Tokenization",
"color": "FF4446",
"default": false,
"description": "Internals of the library; Tokenization."
}
] |
closed
| false | null |
[] |
[
"cc @ArthurZucker @younesbelkada ",
"The '[' or ']' in BertTokenizer is punctuation, it will be split at first. And the `outline` or `[outline]` is not in vocab, its will be set UNK. It doesn't seem to make sense anymore.\r\nLook the code: https://github.com/huggingface/transformers/blob/main/src/transformers/models/bert/tokenization_bert.py#L446\r\n\r\n",
"> The '[' or ']' in BertTokenizer is punctuation, it will be split at first. And the `outline` or `[outline]` is not in vocab, its will be set UNK. It doesn't seem to make sense anymore. Look the code: https://github.com/huggingface/transformers/blob/main/src/transformers/models/bert/tokenization_bert.py#L446 \r\n\r\nThanks for replying. As stated before, I am using my own vocab, and ’[outline]‘ is in it,\r\n\r\ntokenizer = BertTokenizer.from_pretrained(my_vocab_path, never_split='[outline]')\r\nprint(tokenizer.convert_tokens_to_ids('[outline]'))\r\nprint(tokenizer.convert_tokens_to_ids('。'))\r\nprint(tokenizer.tokenize('。[outline]'))\r\n\r\n\r\n",
"Hey, reading the doc for the `BertTokenizer`, you should be using the `do_basic_tokenize=True` argument, as mentioned [here](https://github.com/ArthurZucker/transformers/blob/f732a643ab47a324405dc583532bbbfc45e2d8dc/src/transformers/models/bert/tokenization_bert.py#L153). ",
"> Hey, reading the doc for the `BertTokenizer`, you should be using the `do_basic_tokenize=True` argument, as mentioned [here](https://github.com/ArthurZucker/transformers/blob/f732a643ab47a324405dc583532bbbfc45e2d8dc/src/transformers/models/bert/tokenization_bert.py#L153).\r\n\r\nYour link is broken, it says '404 - page not found'?\r\nPlus, `do_basic_tokenize=True` is default setting. Even if I add it intentionally, the result stays the same.\r\n\r\ntokenizer = BertTokenizer.from_pretrained(my_vocab_path, never_split=['[outline]'], do_basic_tokenize=True)\r\nprint(tokenizer.tokenize('。[outline]')) # ['。', '[', 'out', '##line', ']']\r\n\r\nCorrect me if I do anything wrong.",
"Sorry, anyway the argument was set to `True` by default so that's not the problem. \r\nLet's me investigate, in the mean time doing `tokenizer.add_token(\"[outline]\", special_token = True)`\" should (I think) prevent it from being split",
"( the doc mentions : \r\n```python\r\n never_split (`List[str]`, *optional*)\r\n Kept for backward compatibility purposes. Now implemented directly at the base class level (see\r\n [`PreTrainedTokenizer.tokenize`]) List of token not to split.\r\n```",
"The best solution is to add the token to the list of special tokens using the `add_token` method",
"Yeah, add it as special_token does take care of the splitting problem. But in the latter process, I will decode with argument `skip_special_tokens=True`. Then the token will be skipped, while I don't want it be. For now, I add it to the special token list, but I still suggest fixing the `never_split` argument.\r\n",
"Then that means that the token that you want to add is not `special`. I think that if you add it without the `special_token` set to `True` it should not be spilt no? ",
"Without `special_token` set to True, it will be splitted.",
"No it won't : \r\n```python \r\ntokenizer = AutoTokenizer.from_pretrained(\"bert-base-uncased\", use_fast=False)\r\ntokenizer.add_tokens(\"[outline]\")\r\ntokenizer.added_tokens_encoder\r\n>>> {'[outline]': 30522}\r\n\r\ntokenizer.encode(\"[outline]\")\r\n>>> [101, 30522, 102]\r\ntokenizer.decode(tokenizer.encode(\"[outline]\"))\r\n>>> '[CLS] [outline] [SEP]'\r\nprint(tokenizer.tokenize(\". [outline]\"))\r\n>>> ['.', '[outline]']\r\n\r\ntokenizer.decode(tokenizer.encode(\". [outline]\"), skip_special_tokens=True)\r\n>>> '. [outline]'",
"In your case, it won't. But I am using a different vocab.txt, it splits.",
"Seems like `'[outline]'` will not be added anyway, since it's already in the vocab.",
"I don't understand. You have a very specific usage, where you don't want to split `[outline]` that is already in your vocab. \r\nThe basic tokenizer works as expected: `tokenizer.basic_tokenizer.tokenize(\"[outline]\")` will not split it. \r\nWhen you are calling `tokenize` on the `BertTokenizerClass` the `_tokenize` function is then called, which relies on the `all_special_ids`. That means that the token should be added to both lists. \r\n```python \r\ntokenizer = AutoTokenizer.from_pretrained(\"bert-base-uncased\", use_fast=False, never_split= [\"outline\"])\r\ntokenizer.add_tokens(\"[outline]\")\r\n```\r\nI am guessing that this should work\r\n",
"Edit: I manually added \"[outline]\" to my vocab and it worked for both the solution I gave you",
"Unfortunately, it still doesn't work on my vocab. I think it strictly related to the vocab. So far, only adding it to the special tokens works for me. \r\nAlso, shat I posted before is basic tokenizer split it, `tokenizer.basic_tokenizer.tokenize(\"[outline]\")` splits it into` '[', 'outline', ']'`. The tokenizer then send the split tokens to do Wordpiece instead of fix it to the origin `'[outline]'`. I think that may be the reason. ",
"[vocab.txt](https://github.com/huggingface/transformers/files/11564244/vocab.txt)\r\nHere is my vocab, you can try on it.",
"I tried loading a tokenizer using your vocabulary and I cannot reproduce your issue.\r\nTry downloading the latests `transformer` version!",
"Why......\r\nI've updated transformers to 4.29.2, still the same result....\r\nhere is my code\r\n\r\n```python\r\ntokenizer = BertTokenizer.from_pretrained('../base_model/vocab.txt', never_split= [\"[outline]\"])\r\ntokenizer.add_tokens(\"[outline]\")\r\nprint(tokenizer.tokenize(\"。[outline]\"))\r\n# ['。', '[', 'out', '##line', ']']\r\n``` \r\n",
"Can you try `tokenizer = BertTokenizer.from_pretrained('../base_model', never_split= [\"[outline]\"])`\r\nAlso I would suggest you create a colab , this will make sure that your cache is not messing with this. ",
"Here is the Colab result:\r\n\r\n",
"can you share a link to the colab, I'll try to reproduce and modify a copy 😉 \r\n",
"Also you did not add the token using `add_token(..., special_token = False)`",
"Another solution is to initialise the tokenizer using `...from_pretrained( path, additional_special_tokens = [\"[outline]\"]) ` ",
"https://colab.research.google.com/drive/1EStD5K_lQM0-PgMUQ8z273TzAUcgY2IV?usp=sharing\r\nYou need to roll down to the bottom to see the code, `add_token(...)` already added.\r\n\r\n`additional_special_tokens` add `[outline]` into special tokens too, so it works fine. But it still meets the `skip_special_token` problem. Anyway, this issue is about 'never_split' argument not working, so let's focus on this.",
"Thanks a lot. \r\nIndeed the token is not added, because in the `_add_token` a check prevent it to be added if it is already in the vocab. \r\nWorkaround: \r\n```python \r\ntokenizer.added_tokens_encoder.update({\"[outline]\":85})\r\ntokenizer.added_tokens_decoder.update({85:\"[outline]\"})\r\ntokenizer.unique_no_split_tokens = sorted(set(tokenizer.unique_no_split_tokens).union({\"[outline]\"})) \r\ntokenizer._create_trie(tokenizer.unique_no_split_tokens)\r\n```\r\nIts is not really elegant indeed. Also adding a token means that whether or not it is in the vocab, we want it to be in the added tokens, so I think it makes sense to add it, even if it exists. WDYT @Narsil \r\nedit: I think it comes down to a choice, and both could have pos and cons.",
"About never split, the last commit is 4 years old, it has never been touch, and I'd rather we find a way to work around your problem using new code rather than changing legacy code! ",
"Glad we are on the same page in the end. ",
"I am not entirely sure yet whether or not we will support this as the fast ones don't, and my few tests appear to show that it might not be optimal"
] | 1,684 | 1,697 | 1,691 |
NONE
| null |
### System Info
transformers 4.28.1
python 3.8.13
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
- I load BertTokenizer using my own vocab.txt and add _'[outline]'_ to _never_split_, which is included in my vocab.txt. However, _'[outline]'_ still got split. The following is my code:
```python
tokenizer = BertTokenizer.from_pretrained(pretrained_path, never_split=['[outline]'])
input = "。[outline]"
print(tokenizer.tokenize(input))  # ['。', '[', 'out', '##line', ']']
```
- I also do:
`print(tokenizer.basic_tokenizer.tokenize(input)) #['。', '[', 'outline', ']']`
### Expected behavior
When I do:
`tokenizer.tokenize("。[outline]")`
Get the result `['。', '[outline]']`, i.e. the tokens in `never_split` are not split.
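For completeness, a sketch of the workaround the comments converge on — registering `[outline]` as an additional special token instead of relying on `never_split` (the thread notes this interacts with `skip_special_tokens`):

```python
from transformers import BertTokenizer

# Sketch of the workaround discussed in the comments: register "[outline]" as an
# additional special token instead of relying on `never_split`.
pretrained_path = "../base_model/vocab.txt"  # same vocab file as in the thread
tokenizer = BertTokenizer.from_pretrained(
    pretrained_path,
    additional_special_tokens=["[outline]"],
)
print(tokenizer.tokenize("。[outline]"))  # expected: ['。', '[outline]']
```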
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23459/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23459/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23458
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23458/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23458/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23458/events
|
https://github.com/huggingface/transformers/pull/23458
| 1,716,620,510 |
PR_kwDOCUB6oc5Q2-wN
| 23,458 |
Update streamers.py
|
{
"login": "drow931",
"id": 11514434,
"node_id": "MDQ6VXNlcjExNTE0NDM0",
"avatar_url": "https://avatars.githubusercontent.com/u/11514434?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/drow931",
"html_url": "https://github.com/drow931",
"followers_url": "https://api.github.com/users/drow931/followers",
"following_url": "https://api.github.com/users/drow931/following{/other_user}",
"gists_url": "https://api.github.com/users/drow931/gists{/gist_id}",
"starred_url": "https://api.github.com/users/drow931/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/drow931/subscriptions",
"organizations_url": "https://api.github.com/users/drow931/orgs",
"repos_url": "https://api.github.com/users/drow931/repos",
"events_url": "https://api.github.com/users/drow931/events{/privacy}",
"received_events_url": "https://api.github.com/users/drow931/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23458). All of your documentation changes will be reflected on that endpoint."
] | 1,684 | 1,684 | 1,684 |
NONE
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23458/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23458/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23458",
"html_url": "https://github.com/huggingface/transformers/pull/23458",
"diff_url": "https://github.com/huggingface/transformers/pull/23458.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23458.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/23457
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23457/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23457/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23457/events
|
https://github.com/huggingface/transformers/pull/23457
| 1,716,224,574 |
PR_kwDOCUB6oc5Q1qKz
| 23,457 |
TF: standardize `test_model_common_attributes` for language models
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@amyeroberts now with deprecation messages (as opposed to deletions) and pytest asserts 🤗 \r\n@Rocketknight1 ping 🙌 "
] | 1,684 | 1,686 | 1,686 |
MEMBER
| null |
# What does this PR do?
`test_model_common_attributes` was overridden in most TF LMs because:
1. it was not able to handle legacy classes with LM heads (there was not a single set of autoclasses that would catch them)
2. many modern decoder-only LMs do not have a bias in the LM head
This PR adapts the test to account for these 2 cases, and removes a large number of overridden tests.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23457/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23457/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23457",
"html_url": "https://github.com/huggingface/transformers/pull/23457",
"diff_url": "https://github.com/huggingface/transformers/pull/23457.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23457.patch",
"merged_at": 1686675097000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23456
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23456/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23456/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23456/events
|
https://github.com/huggingface/transformers/pull/23456
| 1,716,135,722 |
PR_kwDOCUB6oc5Q1Wu7
| 23,456 |
TF: CTRL with native embedding layers
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@amyeroberts comments addressed 🙌 \r\n\r\nI've double-checked: slow tests are passing for CTRL"
] | 1,684 | 1,686 | 1,686 |
MEMBER
| null |
# What does this PR do?
Follows up on #23436, migrating the embedding layer of CTRL to native Keras embeddings
CTRL needed several related changes, so it deserves a stand-alone PR. This PR:
1. Replaces `TFSharedEmbeddings` by the native Keras layers
2. Fixes resized bias serialization, just like https://github.com/huggingface/transformers/pull/19013 does for BART -- in the process, gets rid of the separate LMHead class, which is outdated and tied to code scheduled for deprecation, and moves functions like `set_bias` to the right place
3. Fixes XLA issues (`prepare_inputs_for_generation` was incomplete)
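For readers unfamiliar with the pattern, weight tying with a native Keras embedding looks roughly like this (toy sizes, illustrative sketch only — not the actual CTRL modeling code):

```python
import tensorflow as tf

# Toy sizes; illustrative only.
vocab_size, hidden_size = 100, 16
wte = tf.keras.layers.Embedding(vocab_size, hidden_size, name="wte")
_ = wte(tf.zeros((1, 1), dtype=tf.int32))  # build the layer so .embeddings exists

hidden_states = tf.random.normal((1, 5, hidden_size))
# LM logits reuse the input embedding matrix (tied weights).
logits = tf.matmul(hidden_states, wte.embeddings, transpose_b=True)
print(logits.shape)  # (1, 5, 100)
```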
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23456/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23456/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23456",
"html_url": "https://github.com/huggingface/transformers/pull/23456",
"diff_url": "https://github.com/huggingface/transformers/pull/23456.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23456.patch",
"merged_at": 1686749942000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23455
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23455/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23455/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23455/events
|
https://github.com/huggingface/transformers/pull/23455
| 1,716,032,383 |
PR_kwDOCUB6oc5Q0_2X
| 23,455 |
Clean up CUDA kernels
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,684 | 1,684 | 1,684 |
COLLABORATOR
| null |
# What does this PR do?
In the PR adding RWKV, a new `kernels` folder was created. This PR does some additional cleanup by moving the CUDA kernels of existing models into this new folder.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23455/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23455/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23455",
"html_url": "https://github.com/huggingface/transformers/pull/23455",
"diff_url": "https://github.com/huggingface/transformers/pull/23455.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23455.patch",
"merged_at": 1684433683000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23454
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23454/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23454/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23454/events
|
https://github.com/huggingface/transformers/pull/23454
| 1,716,001,602 |
PR_kwDOCUB6oc5Q05FD
| 23,454 |
Add an option to log result from the Agent
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,684 | 1,684 | 1,684 |
COLLABORATOR
| null |
# What does this PR do?
This PR makes it possible to customize how results of the agent are displayed (default is using `print`) by adding a `set_stream` method.
Should address #23354
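A rough usage sketch (assumption: `set_stream` accepts any callable that receives the text to display):

```python
import logging
from transformers import HfAgent

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("agent")

agent = HfAgent("https://api-inference.huggingface.co/models/bigcode/starcoder")
# Assumed signature: results are sent to the callable instead of print.
agent.set_stream(logger.info)
agent.run("Translate the following `text` to French.", text="Hello")
```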
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23454/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23454/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23454",
"html_url": "https://github.com/huggingface/transformers/pull/23454",
"diff_url": "https://github.com/huggingface/transformers/pull/23454.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23454.patch",
"merged_at": 1684433210000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23453
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23453/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23453/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23453/events
|
https://github.com/huggingface/transformers/pull/23453
| 1,715,830,967 |
PR_kwDOCUB6oc5Q0T1Y
| 23,453 |
Minor awesome-transformers.md fixes
|
{
"login": "pagarsky",
"id": 36376725,
"node_id": "MDQ6VXNlcjM2Mzc2NzI1",
"avatar_url": "https://avatars.githubusercontent.com/u/36376725?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pagarsky",
"html_url": "https://github.com/pagarsky",
"followers_url": "https://api.github.com/users/pagarsky/followers",
"following_url": "https://api.github.com/users/pagarsky/following{/other_user}",
"gists_url": "https://api.github.com/users/pagarsky/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pagarsky/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pagarsky/subscriptions",
"organizations_url": "https://api.github.com/users/pagarsky/orgs",
"repos_url": "https://api.github.com/users/pagarsky/repos",
"events_url": "https://api.github.com/users/pagarsky/events{/privacy}",
"received_events_url": "https://api.github.com/users/pagarsky/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@LysandreJik sorry for the ping, but can you check the PR?"
] | 1,684 | 1,684 | 1,684 |
CONTRIBUTOR
| null |
# What does this PR do?
Fixes minor typos and updates link to Nebuly (renamed from nebullvm)
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
## Who can review?
@LysandreJik
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23453/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23453/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23453",
"html_url": "https://github.com/huggingface/transformers/pull/23453",
"diff_url": "https://github.com/huggingface/transformers/pull/23453.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23453.patch",
"merged_at": 1684933072000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23452
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23452/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23452/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23452/events
|
https://github.com/huggingface/transformers/pull/23452
| 1,715,830,630 |
PR_kwDOCUB6oc5Q0Twg
| 23,452 |
Properly guard PyTorch stuff
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,684 | 1,684 | 1,684 |
COLLABORATOR
| null |
# What does this PR do?
#23438 accidentally broke main since the objects in `.generation` imported are only available when PyTorch is installed. This PR fixes that.
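The usual guard pattern (a sketch, not the exact diff) conditions the import on `is_torch_available()`:

```python
from transformers.utils import is_torch_available

# Torch-dependent objects from `.generation` are only imported when PyTorch is installed.
if is_torch_available():
    from transformers.generation import GenerationMixin  # example torch-only object
else:
    GenerationMixin = None
```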
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23452/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23452/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23452",
"html_url": "https://github.com/huggingface/transformers/pull/23452",
"diff_url": "https://github.com/huggingface/transformers/pull/23452.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23452.patch",
"merged_at": 1684426637000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23451
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23451/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23451/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23451/events
|
https://github.com/huggingface/transformers/pull/23451
| 1,715,665,314 |
PR_kwDOCUB6oc5QzvgO
| 23,451 |
Less flaky `test_assisted_decoding_matches_greedy_search`
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,684 | 1,684 | 1,684 |
COLLABORATOR
| null |
# What does this PR do?
Make `test_assisted_decoding_matches_greedy_search` less flaky: it now fails only if there is more than 1 failure among 10 runs.
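Illustrative sketch of that tolerance (hypothetical helper, not the real test code):

```python
import random

def check_assisted_matches_greedy():
    # Placeholder for the real comparison between assisted and greedy outputs.
    assert random.random() > 0.05

failures = 0
for _ in range(10):
    try:
        check_assisted_matches_greedy()
    except AssertionError:
        failures += 1

assert failures <= 1, f"{failures}/10 runs diverged"
```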
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23451/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23451/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23451",
"html_url": "https://github.com/huggingface/transformers/pull/23451",
"diff_url": "https://github.com/huggingface/transformers/pull/23451.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23451.patch",
"merged_at": 1684423703000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23450
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23450/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23450/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23450/events
|
https://github.com/huggingface/transformers/pull/23450
| 1,715,545,143 |
PR_kwDOCUB6oc5QzVBV
| 23,450 |
Fix DecisionTransformerConfig doctring
|
{
"login": "joaoareis",
"id": 34096208,
"node_id": "MDQ6VXNlcjM0MDk2MjA4",
"avatar_url": "https://avatars.githubusercontent.com/u/34096208?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joaoareis",
"html_url": "https://github.com/joaoareis",
"followers_url": "https://api.github.com/users/joaoareis/followers",
"following_url": "https://api.github.com/users/joaoareis/following{/other_user}",
"gists_url": "https://api.github.com/users/joaoareis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/joaoareis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joaoareis/subscriptions",
"organizations_url": "https://api.github.com/users/joaoareis/orgs",
"repos_url": "https://api.github.com/users/joaoareis/repos",
"events_url": "https://api.github.com/users/joaoareis/events{/privacy}",
"received_events_url": "https://api.github.com/users/joaoareis/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,684 | 1,684 | 1,684 |
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Currently, the default values for the `n_layer` and `n_head` parameters in the `DecisionTransformerConfig` are set to 3 and 1, respectively. However, the docstring says that the default value is 12 for both. This PR fixes the docstring to reflect the correct default values.
Default values:
https://github.com/huggingface/transformers/blob/f2d2880bbbd7769e12c37471af0b067b379dfc43/src/transformers/models/decision_transformer/configuration_decision_transformer.py#L120-L121
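A quick check of the actual defaults:

```python
from transformers import DecisionTransformerConfig

config = DecisionTransformerConfig()
print(config.n_layer, config.n_head)  # 3 1 — the values the docstring should state
```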
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
CC @edbeeching
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23450/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23450/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23450",
"html_url": "https://github.com/huggingface/transformers/pull/23450",
"diff_url": "https://github.com/huggingface/transformers/pull/23450.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23450.patch",
"merged_at": 1684415231000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23449
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23449/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23449/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23449/events
|
https://github.com/huggingface/transformers/issues/23449
| 1,715,506,548 |
I_kwDOCUB6oc5mQI10
| 23,449 |
Leverage Langchain Prompt Templates
|
{
"login": "surajsharan",
"id": 13249278,
"node_id": "MDQ6VXNlcjEzMjQ5Mjc4",
"avatar_url": "https://avatars.githubusercontent.com/u/13249278?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/surajsharan",
"html_url": "https://github.com/surajsharan",
"followers_url": "https://api.github.com/users/surajsharan/followers",
"following_url": "https://api.github.com/users/surajsharan/following{/other_user}",
"gists_url": "https://api.github.com/users/surajsharan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/surajsharan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/surajsharan/subscriptions",
"organizations_url": "https://api.github.com/users/surajsharan/orgs",
"repos_url": "https://api.github.com/users/surajsharan/repos",
"events_url": "https://api.github.com/users/surajsharan/events{/privacy}",
"received_events_url": "https://api.github.com/users/surajsharan/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @LysandreJik @sgugger ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,684 | 1,687 | 1,687 |
NONE
| null |
By leveraging LangChain's Prompt Templates and Conversation Memory, we could induce a ReAct-style (Thought, Action, Observation) approach in the agent.
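For illustration only (LangChain class names as of mid-2023; a sketch, not a proposed implementation):

```python
from langchain.prompts import PromptTemplate
from langchain.memory import ConversationBufferMemory

react_prompt = PromptTemplate(
    input_variables=["history", "question"],
    template=(
        "You can use tools to answer the question.\n"
        "Conversation so far:\n{history}\n"
        "Question: {question}\nThought:"
    ),
)
memory = ConversationBufferMemory()
print(react_prompt.format(history=memory.buffer, question="Which tool should I call?"))
```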
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23449/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23449/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23448
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23448/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23448/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23448/events
|
https://github.com/huggingface/transformers/pull/23448
| 1,715,468,948 |
PR_kwDOCUB6oc5QzEM8
| 23,448 |
Generate: increase left-padding test atol
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,684 | 1,686 | 1,686 |
MEMBER
| null |
# What does this PR do?
As stated in #23437, `GPTBigCode` was failing the left padding test. Upon further investigation:
1. `GPTBigCode` is left padding compatible, it accepts `position_ids` and uses them correctly
2. The test is only failing on CPU
3. The diff is extremely small...
...so I've raised `atol` to `1e-7` (instead of the default `1e-8`) 👀 With `1e-7`, we can still detect failing cases, like the ones skipped in #23437
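For reference, a toy illustration of what the tolerance change allows (not the actual test code):

```python
import torch

# A tiny CPU-only drift passes at atol=1e-7 but not at the default 1e-8.
unpadded = torch.tensor([1.00000000, 2.00000000], dtype=torch.float64)
left_padded = torch.tensor([1.00000005, 2.00000003], dtype=torch.float64)
print(torch.allclose(unpadded, left_padded, rtol=0.0, atol=1e-8))  # False
print(torch.allclose(unpadded, left_padded, rtol=0.0, atol=1e-7))  # True
```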
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23448/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23448/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23448",
"html_url": "https://github.com/huggingface/transformers/pull/23448",
"diff_url": "https://github.com/huggingface/transformers/pull/23448.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23448.patch",
"merged_at": 1686135417000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23447
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23447/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23447/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23447/events
|
https://github.com/huggingface/transformers/issues/23447
| 1,715,436,269 |
I_kwDOCUB6oc5mP3rt
| 23,447 |
Error while running agent for Image Question Answering
|
{
"login": "pratikkotian04",
"id": 7030183,
"node_id": "MDQ6VXNlcjcwMzAxODM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7030183?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pratikkotian04",
"html_url": "https://github.com/pratikkotian04",
"followers_url": "https://api.github.com/users/pratikkotian04/followers",
"following_url": "https://api.github.com/users/pratikkotian04/following{/other_user}",
"gists_url": "https://api.github.com/users/pratikkotian04/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pratikkotian04/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pratikkotian04/subscriptions",
"organizations_url": "https://api.github.com/users/pratikkotian04/orgs",
"repos_url": "https://api.github.com/users/pratikkotian04/repos",
"events_url": "https://api.github.com/users/pratikkotian04/events{/privacy}",
"received_events_url": "https://api.github.com/users/pratikkotian04/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @pratikkotian04, thanks for raising this issue and for providing so much detail. \r\n\r\nI'm able to reproduce the issue. It seems the error is coming from our image processing library, I'll dig into it.",
"@pratikkotian04 The error is arising because the image has a depth channel i.e. is in RGBA format, and the image processors expect images to have 1 or 3 image channels. This is a know brittleness with the image processing library and a priority for me to address. \r\n\r\nA robust solution is a bit involved so won't be merge in in the next day or two. A quick solution is to convert the image to RGB format before passing to the agent: \r\n\r\n```python\r\nimport requests\r\nfrom PIL import Image\r\nfrom transformers import HfAgent\r\n\r\nimage_url = \"https://raw.githubusercontent.com/vis-nlp/ChartQA/main/ChartQA%20Dataset/val/png/20294671002019.png\"\r\nimage = Image.open(requests.get(image_url, stream=True).raw)\r\nimage = image.convert(\"RGB\")\r\nagent = HfAgent(\"https://api-inference.huggingface.co/models/bigcode/starcoder\")\r\nagent.run(\r\n question = \"Which Country has the highest number of child deaths ?\",\r\n image=image,\r\n task = 'image_qa'\r\n)\r\n```\r\n\r\nI was able to run the above snippet (although the response was: `The answer is england.` 👀 )\r\n\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,684 | 1,690 | 1,690 |
NONE
| null |
### System Info
Code :
import requests
from PIL import Image
import torch
image_url = "https://raw.githubusercontent.com/vis-nlp/ChartQA/main/ChartQA%20Dataset/val/png/20294671002019.png"
image = Image.open(requests.get(image_url, stream=True).raw)
#document = '/content/child_death.png'
agent.run(
question = "Which Country has the highest number of child deaths ?",
image=image,
task = 'image_qa'
)
Error:
==Explanation from the agent==
I will use the following tool: `image_qa` to answer a question about an image.
==Code generated by the agent==
answer = image_qa(image=image, question=question)
print(f"The answer is {answer}.")
==Result==
╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ in <cell line: 8>:8 │
│ │
│ /usr/local/lib/python3.10/dist-packages/transformers/tools/agents.py:323 in run │
│ │
│ 320 │ │ if not return_code: │
│ 321 │ │ │ print("\n\n==Result==") │
│ 322 │ │ │ self.cached_tools = resolve_tools(code, self.toolbox, remote=remote, cached_ │
│ ❱ 323 │ │ │ return evaluate(code, self.cached_tools, state=kwargs.copy()) │
│ 324 │ │ else: │
│ 325 │ │ │ tool_code = get_tool_creation_code(code, self.toolbox, remote=remote) │
│ 326 │ │ │ return f"{tool_code}\n{code}" │
│ │
│ /usr/local/lib/python3.10/dist-packages/transformers/tools/python_interpreter.py:61 in evaluate │
│ │
│ 58 │ result = None │
│ 59 │ for idx, node in enumerate(expression.body): │
│ 60 │ │ try: │
│ ❱ 61 │ │ │ line_result = evaluate_ast(node, state, tools) │
│ 62 │ │ except InterpretorError as e: │
│ 63 │ │ │ msg = f"Evaluation of the code stopped at line {idx} before the end because │
│ 64 │ │ │ if chat_mode: │
│ │
│ /usr/local/lib/python3.10/dist-packages/transformers/tools/python_interpreter.py:98 in │
│ evaluate_ast │
│ │
│ 95 │ if isinstance(expression, ast.Assign): │
│ 96 │ │ # Assignement -> we evaluate the assignement which should update the state │
│ 97 │ │ # We return the variable assigned as it may be used to determine the final resul │
│ ❱ 98 │ │ return evaluate_assign(expression, state, tools) │
│ 99 │ elif isinstance(expression, ast.Call): │
│ 100 │ │ # Function call -> we return the value of the function call │
│ 101 │ │ return evaluate_call(expression, state, tools) │
│ │
│ /usr/local/lib/python3.10/dist-packages/transformers/tools/python_interpreter.py:139 in │
│ evaluate_assign │
│ │
│ 136 │
│ 137 def evaluate_assign(assign, state, tools): │
│ 138 │ var_names = assign.targets │
│ ❱ 139 │ result = evaluate_ast(assign.value, state, tools) │
│ 140 │ │
│ 141 │ if len(var_names) == 1: │
│ 142 │ │ state[var_names[0].id] = result │
│ │
│ /usr/local/lib/python3.10/dist-packages/transformers/tools/python_interpreter.py:101 in │
│ evaluate_ast │
│ │
│ 98 │ │ return evaluate_assign(expression, state, tools) │
│ 99 │ elif isinstance(expression, ast.Call): │
│ 100 │ │ # Function call -> we return the value of the function call │
│ ❱ 101 │ │ return evaluate_call(expression, state, tools) │
│ 102 │ elif isinstance(expression, ast.Constant): │
│ 103 │ │ # Constant -> just return the value │
│ 104 │ │ return expression.value │
│ │
│ /usr/local/lib/python3.10/dist-packages/transformers/tools/python_interpreter.py:167 in │
│ evaluate_call │
│ │
│ 164 │ # Todo deal with args │
│ 165 │ args = [evaluate_ast(arg, state, tools) for arg in call.args] │
│ 166 │ kwargs = {keyword.arg: evaluate_ast(keyword.value, state, tools) for keyword in call │
│ ❱ 167 │ return func(*args, **kwargs) │
│ 168 │
│ 169 │
│ 170 def evaluate_subscript(subscript, state, tools): │
│ │
│ /usr/local/lib/python3.10/dist-packages/transformers/tools/base.py:534 in __call__ │
│ │
│ 531 │ │ if not self.is_initialized: │
│ 532 │ │ │ self.setup() │
│ 533 │ │ │
│ ❱ 534 │ │ encoded_inputs = self.encode(*args, **kwargs) │
│ 535 │ │ encoded_inputs = send_to_device(encoded_inputs, self.device) │
│ 536 │ │ outputs = self.forward(encoded_inputs) │
│ 537 │ │ outputs = send_to_device(outputs, "cpu") │
│ │
│ /usr/local/lib/python3.10/dist-packages/transformers/tools/image_question_answering.py:49 in │
│ encode │
│ │
│ 46 │ │ super().__init__(*args, **kwargs) │
│ 47 │ │
│ 48 │ def encode(self, image: "Image", question: str): │
│ ❱ 49 │ │ return self.pre_processor(image, question, return_tensors="pt") │
│ 50 │ │
│ 51 │ def forward(self, inputs): │
│ 52 │ │ with torch.no_grad(): │
│ │
│ /usr/local/lib/python3.10/dist-packages/transformers/models/vilt/processing_vilt.py:107 in │
│ __call__ │
│ │
│ 104 │ │ │ **kwargs, │
│ 105 │ │ ) │
│ 106 │ │ # add pixel_values + pixel_mask │
│ ❱ 107 │ │ encoding_image_processor = self.image_processor(images, return_tensors=return_te │
│ 108 │ │ encoding.update(encoding_image_processor) │
│ 109 │ │ │
│ 110 │ │ return encoding │
│ │
│ /usr/local/lib/python3.10/dist-packages/transformers/image_processing_utils.py:464 in __call__ │
│ │
│ 461 │ │
│ 462 │ def __call__(self, images, **kwargs) -> BatchFeature: │
│ 463 │ │ """Preprocess an image or a batch of images.""" │
│ ❱ 464 │ │ return self.preprocess(images, **kwargs) │
│ 465 │ │
│ 466 │ def preprocess(self, images, **kwargs) -> BatchFeature: │
│ 467 │ │ raise NotImplementedError("Each image processor must implement its own preproces │
│ │
│ /usr/local/lib/python3.10/dist-packages/transformers/models/vilt/image_processing_vilt.py:462 in │
│ preprocess │
│ │
│ 459 │ │ images = [to_numpy_array(image) for image in images] │
│ 460 │ │ │
│ 461 │ │ if do_resize: │
│ ❱ 462 │ │ │ images = [ │
│ 463 │ │ │ │ self.resize(image=image, size=size, size_divisor=size_divisor, resample= │
│ 464 │ │ │ ] │
│ 465 │
│ │
│ /usr/local/lib/python3.10/dist-packages/transformers/models/vilt/image_processing_vilt.py:463 in │
│ <listcomp> │
│ │
│ 460 │ │ │
│ 461 │ │ if do_resize: │
│ 462 │ │ │ images = [ │
│ ❱ 463 │ │ │ │ self.resize(image=image, size=size, size_divisor=size_divisor, resample= │
│ 464 │ │ │ ] │
│ 465 │ │ │
│ 466 │ │ if do_rescale: │
│ │
│ /usr/local/lib/python3.10/dist-packages/transformers/models/vilt/image_processing_vilt.py:230 in │
│ resize │
│ │
│ 227 │ │ │ raise ValueError(f"The `size` dictionary must contain the key `shortest_edge │
│ 228 │ │ shorter = size["shortest_edge"] │
│ 229 │ │ longer = int(1333 / 800 * shorter) │
│ ❱ 230 │ │ output_size = get_resize_output_image_size(image, shorter=shorter, longer=longer │
│ 231 │ │ return resize(image, size=output_size, resample=resample, data_format=data_forma │
│ 232 │ │
│ 233 │ def rescale( │
│ │
│ /usr/local/lib/python3.10/dist-packages/transformers/models/vilt/image_processing_vilt.py:87 in │
│ get_resize_output_image_size │
│ │
│ 84 def get_resize_output_image_size( │
│ 85 │ input_image: np.ndarray, shorter: int = 800, longer: int = 1333, size_divisor: int = │
│ 86 ) -> Tuple[int, int]: │
│ ❱ 87 │ input_height, input_width = get_image_size(input_image) │
│ 88 │ min_size, max_size = shorter, longer │
│ 89 │ │
│ 90 │ scale = min_size / min(input_height, input_width) │
│ │
│ /usr/local/lib/python3.10/dist-packages/transformers/image_utils.py:205 in get_image_size │
│ │
│ 202 │ │ A tuple of the image's height and width. │
│ 203 │ """ │
│ 204 │ if channel_dim is None: │
│ ❱ 205 │ │ channel_dim = infer_channel_dimension_format(image) │
│ 206 │ │
│ 207 │ if channel_dim == ChannelDimension.FIRST: │
│ 208 │ │ return image.shape[-2], image.shape[-1] │
│ │
│ /usr/local/lib/python3.10/dist-packages/transformers/image_utils.py:169 in │
│ infer_channel_dimension_format │
│ │
│ 166 │ │ return ChannelDimension.FIRST │
│ 167 │ elif image.shape[last_dim] in (1, 3): │
│ 168 │ │ return ChannelDimension.LAST │
│ ❱ 169 │ raise ValueError("Unable to infer channel dimension format") │
│ 170 │
│ 171 │
│ 172 def get_channel_dimension_axis(image: np.ndarray) -> int: │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
ValueError: Unable to infer channel dimension format
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Code :
import requests
from PIL import Image
import torch
image_url = "https://raw.githubusercontent.com/vis-nlp/ChartQA/main/ChartQA%20Dataset/val/png/20294671002019.png"
image = Image.open(requests.get(image_url, stream=True).raw)
#document = '/content/child_death.png'
agent.run(
question = "Which Country has the highest number of child deaths ?",
image=image,
task = 'image_qa'
)
### Expected behavior
Answer from the graph
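Workaround from the comments: the chart PNG is RGBA, so convert it to RGB before passing it to the agent:

```python
import requests
from PIL import Image

image_url = "https://raw.githubusercontent.com/vis-nlp/ChartQA/main/ChartQA%20Dataset/val/png/20294671002019.png"
image = Image.open(requests.get(image_url, stream=True).raw).convert("RGB")  # drop the alpha channel
```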
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23447/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23447/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23446
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23446/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23446/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23446/events
|
https://github.com/huggingface/transformers/pull/23446
| 1,715,315,825 |
PR_kwDOCUB6oc5QyikQ
| 23,446 |
Update tiny models and pipeline tests
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23446). All of your documentation changes will be reflected on that endpoint."
] | 1,684 | 1,684 | 1,684 |
COLLABORATOR
| null |
# What does this PR do?
Update tiny models and pipeline tests.
The two failing pipeline tests for `rwkv` are addressed in #23442 and #23444 (which need to be merged before this one).
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23446/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23446/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23446",
"html_url": "https://github.com/huggingface/transformers/pull/23446",
"diff_url": "https://github.com/huggingface/transformers/pull/23446.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23446.patch",
"merged_at": 1684423745000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23445
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23445/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23445/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23445/events
|
https://github.com/huggingface/transformers/pull/23445
| 1,715,253,743 |
PR_kwDOCUB6oc5QyVE8
| 23,445 |
Allow dict input for audio classification pipeline
|
{
"login": "sanchit-gandhi",
"id": 93869735,
"node_id": "U_kgDOBZhWpw",
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanchit-gandhi",
"html_url": "https://github.com/sanchit-gandhi",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Apologies @sgugger! To clarify, the changes in this PR are one-for-one copied from the input arguments in https://github.com/huggingface/transformers/blob/main/src/transformers/pipelines/automatic_speech_recognition.py\r\n\r\nEssentially, the PR allows users to input a dictionary of inputs to the pipeline. This aligns the pipeline with `datasets`, where the `audio` column returns a dict with `array` (the 1-d audio array) and `sampling_rate` (the sampling rate of the audio):\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nlibrispeech = load_dataset(\"hf-internal-testing/librispeech_asr_dummy\", \"clean\", split=\"validation\")\r\nlibrispeech[0][\"audio\"]\r\n```\r\n**Output:**\r\n```\r\n{'path': '/Users/sanchitgandhi/.cache/huggingface/datasets/downloads/extracted/aad76e6f21870761d7a8b9b34436f6f8db846546c68cb2d9388598d7a164fa4b/dev_clean/1272/128104/1272-128104-0000.flac',\r\n 'array': array([0.00238037, 0.0020752 , 0.00198364, ..., 0.00042725, 0.00057983,\r\n 0.0010376 ]),\r\n 'sampling_rate': 16000}\r\n```\r\n(the `path` column is deprecated an no longer required, but retained for backwards compatibility. This is what removing `path` refers to in the PR)\r\n\r\nThis PR enables the dict to be passed directly to the pipeline, in the same way that we do for the ASR pipeline and the `transformers` feature extractors:\r\n```python\r\npred_labels = pipe(librispeech[0][\"audio\"])\r\n```\r\n\r\nIf there are any API decisions you feel require changing, I'd be happy to update these in the original code before propagating to this file.",
"I think what you're trying to do is already supported, but the sampling rate needs to be in the same dict as the array (both are needed to represent a single audio).\r\n\r\nThat being said, the errors raised when misusing this feature could probably be largely improved (to guide users towards the correct form)."
] | 1,684 | 1,687 | 1,687 |
CONTRIBUTOR
| null |
# What does this PR do?
Allow dictionary inputs for the audio classification pipeline
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23445/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23445/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23445",
"html_url": "https://github.com/huggingface/transformers/pull/23445",
"diff_url": "https://github.com/huggingface/transformers/pull/23445.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23445.patch",
"merged_at": 1687524638000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23444
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23444/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23444/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23444/events
|
https://github.com/huggingface/transformers/pull/23444
| 1,715,237,544 |
PR_kwDOCUB6oc5QyRlH
| 23,444 |
Fix (skip) a pipeline test for `RwkvModel`
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,684 | 1,684 | 1,684 |
COLLABORATOR
| null |
# What does this PR do?
Fix (skip) a pipeline test for `RwkvModel`.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23444/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23444/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23444",
"html_url": "https://github.com/huggingface/transformers/pull/23444",
"diff_url": "https://github.com/huggingface/transformers/pull/23444.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23444.patch",
"merged_at": 1684414463000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23443
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23443/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23443/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23443/events
|
https://github.com/huggingface/transformers/pull/23443
| 1,715,204,765 |
PR_kwDOCUB6oc5QyKgj
| 23,443 |
fix: load_best_model_at_end error when load_in_8bit is True
|
{
"login": "dkqkxx",
"id": 32215330,
"node_id": "MDQ6VXNlcjMyMjE1MzMw",
"avatar_url": "https://avatars.githubusercontent.com/u/32215330?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dkqkxx",
"html_url": "https://github.com/dkqkxx",
"followers_url": "https://api.github.com/users/dkqkxx/followers",
"following_url": "https://api.github.com/users/dkqkxx/following{/other_user}",
"gists_url": "https://api.github.com/users/dkqkxx/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dkqkxx/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dkqkxx/subscriptions",
"organizations_url": "https://api.github.com/users/dkqkxx/orgs",
"repos_url": "https://api.github.com/users/dkqkxx/repos",
"events_url": "https://api.github.com/users/dkqkxx/events{/privacy}",
"received_events_url": "https://api.github.com/users/dkqkxx/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"cc @sgugger @younesbelkada ",
"> \r\n\r\nusing load_adapter seems to be a smarter solution, I'll try it.",
"Awesome, let us know how it goes!",
"is this is addressing where the bug came from... looking at an older working version https://github.com/huggingface/transformers/blob/d941f07a4e3bc7b61b7afbd25d6e2e8427fccc6d/src/transformers/trainer.py#L2170r version, the code being edited is functionally the same... I think we need to test more to see which repo the issue is from, PEFT, Transformers or bitsandbytes, and find what introduced it. Unless someone knows the change which would have caused this? ",
"according to https://github.com/huggingface/peft/issues/286#issuecomment-1512611968\r\nthe correct way to save the intermediate checkpoints for PEFT when using Trainer would be to use Callbacks. \r\n\r\nso we can assume that adapter have been saved properly and load_adapter from self.state.best_model_checkpoint\r\n"
] | 1,684 | 1,684 | 1,684 |
CONTRIBUTOR
| null |
# What does this PR do?
fix: load_best_model_at_end error when load_in_8bit is True
Ref: https://github.com/huggingface/peft/issues/394
Loading a quantized checkpoint into a non-quantized `Linear8bitLt` is not supported; call `module.cuda()` before `module.load_state_dict()`.
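A rough sketch of the call ordering the fix relies on (it assumes a CUDA device and the `bitsandbytes` package; the layer sizes and checkpoint path are placeholders, and this is not the actual Trainer code):
```python
import torch
import bitsandbytes as bnb

# Stand-in 8-bit layer; in the Trainer this is a whole model containing such layers.
layer = bnb.nn.Linear8bitLt(16, 32, has_fp16_weights=False)

# Moving the module to the GPU is what quantizes its weights to int8...
layer.cuda()

# ...so a previously saved 8-bit state dict can only be loaded afterwards.
# Doing it the other way around raises the
# "Loading a quantized checkpoint into non-quantized Linear8bitLt" error.
state_dict = torch.load("path/to/best_checkpoint.pt")  # placeholder path
layer.load_state_dict(state_dict)
```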
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23443/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23443/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23443",
"html_url": "https://github.com/huggingface/transformers/pull/23443",
"diff_url": "https://github.com/huggingface/transformers/pull/23443.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23443.patch",
"merged_at": 1684867828000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23442
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23442/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23442/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23442/events
|
https://github.com/huggingface/transformers/pull/23442
| 1,715,187,858 |
PR_kwDOCUB6oc5QyG3t
| 23,442 |
Make `RwkvModel` accept `attention_mask` but discard it internally
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,684 | 1,684 | 1,684 |
COLLABORATOR
| null |
# What does this PR do?
`RwkvModel` doesn't accept `attention_mask` but the tokenizer (it is `GPTNeoXTokenizer` from the checkpoints) prepares this input. When I try to run the pipeline tests for this model, I get
```bash
TypeError: forward() got an unexpected keyword argument 'attention_mask'
```
I see it would be quite annoying for people using this model with the default tokenizer. So it might be good to accept it then discard it internally with a warning.
(If `RwkvModel` had its own tokenizer class `RwkvTokenizer`, we could handle this inside the tokenizer class, but that is not the case here.)
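A sketch of the "accept but discard" pattern described above (this is not the actual `RwkvModel` code, just an illustration of the signature change):
```python
import warnings
import torch
from torch import nn

class SketchRwkvLikeModel(nn.Module):
    # Hypothetical module used only to illustrate the idea.
    def forward(self, input_ids: torch.LongTensor, attention_mask: torch.Tensor = None, **kwargs):
        if attention_mask is not None:
            # RWKV has no use for an attention mask, so warn and drop it instead of crashing.
            warnings.warn("`attention_mask` was passed but is unused by this model; it will be ignored.")
            attention_mask = None
        # ... the real model computation would go here; return the ids as a stand-in.
        return input_ids
```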
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23442/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23442/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23442",
"html_url": "https://github.com/huggingface/transformers/pull/23442",
"diff_url": "https://github.com/huggingface/transformers/pull/23442.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23442.patch",
"merged_at": 1684422865000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23441
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23441/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23441/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23441/events
|
https://github.com/huggingface/transformers/issues/23441
| 1,715,176,737 |
I_kwDOCUB6oc5mO4Uh
| 23,441 |
Same capabilities for exporting using torchscript as with ONNX
|
{
"login": "darwinharianto",
"id": 44696192,
"node_id": "MDQ6VXNlcjQ0Njk2MTky",
"avatar_url": "https://avatars.githubusercontent.com/u/44696192?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/darwinharianto",
"html_url": "https://github.com/darwinharianto",
"followers_url": "https://api.github.com/users/darwinharianto/followers",
"following_url": "https://api.github.com/users/darwinharianto/following{/other_user}",
"gists_url": "https://api.github.com/users/darwinharianto/gists{/gist_id}",
"starred_url": "https://api.github.com/users/darwinharianto/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/darwinharianto/subscriptions",
"organizations_url": "https://api.github.com/users/darwinharianto/orgs",
"repos_url": "https://api.github.com/users/darwinharianto/repos",
"events_url": "https://api.github.com/users/darwinharianto/events{/privacy}",
"received_events_url": "https://api.github.com/users/darwinharianto/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"@darwinharianto There's a guide on exporting to torchscript in the docs: https://huggingface.co/docs/transformers/torchscript\r\n\r\nLet us know if there's any information missing there. ",
"I can safely export the model to torchscript using this code, but I get this `TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect.`. I assume I can ignore this for now(?).\r\n\r\nthis is the code to export the model\r\n```\r\nimport torch\r\nimport numpy as np\r\nfrom transformers import AutoModel, AutoProcessor, OwlViTModel, OwlViTProcessor\r\nfrom PIL import Image\r\nimport requests\r\n\r\nclass MyOpenDetector(torch.nn.Module):\r\n def __init__(self, model=None):\r\n super(MyOpenDetector, self).__init__()\r\n self.model = model\r\n \r\n def forward(self, input_ids, pixel_values, attention_mask):\r\n # inputs = {\"input_ids\":x[0], \"attention_mask\":x[1], \"pixel_values\":x[2]}\r\n outputs = self.model(input_ids=input_ids, pixel_values=pixel_values, attention_mask=attention_mask)\r\n # print(type(outputs))\r\n logits_per_image = outputs[0] # this is the image-text similarity score\r\n probs = logits_per_image.softmax(dim=1) # we can take the softmax to get the label probabilities\r\n\r\n return probs\r\ndef save_owlvitmodel(inputs, modelname):\r\n\r\n openModel = AutoModel.from_pretrained(modelname, torchscript=True).eval()\r\n\r\n x = tuple([inputs['input_ids'], inputs['pixel_values'], inputs['attention_mask']])\r\n model = MyOpenDetector(model=openModel)\r\n traced_model = torch.jit.trace(model, x)\r\n torch.jit.save(traced_model, 'traced_owlvit.pt')\r\n\r\n return\r\n\r\nmodelname = \"google/owlvit-base-patch32\"\r\nprocessor = AutoProcessor.from_pretrained(modelname, torchscript=True)\r\nurl = \"http://images.cocodataset.org/val2017/000000039769.jpg\"\r\nimage = Image.open(requests.get(url, stream=True).raw)\r\ninputs = processor(text=[[\"a photo of a cat\", \"a photo of a dog\"]], images=torch.Tensor(np.asarray(image)), return_tensors=\"pt\")\r\n\r\nsave_owlvitmodel(inputs, modelname)\r\n\r\nloaded_model = torch.jit.load(\"traced_owlvit.pt\")\r\nloaded_model.eval()\r\n\r\nx = tuple([inputs['input_ids'], inputs['pixel_values'], inputs['attention_mask']])\r\nprobs = loaded_model(*x)\r\nprint(probs)\r\n```\r\n\r\n\r\nhow can I do the same for processor?\r\n```\r\nimport torch\r\nimport numpy as np\r\nfrom transformers import AutoModel, AutoProcessor, OwlViTModel, OwlViTProcessor\r\nfrom PIL import Image\r\nimport requests\r\nclass MyOpenProcessor(torch.nn.Module):\r\n\r\n def __init__(self, processor=None):\r\n super(MyOpenProcessor, self).__init__()\r\n self.processor = processor\r\n \r\n def forward(self, text, images):\r\n outputs = self.processor(text=text, images=images, return_tensors=\"pt\")\r\n return outputs\r\n\r\nmodelname = \"google/owlvit-base-patch32\"\r\nprocessor = AutoProcessor.from_pretrained(modelname, torchscript=True)\r\nurl = \"http://images.cocodataset.org/val2017/000000039769.jpg\"\r\nimage = Image.open(requests.get(url, stream=True).raw)\r\ninputs = processor(text=[[\"a photo of a cat\", \"a photo of a dog\"]], images=torch.Tensor(np.asarray(image)), return_tensors=\"pt\")\r\nx = tuple([[[\"a photo of a cat\", \"a photo of a dog\"]], torch.Tensor(np.asarray(image))])\r\nnewProcessor = MyOpenProcessor(processor=processor)\r\ntraced_processor = torch.jit.trace(newProcessor, x)\r\n```\r\n\r\nthis throws an error because there is this tuple of list list str"
] | 1,684 | 1,684 | 1,684 |
NONE
| null |
### Feature request
There are two supported ways to serialize a model: ONNX and TorchScript. Looking at the [ONNX documentation](https://huggingface.co/docs/transformers/main/serialization), the ONNX exporter appears to save both the processor and the model to a file, while the TorchScript exporter only exports the model itself.
How do I do the same for the torchscript exporter?
Using ONNX
```
from pathlib import Path
from transformers.onnx import export
from transformers import AutoProcessor, AutoModel
from transformers import AutoConfig
from transformers.models.owlvit.configuration_owlvit import OwlViTOnnxConfig
onnx_path = Path("model.onnx")
model_ckpt = "google/owlvit-base-patch32"
config = AutoConfig.from_pretrained(model_ckpt)
onnx_config = OwlViTOnnxConfig(config)
base_model = AutoModel.from_pretrained(model_ckpt)
processor = AutoProcessor.from_pretrained(model_ckpt)
onnx_inputs, onnx_outputs = export(processor, base_model, onnx_config, onnx_config.default_onnx_opset, onnx_path)
```
What is the equivalent thing in torchscript?
```
from pathlib import Path
from transformers import AutoProcessor, AutoModel
base_model = AutoModel.from_pretrained(model_ckpt, torchscript=True)
processor = AutoProcessor.from_pretrained(model_ckpt, torchscript=True)
# what should these two lines be?
traced_model = torch.jit.trace(base_model, ??) #
torch.jit.save(traced_model, "traced_bert.pt")
```
### Motivation
Being able to use TorchScript on the processor
### Your contribution
Anything that I can do
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23441/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23441/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23440
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23440/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23440/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23440/events
|
https://github.com/huggingface/transformers/pull/23440
| 1,714,715,010 |
PR_kwDOCUB6oc5QwiI7
| 23,440 |
add cleanlab to awesome-transformers tools list
|
{
"login": "jwmueller",
"id": 1390638,
"node_id": "MDQ6VXNlcjEzOTA2Mzg=",
"avatar_url": "https://avatars.githubusercontent.com/u/1390638?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jwmueller",
"html_url": "https://github.com/jwmueller",
"followers_url": "https://api.github.com/users/jwmueller/followers",
"following_url": "https://api.github.com/users/jwmueller/following{/other_user}",
"gists_url": "https://api.github.com/users/jwmueller/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jwmueller/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jwmueller/subscriptions",
"organizations_url": "https://api.github.com/users/jwmueller/orgs",
"repos_url": "https://api.github.com/users/jwmueller/repos",
"events_url": "https://api.github.com/users/jwmueller/events{/privacy}",
"received_events_url": "https://api.github.com/users/jwmueller/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Also ccing @LysandreJik who compiled the list :)",
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,684 | 1,684 | 1,684 |
CONTRIBUTOR
| null |
This PR adds the cleanlab package to the awesome-transformers list.
cleanlab uses **transformers** & Hugging Face in all sorts of ways. Here are a couple of them:
- [tutorial for text data with **transformers**](https://docs.cleanlab.ai/stable/tutorials/datalab/text.html)
- [active learning with **transformers**](https://github.com/cleanlab/examples/blob/master/active_learning_transformers/active_learning.ipynb)
- [auditing data in the **datasets** format](https://github.com/cleanlab/examples/blob/master/datalab_image_classification/datalab.ipynb), which is now the [primary data format](https://docs.cleanlab.ai/stable/cleanlab/datalab/datalab.html) supported by cleanlab
- [how to wrap a **transformers** model to be sklearn compatible](https://github.com/cleanlab/examples/blob/master/transformer_sklearn/transformer_sklearn.ipynb)
I'm not sure which reviewer is most appropriate, but perhaps one of the listed documentation reviewers:
@sgugger, @stevhliu or @MKhalusova
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23440/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23440/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23440",
"html_url": "https://github.com/huggingface/transformers/pull/23440",
"diff_url": "https://github.com/huggingface/transformers/pull/23440.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23440.patch",
"merged_at": 1684430068000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23439
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23439/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23439/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23439/events
|
https://github.com/huggingface/transformers/pull/23439
| 1,714,711,781 |
PR_kwDOCUB6oc5QwhcX
| 23,439 |
Add FastSpeech2Conformer
|
{
"login": "connor-henderson",
"id": 78612354,
"node_id": "MDQ6VXNlcjc4NjEyMzU0",
"avatar_url": "https://avatars.githubusercontent.com/u/78612354?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/connor-henderson",
"html_url": "https://github.com/connor-henderson",
"followers_url": "https://api.github.com/users/connor-henderson/followers",
"following_url": "https://api.github.com/users/connor-henderson/following{/other_user}",
"gists_url": "https://api.github.com/users/connor-henderson/gists{/gist_id}",
"starred_url": "https://api.github.com/users/connor-henderson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/connor-henderson/subscriptions",
"organizations_url": "https://api.github.com/users/connor-henderson/orgs",
"repos_url": "https://api.github.com/users/connor-henderson/repos",
"events_url": "https://api.github.com/users/connor-henderson/events{/privacy}",
"received_events_url": "https://api.github.com/users/connor-henderson/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23439). All of your documentation changes will be reflected on that endpoint.",
"Thanks for the review @hollance! Addressed the comments above, the only part that might need follow-up discussion is making the `labels` compatible with the `Trainer`\r\n>Re labels, FastSpeech2 is somewhat unique in that it takes in many labels (spectrograms, pitch, energy, and duration) for training. I'm not sure exactly what this means for compatibility with Trainer since I haven't had time to do a deeper dive, but for now I've changed the \"targets\" to include _labels in their name, left the training test as skipped, and planning to look into it more when I do the demo notebook.",
"Appreciate the comments @hollance, @ArthurZucker @sanchit-gandhi this should be ready for review for you now",
"Thank you for the review @sanchit-gandhi, comments should be addressed now. \r\n\r\nCentralizing a note on passing the config instead of args here since there were a few comments on that - the other modules mentioned are all instantiated twice with different arg values so they can’t solely be passed the config. Lmk if you think there’s something I missed or if you’d prefer something else like duplicating the modules in order to pass just the config.",
"Reviewing now! ",
"Thanks @ArthurZucker for the review, your comments should be addressed now",
"Looking very clean, thanks for iterating @connor-henderson! Would be great to get a look from @ylacombe to check this model will be compatible with the new TTS pipeline (e.g. with the auto mapping names), alongside @ArthurZucker for the final review",
"I'll review this today as well 😉 ",
"@ArthurZucker comments are addressed, left two follow up questions: first one on if changes requested on `# Copied from …` code in speecht5 hifigan should in fact be made, second one regarding self.training vs is_inference and checking for labels",
"Answered in-line! Thanks for iterating here @connor-henderson!",
"Thanks for clarifying @sanchit-gandhi, addressed that. I believe the only lingering question is the one from this comment https://github.com/huggingface/transformers/pull/23439#discussion_r1302119798 around whether I should remove the tokenizer from the docstring examples to pass the doc tests since it requires g2p_en backend",
"Hey! You can also add `g2p` to the list of package needed to run the PR documentation tests in the CI job! Do you need help with that? ",
"See @ydshieh's comment for reference: https://github.com/huggingface/transformers/pull/23439#discussion_r1333918558",
"> Hey! You can also add `g2p` to the list of package needed to run the PR documentation tests in the CI job! Do you need help with that?\r\n\r\n> See @ydshieh's comment for reference: https://github.com/huggingface/transformers/pull/23439#discussion_r1333918558\r\n\r\n@sanchit-gandhi @ArthurZucker all good thanks was on vacation, just added",
"bump!",
"I'll help to get this merge @connor-henderson as seen offline! I gotta finish catching up and should come back to this tomorrow ! Sorry again 🤗 ",
"Here is [how I dealt with the `CLIPVisionModel` ](https://github.com/huggingface/transformers/pull/27662#discussion_r1415447950). In the PR I added the config and model to the config mappings and also the correct places to import, but not in the readmes or in the index file. ",
"It's been a long ride, but merging now!\r\n\r\nThanks again for the great work and your patience !"
] | 1,684 | 1,704 | 1,704 |
CONTRIBUTOR
| null |
# What does this PR do?
Adds the TTS (text-to-speech) conformer version of the FastSpeech2 model. Closest related issue is [#15166](https://github.com/huggingface/transformers/issues/15166) though this implements ESPnet's conformer version instead of Fairseq's version as suggested in https://github.com/huggingface/transformers/pull/15773#issuecomment-1529558164.
[FastSpeech2 paper (Microsoft)](https://arxiv.org/pdf/2006.04558.pdf)
[Conformer version paper (ESPnet)](https://arxiv.org/pdf/2010.13956.pdf)
Conformer version code implementation: https://github.com/espnet/espnet/tree/master/espnet2/tts/fastspeech2
Additional conformer version code implementation: https://github.com/DigitalPhonetics/IMS-Toucan/blob/ToucanTTS/TrainingInterfaces/Text_to_Spectrogram/FastSpeech2
The paper abstracts say most of this, but the main points of what makes this model an interesting addition are:
- It's non auto-regressive, leading to faster inference since it doesn't have to make predictions sequentially (hence the name `FastSpeech`)
- Uses a variance predictor in between the encoder and decoder to explicitly predict duration, pitch, and energy, leading to more accurate results
- Conformer architectures have been shown to improve performance in text to speech tasks, with the convolutions learning close range speech patterns and transformer attention helping with understanding longer range contexts
- There is currently only one other text-to-speech model in `transformers` (`SpeechT5ForTextToSpeech`)
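The variance predictor mentioned in the list above is, roughly, a stack of 1-D convolutions with layer norm and dropout followed by a per-frame scalar projection. A hedged sketch (hyperparameter values are illustrative, not the checkpoint's actual config):
```python
import torch
from torch import nn

class VariancePredictorSketch(nn.Module):
    """Rough FastSpeech2-style predictor for duration/pitch/energy (illustrative sizes)."""

    def __init__(self, hidden=256, filter_size=256, kernel=3, dropout=0.5):
        super().__init__()
        self.conv1 = nn.Conv1d(hidden, filter_size, kernel, padding=kernel // 2)
        self.conv2 = nn.Conv1d(filter_size, filter_size, kernel, padding=kernel // 2)
        self.norm1 = nn.LayerNorm(filter_size)
        self.norm2 = nn.LayerNorm(filter_size)
        self.dropout = nn.Dropout(dropout)
        self.proj = nn.Linear(filter_size, 1)

    def forward(self, x):  # x: (batch, time, hidden)
        x = self.dropout(self.norm1(torch.relu(self.conv1(x.transpose(1, 2))).transpose(1, 2)))
        x = self.dropout(self.norm2(torch.relu(self.conv2(x.transpose(1, 2))).transpose(1, 2)))
        return self.proj(x).squeeze(-1)  # one scalar per frame, e.g. a log-duration

preds = VariancePredictorSketch()(torch.randn(2, 7, 256))  # -> shape (2, 7)
```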
## To do
- [x] Prepared 🤗 Transformers dev environment
- [x] Set up debugging environment of the original repository
- [x] Created script that successfully runs the forward() pass using the original repository and checkpoint
- [x] Successfully added the model skeleton to 🤗 Transformers (+ vocoder)
- [x] Successfully converted original checkpoint to 🤗 Transformers checkpoint (+ vocoder)
- [x] Successfully ran forward() pass in 🤗 Transformers that gives identical output to original checkpoint (+ vocoder)
- [x] Finished model tests in 🤗 Transformers
- [x] Successfully added tokenizer in 🤗 Transformers
- [x] Run end-to-end integration tests
- [x] Finished docs
- [x] Uploaded model weights to the Hub (will ask they're moved to just `fastspeech2_conformer` when ready)
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
@hollance @sanchit-gandhi
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23439/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23439/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23439",
"html_url": "https://github.com/huggingface/transformers/pull/23439",
"diff_url": "https://github.com/huggingface/transformers/pull/23439.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23439.patch",
"merged_at": 1704304866000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23438
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23438/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23438/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23438/events
|
https://github.com/huggingface/transformers/pull/23438
| 1,714,608,596 |
PR_kwDOCUB6oc5QwLbi
| 23,438 |
Add local agent
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,684 | 1,684 | 1,684 |
COLLABORATOR
| null |
# What does this PR do?
This PR adds support for a local agent running the model to generate code on the user's machine.
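As a usage sketch only (the checkpoint name, constructor arguments, and prompt are illustrative assumptions, not necessarily the final API of this PR):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, LocalAgent

checkpoint = "bigcode/starcoder"  # illustrative; any capable code-generation model
model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto", torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(checkpoint)

# The agent generates its tool-calling code locally instead of querying a hosted endpoint.
agent = LocalAgent(model, tokenizer)
agent.run("Translate the following text to French: 'Hello, how are you?'")
```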
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23438/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23438/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23438",
"html_url": "https://github.com/huggingface/transformers/pull/23438",
"diff_url": "https://github.com/huggingface/transformers/pull/23438.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23438.patch",
"merged_at": 1684422596000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23437
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23437/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23437/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23437/events
|
https://github.com/huggingface/transformers/pull/23437
| 1,714,441,741 |
PR_kwDOCUB6oc5Qvmma
| 23,437 |
Generate: skip left-padding tests on old models
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"It seems like `GPTBigCode` need to be fixed! Adding back the `@slow` to defer back fixing",
"@gante it all passes now 😄 \r\n"
] | 1,684 | 1,684 | 1,684 |
MEMBER
| null |
# What does this PR do?
We have a few left-padding tests polluting our daily CI -- these are from old models, where it is not worth committing >1hr per model to add support for left padding.
This PR skips the test in those models, so we can focus our energy where it matters :)
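For reference, the skip typically looks like the sketch below in the affected model's test class (the class name and reason string here are placeholders, not the exact code in this PR):
```python
import unittest

class OldModelGenerationTest(unittest.TestCase):
    # In the real suite this method is inherited from the generation tester mixin;
    # the body here is just a placeholder to show the skip pattern.
    @unittest.skip("This old model does not support left padding; not worth the >1hr fix")
    def test_left_padding_compatibility(self):
        pass

if __name__ == "__main__":
    unittest.main()
```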
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23437/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23437/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23437",
"html_url": "https://github.com/huggingface/transformers/pull/23437",
"diff_url": "https://github.com/huggingface/transformers/pull/23437.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23437.patch",
"merged_at": 1684404292000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23436
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23436/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23436/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23436/events
|
https://github.com/huggingface/transformers/pull/23436
| 1,714,382,194 |
PR_kwDOCUB6oc5QvZFh
| 23,436 |
TF: GPT2 with native embedding layers
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,684 | 1,684 | 1,684 |
MEMBER
| null |
# What does this PR do?
This PR continues the (paused) goal of deprecating our custom TF embedding layers and related code. Previously, we converted encoder-decoder models (e.g. [here](https://github.com/huggingface/transformers/pull/19263)), removing `TFSharedEmbeddings` there and making the necessary adaptations.
In this PR, I make the necessary adaptations for GPT2. The goal is for you, the reviewers, to raise objections in this PR :D All slow tests for TF GPT2 are passing.
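To make the direction concrete, a rough sketch of weight-tied native Keras embeddings (dimensions are GPT2-small-like but purely illustrative; this is not the PR's actual code):
```python
import tensorflow as tf

# Native Keras embedding layer in place of the custom TFSharedEmbeddings wrapper.
wte = tf.keras.layers.Embedding(input_dim=50257, output_dim=768, name="wte")  # vocab_size, hidden_size

hidden_states = wte(tf.constant([[15496, 995]]))  # token ids -> embeddings, shape (1, 2, 768)
# Weight tying for the LM head: reuse the same embedding matrix for the output projection.
logits = tf.matmul(hidden_states, wte.embeddings, transpose_b=True)  # shape (1, 2, 50257)
```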
Then, the following sequence of PRs will be opened:
1. Remove `TFSharedEmbeddings` from the other decoder-only models
2. Remove other uses of `TFSharedEmbeddings` in the codebase (e.g. in tests)
3. Remove `resize_token_embeddings` and all related functions (it is only used to resize our models' embeddings instantiated with `TFSharedEmbeddings`)
4. Remove the slow decorator from `test_save_load_after_resize_token_embeddings`, which will be fixed as a consequence of these changes 🙌
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23436/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23436/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23436",
"html_url": "https://github.com/huggingface/transformers/pull/23436",
"diff_url": "https://github.com/huggingface/transformers/pull/23436.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23436.patch",
"merged_at": 1684417600000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23435
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23435/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23435/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23435/events
|
https://github.com/huggingface/transformers/pull/23435
| 1,714,312,494 |
PR_kwDOCUB6oc5QvJyf
| 23,435 |
Fix device issue in `SwiftFormerModelIntegrationTest::test_inference_image_classification_head`
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,684 | 1,684 | 1,684 |
COLLABORATOR
| null |
# What does this PR do?
The title says everything 😄
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23435/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23435/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23435",
"html_url": "https://github.com/huggingface/transformers/pull/23435",
"diff_url": "https://github.com/huggingface/transformers/pull/23435.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23435.patch",
"merged_at": 1684345699000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23434
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23434/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23434/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23434/events
|
https://github.com/huggingface/transformers/pull/23434
| 1,714,274,875 |
PR_kwDOCUB6oc5QvBse
| 23,434 |
Export to ONNX doc refocused on using optimum, added tflite
|
{
"login": "MKhalusova",
"id": 1065417,
"node_id": "MDQ6VXNlcjEwNjU0MTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/1065417?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MKhalusova",
"html_url": "https://github.com/MKhalusova",
"followers_url": "https://api.github.com/users/MKhalusova/followers",
"following_url": "https://api.github.com/users/MKhalusova/following{/other_user}",
"gists_url": "https://api.github.com/users/MKhalusova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MKhalusova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MKhalusova/subscriptions",
"organizations_url": "https://api.github.com/users/MKhalusova/orgs",
"repos_url": "https://api.github.com/users/MKhalusova/repos",
"events_url": "https://api.github.com/users/MKhalusova/events{/privacy}",
"received_events_url": "https://api.github.com/users/MKhalusova/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,684 | 1,684 | 1,684 |
CONTRIBUTOR
| null |
Currently, despite the notes saying that the recommended way is to use Optimum, the main focus of the doc is on using `transformers.onnx`, which is no longer maintained (according to the Optimum team).
This PR restructures the doc to put the Optimum examples forward as the primary way of exporting models to ONNX.
The example of using `transformers.onnx` is kept for potential compatibility reasons; however, I have removed the examples of adding new architectures to `transformers.onnx`, as I believe this should be done in Optimum instead (links provided).
As suggested by the team, I have also added an example for TFLite, and for that reason, renamed the page to "Export to ONNX, TFLite"
UPD: since we now link to Optimum docs for the list of supported architectures, I have also removed the ONNX list check & update from the check_table script.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23434/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23434/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23434",
"html_url": "https://github.com/huggingface/transformers/pull/23434",
"diff_url": "https://github.com/huggingface/transformers/pull/23434.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23434.patch",
"merged_at": 1684930404000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23433
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23433/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23433/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23433/events
|
https://github.com/huggingface/transformers/pull/23433
| 1,714,271,782 |
PR_kwDOCUB6oc5QvBBz
| 23,433 |
remove unnecessary print in gpt neox sequence classifier
|
{
"login": "cfhammill",
"id": 7467038,
"node_id": "MDQ6VXNlcjc0NjcwMzg=",
"avatar_url": "https://avatars.githubusercontent.com/u/7467038?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cfhammill",
"html_url": "https://github.com/cfhammill",
"followers_url": "https://api.github.com/users/cfhammill/followers",
"following_url": "https://api.github.com/users/cfhammill/following{/other_user}",
"gists_url": "https://api.github.com/users/cfhammill/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cfhammill/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cfhammill/subscriptions",
"organizations_url": "https://api.github.com/users/cfhammill/orgs",
"repos_url": "https://api.github.com/users/cfhammill/repos",
"events_url": "https://api.github.com/users/cfhammill/events{/privacy}",
"received_events_url": "https://api.github.com/users/cfhammill/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,684 | 1,684 | 1,684 |
CONTRIBUTOR
| null |
# What does this PR do?
Removes an unnecessary print from GPT NeoX's sequence classifier output that was likely left over from debugging. Before this change, training GPT NeoX as a sequence classifier would `print` the logit/label sizes at every training step, which is hard to silence and generally not useful.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). **NA, original behaviour not documented**
- [ ] Did you write any new necessary tests? **NA, change too simple to require test**
## Who can review?
- text models: @ArthurZucker and @younesbelkada
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23433/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23433/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23433",
"html_url": "https://github.com/huggingface/transformers/pull/23433",
"diff_url": "https://github.com/huggingface/transformers/pull/23433.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23433.patch",
"merged_at": 1684406074000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23432
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23432/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23432/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23432/events
|
https://github.com/huggingface/transformers/pull/23432
| 1,714,257,339 |
PR_kwDOCUB6oc5Qu96n
| 23,432 |
Remove hardcoded prints in Trainer
|
{
"login": "hugoabonizio",
"id": 1206395,
"node_id": "MDQ6VXNlcjEyMDYzOTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/1206395?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hugoabonizio",
"html_url": "https://github.com/hugoabonizio",
"followers_url": "https://api.github.com/users/hugoabonizio/followers",
"following_url": "https://api.github.com/users/hugoabonizio/following{/other_user}",
"gists_url": "https://api.github.com/users/hugoabonizio/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hugoabonizio/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hugoabonizio/subscriptions",
"organizations_url": "https://api.github.com/users/hugoabonizio/orgs",
"repos_url": "https://api.github.com/users/hugoabonizio/repos",
"events_url": "https://api.github.com/users/hugoabonizio/events{/privacy}",
"received_events_url": "https://api.github.com/users/hugoabonizio/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[] | 1,684 | 1,684 | 1,684 |
CONTRIBUTOR
| null |
The Trainer makes some `print` calls when using 8-bit optimizers, which can be annoying since they cannot be disabled. This PR replaces them with `logger.info` calls, but please let me know if there's a better way of handling them.
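A sketch of the pattern (module path and message text are illustrative):
```python
from transformers.utils import logging

logger = logging.get_logger(__name__)

# Instead of an unconditional print(...), route the message through the library logger
# so users can control it with the usual verbosity helpers.
logger.info("bitsandbytes: will optimize some parameters in fp32")

# Callers who do not want the message can then silence it:
logging.set_verbosity_error()
```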
@sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23432/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23432/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23432",
"html_url": "https://github.com/huggingface/transformers/pull/23432",
"diff_url": "https://github.com/huggingface/transformers/pull/23432.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23432.patch",
"merged_at": 1684343292000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23431
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23431/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23431/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23431/events
|
https://github.com/huggingface/transformers/pull/23431
| 1,714,207,746 |
PR_kwDOCUB6oc5QuzCs
| 23,431 |
Update Bigbird Pegasus tests
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,684 | 1,684 | 1,684 |
COLLABORATOR
| null |
# What does this PR do?
Need to update some values in the tests after #23056 (which itself updated some values too).
(Otherwise tests fail at this moment).
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23431/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23431/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23431",
"html_url": "https://github.com/huggingface/transformers/pull/23431",
"diff_url": "https://github.com/huggingface/transformers/pull/23431.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23431.patch",
"merged_at": 1684340070000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23430
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23430/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23430/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23430/events
|
https://github.com/huggingface/transformers/pull/23430
| 1,714,165,338 |
PR_kwDOCUB6oc5Qupqv
| 23,430 |
🌐 [i18n-KO] Translated `tasks/zero_shot_object_detection.mdx` to Korean
|
{
"login": "HanNayeoniee",
"id": 33839093,
"node_id": "MDQ6VXNlcjMzODM5MDkz",
"avatar_url": "https://avatars.githubusercontent.com/u/33839093?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HanNayeoniee",
"html_url": "https://github.com/HanNayeoniee",
"followers_url": "https://api.github.com/users/HanNayeoniee/followers",
"following_url": "https://api.github.com/users/HanNayeoniee/following{/other_user}",
"gists_url": "https://api.github.com/users/HanNayeoniee/gists{/gist_id}",
"starred_url": "https://api.github.com/users/HanNayeoniee/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HanNayeoniee/subscriptions",
"organizations_url": "https://api.github.com/users/HanNayeoniee/orgs",
"repos_url": "https://api.github.com/users/HanNayeoniee/repos",
"events_url": "https://api.github.com/users/HanNayeoniee/events{/privacy}",
"received_events_url": "https://api.github.com/users/HanNayeoniee/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,684 | 1,684 | 1,684 |
CONTRIBUTOR
| null |
<!-- PR의 제목은 "🌐 [i18n-KO] Translated `<your_file>.mdx` to Korean" 으로 부탁드립니다 -->
# What does this PR do?
Translated the `tasks/zero_shot_object_detection.mdx` file of the documentation to Korean.
Thank you in advance for your review.
Part of https://github.com/huggingface/transformers/issues/20179
<!-- 메인 이슈에 기록이 남아요! 가짜연구소 리포를 사용해 연습하실때는 제거해주시면 감사하겠습니다! :smile: -->
## Before reviewing
- [x] Check for missing / redundant translations (번역 누락/중복 검사)
- [x] Grammar Check (맞춤법 검사)
- [x] Review or Add new terms to glossary (용어 확인 및 추가)
- [x] Check Inline TOC (e.g. `[[lowercased-header]]`)
- [x] Check live-preview for gotchas (live-preview로 정상작동 확인)
## Who can review? (Initial)
<!-- 1. 위 체크가 모두 완료된 뒤에만 가짜연구소 팀원들에게 리뷰 요청하는 아래 주석을 노출해주세요! -->
Team PseudoLab, may you please review this PR? @0525hhgus, @KIHOON71, @sim-so, @gabrielwithappy, @HanNayeoniee, @wonhyeongseo, @jungnerd
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review? (Final)
<!-- 2. 가짜연구소 팀원들과 리뷰가 끝난 후에만 허깅페이스 직원들에게 리뷰 요청하는 아래 주석을 노출해주세요! -->
@sgugger, @ArthurZucker, @eunseojo May you please review this PR?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23430/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23430/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23430",
"html_url": "https://github.com/huggingface/transformers/pull/23430",
"diff_url": "https://github.com/huggingface/transformers/pull/23430.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23430.patch",
"merged_at": 1684414337000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23429
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23429/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23429/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23429/events
|
https://github.com/huggingface/transformers/pull/23429
| 1,714,130,652 |
PR_kwDOCUB6oc5QuiBk
| 23,429 |
fix bug in group_texts function, that was inserting short batches
|
{
"login": "BodaSadalla98",
"id": 32247544,
"node_id": "MDQ6VXNlcjMyMjQ3NTQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/32247544?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BodaSadalla98",
"html_url": "https://github.com/BodaSadalla98",
"followers_url": "https://api.github.com/users/BodaSadalla98/followers",
"following_url": "https://api.github.com/users/BodaSadalla98/following{/other_user}",
"gists_url": "https://api.github.com/users/BodaSadalla98/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BodaSadalla98/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BodaSadalla98/subscriptions",
"organizations_url": "https://api.github.com/users/BodaSadalla98/orgs",
"repos_url": "https://api.github.com/users/BodaSadalla98/repos",
"events_url": "https://api.github.com/users/BodaSadalla98/events{/privacy}",
"received_events_url": "https://api.github.com/users/BodaSadalla98/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"You are right, my bad!\r\n\r\nWhat about if we made it like this:\r\n```python \r\n k: [t[i : i + block_size] for i in range(0, total_length - block_size + 1, block_size)]\r\n```\r\nthis way, we will not drop an entire batch when `total_length >= block_size` , and at the same time we will return an empty batch, in the case where `total_length < block_size`.\r\n\r\nthe issue with the code now is that it allows for some samples to have `sequence_length < block_size`, which throws an error when the `data_collator` tries to convert the batches into tensors. \r\n\r\n`batch[k] = torch.tensor([f[k] for f in features])\r\nValueError: expected sequence of length 128 at dim 1 (got 94)`",
"Won't the empty block cause you the same error though?",
"The empty entries are removed and aren't there anymore after the mapping with `group_texts` happens. Even though I don't know why the `map` function removes empty entries. \r\n\r\nI tried to replicate it here: \r\n\r\n```python \r\ndef gen():\r\n yield {\"id\": [1,2,3]}\r\n yield {\"id\": [1]}\r\nds = datasets.Dataset.from_generator(gen)\r\ndef remove_short(example):\r\n if len(example['id']) < 2:\r\n example['id'] = []\r\n return example\r\nc = ds.map(remove_short)\r\nfor cc in c:\r\n print (cc)\r\n```\r\nbut got the output: \r\n\r\n```\r\n{'id': [1, 2, 3]} \r\n{'id': []} \r\n```\r\nI don't know why does it removes the empty batch in the training script, and not here. \r\n\r\n",
"We can also just remove the test which is how the script was originally written. The problem is that it then caused issues with users having a small dataset (see #12438).\r\n\r\nSo all in all there is always going to be one group of users not happy since there is no good solution. This script is primarily intended for training, so it just shouldn't be executed on a small dataset :man_shrugging: ",
"This will only fail when the dataset has only one batch, and the total_length < block_size. But I don't think that's a valid case. So, I think we should just adjust for the normal case where we have many batches, but sometimes one batch would be too short that we need to exclude it, instead of returning a batch with shorter length, that will cause an error. ",
"True. So let's go with removing the test at line 496 (and unindent `total_length = (total_length // block_size) * block_size`).",
"Great! \r\nShould I create a new PR, or add a new commit to this one? \r\nShould I do on for the rest of Language modeling examples? ",
"This PR is fine, and yes please treat other examples the same way.",
"yes, I did it pushing now\r\n"
] | 1,684 | 1,684 | 1,684 |
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #23424
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
@sgugger @ArthurZucker
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23429/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23429/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23429",
"html_url": "https://github.com/huggingface/transformers/pull/23429",
"diff_url": "https://github.com/huggingface/transformers/pull/23429.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23429.patch",
"merged_at": 1684434150000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23428
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23428/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23428/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23428/events
|
https://github.com/huggingface/transformers/pull/23428
| 1,714,110,303 |
PR_kwDOCUB6oc5Qudhf
| 23,428 |
Small fixes and link in the README
|
{
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23428). All of your documentation changes will be reflected on that endpoint."
] | 1,684 | 1,684 | 1,684 |
MEMBER
| null |
Small fixes to the 100k Markdown file and link in the README
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23428/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23428/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23428",
"html_url": "https://github.com/huggingface/transformers/pull/23428",
"diff_url": "https://github.com/huggingface/transformers/pull/23428.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23428.patch",
"merged_at": 1684336057000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23427
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23427/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23427/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23427/events
|
https://github.com/huggingface/transformers/pull/23427
| 1,713,951,530 |
PR_kwDOCUB6oc5Qt6wm
| 23,427 |
TF: embeddings out of bounds check factored into function
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,684 | 1,684 | 1,684 |
MEMBER
| null |
# What does this PR do?
Cosmetic PR: the check to confirm that `input_ids` is within bounds, added by me, is annoyingly verbose. Since it is not part of the model code, this PR factors it out into a separate function -- the model code becomes more readable 🤗
The corresponding test passes after these changes (`RUN_SLOW=1 py.test tests/models/ -k test_embeddings_out_of_bounds_raise_exception -vv`)
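For illustration only, below is a hedged sketch of what such a factored-out bounds check could look like in TensorFlow; the helper name, signature, and message text here are assumptions and not the actual `transformers` implementation.
```python
import tensorflow as tf

def check_embeddings_within_bounds(input_ids, vocab_size, name="input_ids"):
    # Fail fast with a readable message when any id is >= the embedding matrix size.
    tf.debugging.assert_less(
        input_ids,
        tf.cast(vocab_size, dtype=input_ids.dtype),
        message=(
            f"Maximum id in `{name}` must be smaller than the embedding layer's "
            f"input dimension ({vocab_size})."
        ),
    )

check_embeddings_within_bounds(tf.constant([[1, 2, 3]]), vocab_size=10)  # passes silently
```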
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23427/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23427/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23427",
"html_url": "https://github.com/huggingface/transformers/pull/23427",
"diff_url": "https://github.com/huggingface/transformers/pull/23427.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23427.patch",
"merged_at": 1684339492000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23426
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23426/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23426/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23426/events
|
https://github.com/huggingface/transformers/pull/23426
| 1,713,807,430 |
PR_kwDOCUB6oc5Qtaw3
| 23,426 |
Encoder-Decoder: add informative exception when the decoder is not compatible
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,684 | 1,684 | 1,684 |
MEMBER
| null |
# What does this PR do?
Related to https://github.com/huggingface/transformers/issues/23350
Many decoder models are not compatible with our `EncoderDecoder` structure (mostly because using the encoded hidden states appropriately requires extra code in the decoder model architecture itself). This PR adds an informative message in the presence of incompatible decoder models.
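As a purely hypothetical illustration (the function name and the exact config attributes checked below are assumptions, not the code added by this PR), an informative exception of this kind could look roughly like:
```python
from types import SimpleNamespace

def check_decoder_compatibility(decoder_config):
    # Raise a clear error when the chosen decoder cannot attend to encoder hidden states.
    if not getattr(decoder_config, "is_decoder", False) or not getattr(
        decoder_config, "add_cross_attention", False
    ):
        raise ValueError(
            "The selected decoder is not configured as a decoder with cross-attention, "
            "so it cannot be used inside an encoder-decoder model. Set `is_decoder=True` "
            "and `add_cross_attention=True`, or choose a decoder architecture that "
            "supports cross-attention."
        )

check_decoder_compatibility(SimpleNamespace(is_decoder=True, add_cross_attention=True))  # passes silently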
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23426/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23426/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23426",
"html_url": "https://github.com/huggingface/transformers/pull/23426",
"diff_url": "https://github.com/huggingface/transformers/pull/23426.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23426.patch",
"merged_at": 1684341776000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23425
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23425/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23425/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23425/events
|
https://github.com/huggingface/transformers/issues/23425
| 1,713,791,409 |
I_kwDOCUB6oc5mJmGx
| 23,425 |
Loading LLaMA hf format from local folder is not using GPU in Google Colab
|
{
"login": "algiraldohe",
"id": 55255205,
"node_id": "MDQ6VXNlcjU1MjU1MjA1",
"avatar_url": "https://avatars.githubusercontent.com/u/55255205?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/algiraldohe",
"html_url": "https://github.com/algiraldohe",
"followers_url": "https://api.github.com/users/algiraldohe/followers",
"following_url": "https://api.github.com/users/algiraldohe/following{/other_user}",
"gists_url": "https://api.github.com/users/algiraldohe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/algiraldohe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/algiraldohe/subscriptions",
"organizations_url": "https://api.github.com/users/algiraldohe/orgs",
"repos_url": "https://api.github.com/users/algiraldohe/repos",
"events_url": "https://api.github.com/users/algiraldohe/events{/privacy}",
"received_events_url": "https://api.github.com/users/algiraldohe/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @algiraldohe, thanks for raising this issue. \r\n\r\nTo load the model directly onto the available GPUs, you should pass `device_map='auto'` when loading the model:\r\n\r\n```python\r\nmodel = LlamaForCausalLM.from_pretrained(base_model_path, device_map='auto')\r\n```",
"Thank you for your prompt response I did as you mentioned and I received the following error:\r\n\r\n╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮\r\n│ in <cell line: 5>:5 │\r\n│ │\r\n│ /usr/local/lib/python3.10/dist-packages/transformers/modeling_utils.py:2777 in from_pretrained │\r\n│ │\r\n│ 2774 │ │ │ │ mismatched_keys, │\r\n│ 2775 │ │ │ │ offload_index, │\r\n│ 2776 │ │ │ │ error_msgs, │\r\n│ ❱ 2777 │ │ │ ) = cls._load_pretrained_model( │\r\n│ 2778 │ │ │ │ model, │\r\n│ 2779 │ │ │ │ state_dict, │\r\n│ 2780 │ │ │ │ loaded_state_dict_keys, # XXX: rename? │\r\n│ │\r\n│ /usr/local/lib/python3.10/dist-packages/transformers/modeling_utils.py:2871 in │\r\n│ _load_pretrained_model │\r\n│ │\r\n│ 2868 │ │ │ ) │\r\n│ 2869 │ │ │ is_safetensors = archive_file.endswith(\".safetensors\") │\r\n│ 2870 │ │ │ if offload_folder is None and not is_safetensors: │\r\n│ ❱ 2871 │ │ │ │ raise ValueError( │\r\n│ 2872 │ │ │ │ │ \"The current `device_map` had weights offloaded to the disk. Please │\r\n│ 2873 │ │ │ │ │ \" for them. Alternatively, make sure you have `safetensors` installe │\r\n│ 2874 │ │ │ │ │ \" offers the weights in this format.\" │\r\n╰──────────────────────────────────────────────────────────────────────────────────────────────────╯\r\nValueError: The current `device_map` had weights offloaded to the disk. Please provide an `offload_folder` for \r\nthem. Alternatively, make sure you have `safetensors` installed if the model you are using offers the weights in \r\nthis format.\r\n\r\nNot quite sure what is requesting for. I tried to set the offload_folder = \"/content/drive/MyDrive/Qbot-gpt/LLaMA-HF\" (Same path of the hf model) like this:\r\n\r\nmodel = LlamaForCausalLM.from_pretrained(base_model_path, device_map='auto' , offload_folder=offload_folder)\r\n\r\nBut no luck yet, still no using the GPU.",
"@algiraldohe Passing the `device_map` argument is actually using the accelerate library to smartly load the model weights to maximise GPU usage. To understand more about how it works, there's a [great doc page here](https://huggingface.co/docs/accelerate/usage_guides/big_modeling) describing how to load large models. There's also [this blog](https://huggingface.co/blog/accelerate-large-models). \r\n\r\nFollowing these, specifying an offload folder should work. Could you try specifying a different folder than the one the model weights are stored in e.g. \r\n\r\n```python\r\nmodel = LlamaForCausalLM.from_pretrained(base_model_path, device_map='auto', offload_folder=\"offload\")\r\n```\r\n\r\nThe GPU RAM is 15 GB, but for e.g. 7B Llama, the weights alone are ~13.5 GB, so offload might be necessary. Let us know if there's still an issue. \r\n\r\nOnce the model can be loaded without error, then we can try and diagnose if the GPU is being properly utilized or not. \r\n\r\n\r\n\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,684 | 1,688 | 1,688 |
NONE
| null |
### System Info
Using Google colab:
NAME="Ubuntu"
VERSION="20.04.5 LTS (Focal Fossa)"
GPU = T4
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2022 NVIDIA Corporation
Built on Wed_Sep_21_10:33:58_PDT_2022
Cuda compilation tools, release 11.8, V11.8.89
Build cuda_11.8.r11.8/compiler.31833905_0
RAM 12.6 GB
GPU RAM 15GB
Disk 78.2 GB
Packages: (From running !pip list)
torch == 2.0.0+cu118
transformers == 4.29.2
### Who can help?
@ArthurZucker
@younesbelka
### Information
- [x] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I followed the steps provided here: https://huggingface.co/docs/transformers/main/en/model_doc/llama#overview to get an HF-format LLaMA model that I can use. When I load the model from the output path on my local computer with the CPU, it works fine (very slow, but fine), so I moved to Google Colab to use a GPU, because I need to fine-tune the model after loading it. However, when I monitor the resources while loading the model, I can see that the GPU is not being used.
from transformers import LlamaForCausalLM, LlamaTokenizer
import torch
#Check if GPU is available
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print("Device:", device)
Output:
> Device: cuda
base_model_path = "/content/drive/MyDrive/Qbot-gpt/LLaMA-HF"
model = LlamaForCausalLM.from_pretrained(base_model_path ).to(device)
print_gpu_utilization()
tokenizer = LlamaTokenizer.from_pretrained(base_model_path)
The above code crashes in Colab because it is not using the GPU, only the RAM.
This is the content of the folder LLaMA-HF:

The function being used to print the GPU usage:
from pynvml import *

def print_gpu_utilization():
    nvmlInit()
    handle = nvmlDeviceGetHandleByIndex(0)
    info = nvmlDeviceGetMemoryInfo(handle)
    print(f"GPU memory occupied: {info.used//1024**2} MB.")

def print_summary(result):
    print(f"Time: {result.metrics['train_runtime']:.2f}")
    print(f"Samples/second: {result.metrics['train_samples_per_second']:.2f}")
    print_gpu_utilization()
### Expected behavior
It should output a loading bar with all the shards loaded:
Loading checkpoint shards: 100%. ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||| 2/2 [00:32<00:00, 14.09s/it]
And more importantly it should print the amount of GPU that is being used when loading the model:
"GPU memory occupied: 1343 MB."
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23425/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23425/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23424
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23424/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23424/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23424/events
|
https://github.com/huggingface/transformers/issues/23424
| 1,713,788,328 |
I_kwDOCUB6oc5mJlWo
| 23,424 |
group_texts function produces batches with length shorter than block_size
|
{
"login": "BodaSadalla98",
"id": 32247544,
"node_id": "MDQ6VXNlcjMyMjQ3NTQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/32247544?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BodaSadalla98",
"html_url": "https://github.com/BodaSadalla98",
"followers_url": "https://api.github.com/users/BodaSadalla98/followers",
"following_url": "https://api.github.com/users/BodaSadalla98/following{/other_user}",
"gists_url": "https://api.github.com/users/BodaSadalla98/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BodaSadalla98/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BodaSadalla98/subscriptions",
"organizations_url": "https://api.github.com/users/BodaSadalla98/orgs",
"repos_url": "https://api.github.com/users/BodaSadalla98/repos",
"events_url": "https://api.github.com/users/BodaSadalla98/events{/privacy}",
"received_events_url": "https://api.github.com/users/BodaSadalla98/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[] | 1,684 | 1,684 | 1,684 |
CONTRIBUTOR
| null |
### System Info
- `transformers` version: 4.28.1
- Platform: Linux-3.10.0-1160.76.1.el7.x86_64-x86_64-with-glibc2.17
- Python version: 3.10.0
- Huggingface_hub version: 0.14.1
- Safetensors version: not installed
- PyTorch version (GPU?): 1.12.1+cu102 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
There's a bug in the PyTorch language_modeling examples:
for example: [run_clm.py](https://github.com/huggingface/transformers/blob/main/examples/pytorch/language-modeling/run_clm.py)
The `group_texts` function is supposed to group the data into batches whose sequence length equals `block_size`, and to ignore the small remainder chunk. There is a bug: instead, it inserts the last batch with a smaller size, which causes an error, since the model expects all batches to have the same sequence length.
## Error: ValueError: expected sequence of length 128 at dim 1 (got 94)
### Expected behavior
The expected behavior is for the function to return data with equal sequence lengths for all batches.
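For reference, here is a minimal sketch of a `group_texts` variant that always drops the short remainder. It assumes the batched examples contain an `input_ids` column; this is an illustration, not the exact code from the example script.
```python
def group_texts(examples, block_size=128):
    # Concatenate all texts for every column (input_ids, attention_mask, ...).
    concatenated = {k: sum(examples[k], []) for k in examples.keys()}
    total_length = len(concatenated[list(examples.keys())[0]])
    # Drop the small remainder so every block has exactly block_size tokens.
    total_length = (total_length // block_size) * block_size
    result = {
        k: [t[i : i + block_size] for i in range(0, total_length, block_size)]
        for k, t in concatenated.items()
    }
    result["labels"] = result["input_ids"].copy()
    return result

batch = {"input_ids": [list(range(100)), list(range(60))]}
print([len(x) for x in group_texts(batch)["input_ids"]])  # [128] -> one full block, remainder dropped
```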
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23424/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23424/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23423
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23423/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23423/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23423/events
|
https://github.com/huggingface/transformers/pull/23423
| 1,713,750,501 |
PR_kwDOCUB6oc5QtOH7
| 23,423 |
Fire
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"🔥 ",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23423). All of your documentation changes will be reflected on that endpoint."
] | 1,684 | 1,684 | 1,684 |
COLLABORATOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23423/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23423/timeline
| null | true |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23423",
"html_url": "https://github.com/huggingface/transformers/pull/23423",
"diff_url": "https://github.com/huggingface/transformers/pull/23423.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23423.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/23422
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23422/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23422/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23422/events
|
https://github.com/huggingface/transformers/pull/23422
| 1,713,052,026 |
PR_kwDOCUB6oc5Qq2Yo
| 23,422 |
Fix gradient checkpointing bugs in freezing part of models (requires_grad=False)
|
{
"login": "IrisRainbowNeko",
"id": 31194890,
"node_id": "MDQ6VXNlcjMxMTk0ODkw",
"avatar_url": "https://avatars.githubusercontent.com/u/31194890?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/IrisRainbowNeko",
"html_url": "https://github.com/IrisRainbowNeko",
"followers_url": "https://api.github.com/users/IrisRainbowNeko/followers",
"following_url": "https://api.github.com/users/IrisRainbowNeko/following{/other_user}",
"gists_url": "https://api.github.com/users/IrisRainbowNeko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/IrisRainbowNeko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/IrisRainbowNeko/subscriptions",
"organizations_url": "https://api.github.com/users/IrisRainbowNeko/orgs",
"repos_url": "https://api.github.com/users/IrisRainbowNeko/repos",
"events_url": "https://api.github.com/users/IrisRainbowNeko/events{/privacy}",
"received_events_url": "https://api.github.com/users/IrisRainbowNeko/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23422). All of your documentation changes will be reflected on that endpoint.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,684 | 1,687 | 1,687 |
CONTRIBUTOR
| null |
# What does this PR do?
Same as the [PR I opened](https://github.com/huggingface/diffusers/pull/3404) in diffusers.
Using `torch.utils.checkpoint.checkpoint` directly causes the parameters inside the checkpointed section not to be learned when part of the model's parameters are frozen. As these discussions state:
https://discuss.pytorch.org/t/use-of-torch-utils-checkpoint-checkpoint-causes-simple-model-to-diverge/116271
https://discuss.pytorch.org/t/checkpoint-with-no-grad-requiring-inputs-problem/19117/19
In PyTorch versions newer than 1.11.0, `use_reentrant=False` can be passed to fix this bug.
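A minimal sketch of the non-reentrant call (assuming PyTorch >= 1.11; the layer sizes here are arbitrary):
```python
import torch
import torch.utils.checkpoint

frozen = torch.nn.Linear(8, 8)
for p in frozen.parameters():
    p.requires_grad_(False)           # frozen part of the model
trainable = torch.nn.Linear(8, 4)     # part we still want to learn

x = torch.randn(2, 8)
hidden = frozen(x)                    # hidden.requires_grad is False here

# With the default reentrant checkpoint, `trainable` would receive no gradients,
# because none of the checkpoint inputs require grad. use_reentrant=False fixes this.
out = torch.utils.checkpoint.checkpoint(trainable, hidden, use_reentrant=False)
out.sum().backward()
print(trainable.weight.grad is not None)  # True
```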
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23422/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23422/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23422",
"html_url": "https://github.com/huggingface/transformers/pull/23422",
"diff_url": "https://github.com/huggingface/transformers/pull/23422.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23422.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/23421
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23421/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23421/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23421/events
|
https://github.com/huggingface/transformers/pull/23421
| 1,713,005,946 |
PR_kwDOCUB6oc5Qqs2-
| 23,421 |
Return early once stop token is found.
|
{
"login": "ttsugriy",
"id": 172294,
"node_id": "MDQ6VXNlcjE3MjI5NA==",
"avatar_url": "https://avatars.githubusercontent.com/u/172294?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ttsugriy",
"html_url": "https://github.com/ttsugriy",
"followers_url": "https://api.github.com/users/ttsugriy/followers",
"following_url": "https://api.github.com/users/ttsugriy/following{/other_user}",
"gists_url": "https://api.github.com/users/ttsugriy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ttsugriy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ttsugriy/subscriptions",
"organizations_url": "https://api.github.com/users/ttsugriy/orgs",
"repos_url": "https://api.github.com/users/ttsugriy/repos",
"events_url": "https://api.github.com/users/ttsugriy/events{/privacy}",
"received_events_url": "https://api.github.com/users/ttsugriy/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,684 | 1,684 | 1,684 |
CONTRIBUTOR
| null |
# What does this PR do?
Previously, even after finding a stop token, the remaining stop tokens were still considered, which is unnecessary and slows down processing.
Currently, this unnecessary overhead is negligible, since there are usually only 2 stop tokens considered and they are fairly short, but in the future it may become more expensive.
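As a toy illustration only (the names below are hypothetical and not the actual code touched by this PR), the early return looks conceptually like this:
```python
def hits_stop_token(decoded_text, stop_tokens):
    for stop in stop_tokens:
        if stop in decoded_text:
            return True  # return early: no need to check the remaining stop tokens
    return False

print(hits_stop_token("Answer: 42\nHuman:", ["<|endoftext|>", "\nHuman:"]))  # True
```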
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23421/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23421/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23421",
"html_url": "https://github.com/huggingface/transformers/pull/23421",
"diff_url": "https://github.com/huggingface/transformers/pull/23421.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23421.patch",
"merged_at": 1684328408000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23420
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23420/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23420/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23420/events
|
https://github.com/huggingface/transformers/pull/23420
| 1,712,996,550 |
PR_kwDOCUB6oc5Qqq7v
| 23,420 |
Fix a typo in HfAgent docstring.
|
{
"login": "ttsugriy",
"id": 172294,
"node_id": "MDQ6VXNlcjE3MjI5NA==",
"avatar_url": "https://avatars.githubusercontent.com/u/172294?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ttsugriy",
"html_url": "https://github.com/ttsugriy",
"followers_url": "https://api.github.com/users/ttsugriy/followers",
"following_url": "https://api.github.com/users/ttsugriy/following{/other_user}",
"gists_url": "https://api.github.com/users/ttsugriy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ttsugriy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ttsugriy/subscriptions",
"organizations_url": "https://api.github.com/users/ttsugriy/orgs",
"repos_url": "https://api.github.com/users/ttsugriy/repos",
"events_url": "https://api.github.com/users/ttsugriy/events{/privacy}",
"received_events_url": "https://api.github.com/users/ttsugriy/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,684 | 1,684 | 1,684 |
CONTRIBUTOR
| null |
# What does this PR do?
Fixes a typo in HfAgent docstring.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23420/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23420/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23420",
"html_url": "https://github.com/huggingface/transformers/pull/23420",
"diff_url": "https://github.com/huggingface/transformers/pull/23420.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23420.patch",
"merged_at": 1684312982000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23419
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23419/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23419/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23419/events
|
https://github.com/huggingface/transformers/issues/23419
| 1,712,983,143 |
I_kwDOCUB6oc5mGgxn
| 23,419 |
Save checkpoint asynchronously on cpu to keep GPU training going
|
{
"login": "ykihong0",
"id": 23263289,
"node_id": "MDQ6VXNlcjIzMjYzMjg5",
"avatar_url": "https://avatars.githubusercontent.com/u/23263289?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ykihong0",
"html_url": "https://github.com/ykihong0",
"followers_url": "https://api.github.com/users/ykihong0/followers",
"following_url": "https://api.github.com/users/ykihong0/following{/other_user}",
"gists_url": "https://api.github.com/users/ykihong0/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ykihong0/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ykihong0/subscriptions",
"organizations_url": "https://api.github.com/users/ykihong0/orgs",
"repos_url": "https://api.github.com/users/ykihong0/repos",
"events_url": "https://api.github.com/users/ykihong0/events{/privacy}",
"received_events_url": "https://api.github.com/users/ykihong0/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @ykihong0, thanks for raising this issue.\r\n\r\nSo that we can best understand the feature being discussed, is this checkpoint saving when using the `Trainer` class? ",
"Hello @amyeroberts, Thanks for reply.\r\n\r\n> So that we can best understand the feature being discussed, is this checkpoint saving when using the Trainer class?\r\n\r\n-> Yes. I hope the above feature will be supported when the save_model method in Trainer class is called ( https://github.com/huggingface/transformers/blob/v4.27.2/src/transformers/trainer.py#L2718)\r\n\r\n ",
"cc @sgugger ",
"That's not something we have on our roadmap at the moment. Happy to look at a PR if someone wants to integrate this.",
"Thaks @amyeroberts @sgugger for reply\r\nI will give a PR to integrate this feature later.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"@ykihong0 hi, could l ask is there any progress on it? Or any replacement on it "
] | 1,684 | 1,693 | 1,688 |
NONE
| null |
### Feature request
I would like to save checkpoints asynchronously during training, so that training does not have to wait for checkpoint saving to finish.
### Motivation
As model sizes grow larger and larger, saving a checkpoint during training takes more and more time.
The key problem is that the two jobs, saving the checkpoint and training the model, are synchronous: training waits until the checkpoint has finished saving.
This leaves very expensive GPUs idle while the checkpoint is being written.
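As a rough, hypothetical sketch of the idea (not part of the `Trainer` API), the state dict could be copied to CPU on the main thread and written to disk from a background thread:
```python
import threading
import torch

def async_save(model, path):
    # Copy weights to CPU on the main thread (fast relative to disk I/O) ...
    cpu_state = {k: v.detach().cpu().clone() for k, v in model.state_dict().items()}
    # ... then write them to disk in a background thread so training can continue.
    thread = threading.Thread(target=torch.save, args=(cpu_state, path), daemon=True)
    thread.start()
    return thread

model = torch.nn.Linear(4, 4)
saver = async_save(model, "checkpoint.pt")
# ... training loop keeps running here ...
saver.join()  # only needed when you must be sure the file is fully written
```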
### Your contribution
I would be happy to submit a PR, though it might require some help.
Thanks!
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23419/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23419/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23418
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23418/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23418/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23418/events
|
https://github.com/huggingface/transformers/issues/23418
| 1,712,913,373 |
I_kwDOCUB6oc5mGPvd
| 23,418 |
Request to upgrade torch version in vision model
|
{
"login": "RissyRan",
"id": 20385466,
"node_id": "MDQ6VXNlcjIwMzg1NDY2",
"avatar_url": "https://avatars.githubusercontent.com/u/20385466?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RissyRan",
"html_url": "https://github.com/RissyRan",
"followers_url": "https://api.github.com/users/RissyRan/followers",
"following_url": "https://api.github.com/users/RissyRan/following{/other_user}",
"gists_url": "https://api.github.com/users/RissyRan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RissyRan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RissyRan/subscriptions",
"organizations_url": "https://api.github.com/users/RissyRan/orgs",
"repos_url": "https://api.github.com/users/RissyRan/repos",
"events_url": "https://api.github.com/users/RissyRan/events{/privacy}",
"received_events_url": "https://api.github.com/users/RissyRan/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Good catch @RissyRan! Torch cpu didn't exist until PT 1.10 😅, so fine for me to bump this! We can just amend this two lines in the requirements file: https://github.com/huggingface/transformers/blob/a574de302f8538d73c342ee946c2cbf8c64e7a6f/examples/flax/vision/requirements.txt#L5-L8\r\n\r\nWould you like to open a PR to fix this @RissyRan?",
"I made a pull request for this change! thanks!"
] | 1,684 | 1,684 | 1,684 |
CONTRIBUTOR
| null |
### System Info
Running the vision [model](https://github.com/huggingface/transformers/tree/main/examples/flax/vision) on a Cloud TPU with JAX version 0.4.10.
### Who can help?
@amyeroberts @sanchit-gandhi
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
pip install --upgrade pip
pip install jax[tpu]==0.4.10 -f https://storage.googleapis.com/jax-releases/libtpu_releases.html
git clone https://github.com/huggingface/transformers.git
cd transformers && pip install .
pip install -r examples/flax/_tests_requirements.txt
pip install --upgrade huggingface-hub urllib3 zipp
pip install tensorflow
pip install -r examples/flax/vision/requirements.txt
```
Meet error as below:
```
ERROR: Could not find a version that satisfies the requirement torch==1.9.0+cpu (from versions: 1.11.0, 1.11.0+cpu, 1.11.0+cu102, 1.11.0+cu113, 1.11.0+cu115, 1.11.0+rocm4.3.1, 1.11.0+rocm4.5.2, 1.12.0, 1.12.0+cpu, 1.12.0+cu102, 1.12.0+cu113, 1.12.0+cu116, 1.12.0+rocm5.0, 1.12.0+rocm5.1.1, 1.12.1, 1.12.1+cpu, 1.12.1+cu102, 1.12.1+cu113, 1.12.1+cu116, 1.12.1+rocm5.0, 1.12.1+rocm5.1.1, 1.13.0, 1.13.0+cpu, 1.13.0+cu116, 1.13.0+cu117, 1.13.0+cu117.with.pypi.cudnn, 1.13.0+rocm5.1.1, 1.13.0+rocm5.2, 1.13.1, 1.13.1+cpu, 1.13.1+cu116, 1.13.1+cu117, 1.13.1+cu117.with.pypi.cudnn, 1.13.1+rocm5.1.1, 1.13.1+rocm5.2, 2.0.0, 2.0.0+cpu, 2.0.0+cpu.cxx11.abi, 2.0.0+cu117, 2.0.0+cu117.with.pypi.cudnn, 2.0.0+cu118, 2.0.0+rocm5.3, 2.0.0+rocm5.4.2, 2.0.1, 2.0.1+cpu, 2.0.1+cpu.cxx11.abi, 2.0.1+cu117, 2.0.1+cu117.with.pypi.cudnn, 2.0.1+cu118, 2.0.1+rocm5.3, 2.0.1+rocm5.4.2)
ERROR: No matching distribution found for torch==1.9.0+cpu
```
### Expected behavior
After upgrading the torch version to `1.11.0+cpu` and torchvision version to `0.12.0+cpu`, it works as expected.
```
pip3 install torch==1.11.0+cpu torchvision==0.12.0+cpu -f https://download.pytorch.org/whl/torch_stable.html
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23418/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23418/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23417
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23417/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23417/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23417/events
|
https://github.com/huggingface/transformers/pull/23417
| 1,712,835,520 |
PR_kwDOCUB6oc5QqJMt
| 23,417 |
Remove .data usages in optimizations.py
|
{
"login": "alanwaketan",
"id": 8573935,
"node_id": "MDQ6VXNlcjg1NzM5MzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/8573935?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alanwaketan",
"html_url": "https://github.com/alanwaketan",
"followers_url": "https://api.github.com/users/alanwaketan/followers",
"following_url": "https://api.github.com/users/alanwaketan/following{/other_user}",
"gists_url": "https://api.github.com/users/alanwaketan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alanwaketan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alanwaketan/subscriptions",
"organizations_url": "https://api.github.com/users/alanwaketan/orgs",
"repos_url": "https://api.github.com/users/alanwaketan/repos",
"events_url": "https://api.github.com/users/alanwaketan/events{/privacy}",
"received_events_url": "https://api.github.com/users/alanwaketan/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"@muellerzr the usage of the `.data` will greatly slow down pytorch/xla on nightly, we were hoping we can fix this issue before the next release.",
"_The documentation is not available anymore as the PR was closed or merged._",
"This is a very old and deprecated implementation since it doesn't even follow the AdamW algorithm exactly. One should use `torch.optim.AdamW` instead, which also has a fused version since pt-2.0.0 which is almost as fast as apex's fused AdamW. So really you shouldn't be using this version anyway.\r\n\r\nThe only reason it was kept is for BC for those who rely on exact results remaining exact after new `transformers` versions are released, otherwise we would have just replaced it with `torch.optim.AdamW` in the first place.\r\n\r\np.s. no objections though to making it better...",
"@stas00 Thanks for the reply. How about the adafactor then?",
"oh, sorry, I didn't see it was Adafactor too. It's hard to see from the diff as it doesn't show the class names.\r\n\r\nThis Adafactor is being used for sure, but its implementation is super old as well. So certainly it'd be a blessing to bring it up to more modern code standard.",
"@stas00 Do you mind give this pr a review? Thanks.",
"Thanks for your PR. Just to be sure though, is this all going to work with PyTorch 1.8+? 1.8 is the minimum version we offically support at the moment (for a couple more weeks at least, then 1.9 starting mid-June).",
"I'm almost 100% sure it is the case. the whole direct `.data` usage deprecation is a few years old at least.\r\n\r\nLet me quickly test it with pt-1.8",
"```\r\n$ pytest tests/optimization/test_optimization.py -k test_adafactor\r\n========================================================== test session starts ===========================================================\r\nplatform linux -- Python 3.8.8, pytest-7.3.1, pluggy-0.13.1\r\nrootdir: /mnt/nvme0/code/huggingface/transformers-master\r\nconfigfile: setup.cfg\r\nplugins: timeout-1.4.2, typeguard-2.12.1, flakefinder-1.0.0, forked-1.3.0, monitor-1.6.0, hypothesis-6.47.0, instafail-0.4.2, xdist-2.2.1\r\ncollected 3 items / 2 deselected / 1 selected\r\n\r\ntests/optimization/test_optimization.py . [100%]\r\n\r\n================================================================= PASSES =================================================================\r\n======================================================== short test summary info =========================================================\r\nPASSED tests/optimization/test_optimization.py::OptimizationTest::test_adafactor\r\n============================================== 1 passed, 2 deselected, 4 warnings in 0.16s ===============================================\r\n\r\n$ pt-ver\r\npt=1.8.2, cuda=11.1, nccl=2708\r\n```\r\n\r\nAt least the Adafactor test that we have is passing.",
"Thanks @sgugger and @stas00 for reviewing the changes."
] | 1,684 | 1,684 | 1,684 |
CONTRIBUTOR
| null |
# What does this PR do?
Direct `.data` usage is deprecated in recent releases of PyTorch. See https://github.com/pytorch/pytorch/issues/91093#issuecomment-1397317273
This change replaces all `.data` usages in optimizations.py with modern alternatives.
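For illustration only (not the exact diff in this PR), the kind of rewrite involved looks roughly like this; the parameter and update value below are made up:
```python
import torch

param = torch.nn.Parameter(torch.randn(3))
update = torch.full_like(param, 0.1)

# Old pattern: mutating .data sidesteps autograd's version tracking
param.data.add_(-update)

# Modern alternative: do the in-place update under torch.no_grad()
with torch.no_grad():
    param.add_(-update)
```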
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@connor-henderson @stas00
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23417/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23417/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23417",
"html_url": "https://github.com/huggingface/transformers/pull/23417",
"diff_url": "https://github.com/huggingface/transformers/pull/23417.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23417.patch",
"merged_at": 1684496511000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23416
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23416/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23416/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23416/events
|
https://github.com/huggingface/transformers/issues/23416
| 1,712,780,841 |
I_kwDOCUB6oc5mFvYp
| 23,416 |
Loading LLM LoRA locally does not update weights
|
{
"login": "JohnnyRacer",
"id": 77214388,
"node_id": "MDQ6VXNlcjc3MjE0Mzg4",
"avatar_url": "https://avatars.githubusercontent.com/u/77214388?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JohnnyRacer",
"html_url": "https://github.com/JohnnyRacer",
"followers_url": "https://api.github.com/users/JohnnyRacer/followers",
"following_url": "https://api.github.com/users/JohnnyRacer/following{/other_user}",
"gists_url": "https://api.github.com/users/JohnnyRacer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JohnnyRacer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JohnnyRacer/subscriptions",
"organizations_url": "https://api.github.com/users/JohnnyRacer/orgs",
"repos_url": "https://api.github.com/users/JohnnyRacer/repos",
"events_url": "https://api.github.com/users/JohnnyRacer/events{/privacy}",
"received_events_url": "https://api.github.com/users/JohnnyRacer/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @pacman100 ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"you need to load both the base model weights `pytorch_model.bin` and the adapter model weights `adapter_model.bin`.",
"```\r\nfrom peft import PeftModel\r\nmodel = PeftModel.from_pretrained(model, save_path) #save_path to contain both adapter_config.json and adapter_model.bin\r\n```\r\n\r\nthis worked for me"
] | 1,684 | 1,690 | 1,687 |
NONE
| null |
### System Info
- `transformers` version: 4.28.1
- Platform: Linux-5.4.0-148-generic-x86_64-with-glibc2.17
- Python version: 3.8.16
- Huggingface_hub version: 0.12.0
- Safetensors version: not installed
- PyTorch version (GPU?): 1.13.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
After LoRA training and saving the model with the following snippet:
```py
import torch
from transformers import LlamaTokenizer, LlamaForCausalLM,GenerationConfig
tokenizer = LlamaTokenizer.from_pretrained('decapoda-research/llama-7b-hf')
model = LlamaForCausalLM.from_pretrained('decapoda-research/llama-7b-hf', device_map="auto", torch_dtype=torch.float16)
from peft import LoraConfig, get_peft_model,TaskType
lora_config = LoraConfig(
r=16,
lora_alpha=32,
target_modules=["q_proj", "v_proj"],
lora_dropout=0.05,
bias="none",
task_type="CAUSAL_LM"
)
model = get_peft_model(model, lora_config)
# After trainer.train() is complete
model.save_pretrained('./lora_pretrained')
```
Loading LoRA from save directory:
```py
# Loading base model is same as the snippet above
lora_config = LoraConfig.from_pretrained('./lora_pretrained')
model = get_peft_model(model, lora_config)
#The model generates outputs that are the same as the base model.
```
Trying to load the `adapter_model.bin` directly via this snippet results in errors about incompatible weights:
```py
model.load_state_dict(torch.load('./lora_pretrained/adapter_model.bin'))
RuntimeError: Error(s) in loading state_dict for PeftModelForCausalLM:
Missing key(s) in state_dict: "base_model.model.model.embed_tokens.weight", "base_model.model.model.layers.0.self_attn.q_proj.weight", "base_model.model.model.layers.0.self_attn.q_proj.lora_A.weight", "base_model.model.model.layers.0.self_attn.q_proj.lora_B.weight", "base_model.model.model.layers.0.self_attn.k_proj.weight", "base_model.model.model.layers.0.self_attn.v_proj.weight", "base_model.model.model.layers.0.self_attn.v_proj.lora_A.weight", "base_model.model.model.layers.0.self_attn.v_proj.lora_B.weight", "base_model.model.model.layers.0.self_attn.o_proj.weight", "base_model.model.model.layers.0.self_attn.rotary_emb.inv_freq", "base_model.model.model.layers.0.mlp.gate_proj.weight", "base_model.model.model.layers.0.mlp.down_proj.weight", "base_model.model.model.layers.0.mlp.up_proj.weight", "base_model.model.model.layers.0.input_layernorm.weight",
```
### Expected behavior
`LoraConfig.from_pretrained` should load the updated model weights.
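As the replies above point out, the trained adapter weights are attached via `PeftModel.from_pretrained` rather than `get_peft_model` with a fresh config. A minimal sketch of that loading path (paths and checkpoint names taken from the snippets above):
```python
import torch
from transformers import LlamaForCausalLM
from peft import PeftModel

base = LlamaForCausalLM.from_pretrained(
    "decapoda-research/llama-7b-hf", device_map="auto", torch_dtype=torch.float16
)
# Reads adapter_config.json and adapter_model.bin from the save directory
model = PeftModel.from_pretrained(base, "./lora_pretrained")
```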
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23416/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23416/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23415
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23415/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23415/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23415/events
|
https://github.com/huggingface/transformers/pull/23415
| 1,712,779,530 |
PR_kwDOCUB6oc5Qp9FF
| 23,415 |
Use dict.items to avoid unnecessary lookups.
|
{
"login": "ttsugriy",
"id": 172294,
"node_id": "MDQ6VXNlcjE3MjI5NA==",
"avatar_url": "https://avatars.githubusercontent.com/u/172294?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ttsugriy",
"html_url": "https://github.com/ttsugriy",
"followers_url": "https://api.github.com/users/ttsugriy/followers",
"following_url": "https://api.github.com/users/ttsugriy/following{/other_user}",
"gists_url": "https://api.github.com/users/ttsugriy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ttsugriy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ttsugriy/subscriptions",
"organizations_url": "https://api.github.com/users/ttsugriy/orgs",
"repos_url": "https://api.github.com/users/ttsugriy/repos",
"events_url": "https://api.github.com/users/ttsugriy/events{/privacy}",
"received_events_url": "https://api.github.com/users/ttsugriy/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,684 | 1,684 | 1,684 |
CONTRIBUTOR
| null |
# What does this PR do?
It's more efficient to iterate over key, value dict pairs instead of iterating over keys and performing value lookups on each iteration. It's also more idiomatic.
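A small, self-contained illustration of the pattern this PR applies (the dictionary here is just an example):
```python
scores = {"accuracy": 0.91, "f1": 0.88}

# Before: iterate over keys and look each value up again
for name in scores:
    value = scores[name]
    print(name, value)

# After: items() yields the (key, value) pair directly
for name, value in scores.items():
    print(name, value)
```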
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23415/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23415/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23415",
"html_url": "https://github.com/huggingface/transformers/pull/23415",
"diff_url": "https://github.com/huggingface/transformers/pull/23415.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23415.patch",
"merged_at": 1684319129000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23414
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23414/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23414/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23414/events
|
https://github.com/huggingface/transformers/pull/23414
| 1,712,574,337 |
PR_kwDOCUB6oc5QpRm3
| 23,414 |
Fix smdistributed check
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,684 | 1,684 | 1,684 |
COLLABORATOR
| null |
# What does this PR do?
It turns out the `smdistributed` package does not have metadata, so #23163 made this package always seem unavailable. This fixes it.
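As a rough sketch of the failure mode (illustrative only, not the exact code in `transformers`): a metadata-based availability check fails for packages that ship without distribution metadata, whereas an import-spec check still finds them.
```python
import importlib.metadata
import importlib.util

def is_available(package_name: str) -> bool:
    # A spec lookup succeeds even for packages installed without dist metadata
    if importlib.util.find_spec(package_name) is None:
        return False
    try:
        importlib.metadata.version(package_name)
    except importlib.metadata.PackageNotFoundError:
        # No metadata, but the package is importable, so treat it as available
        pass
    return True

print(is_available("smdistributed"))
```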
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23414/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23414/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23414",
"html_url": "https://github.com/huggingface/transformers/pull/23414",
"diff_url": "https://github.com/huggingface/transformers/pull/23414.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23414.patch",
"merged_at": 1684264712000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23413
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23413/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23413/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23413/events
|
https://github.com/huggingface/transformers/issues/23413
| 1,712,530,142 |
I_kwDOCUB6oc5mEyLe
| 23,413 |
Generation issues with seq2seq LMs
|
{
"login": "abarbet",
"id": 111083160,
"node_id": "U_kgDOBp7-mA",
"avatar_url": "https://avatars.githubusercontent.com/u/111083160?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abarbet",
"html_url": "https://github.com/abarbet",
"followers_url": "https://api.github.com/users/abarbet/followers",
"following_url": "https://api.github.com/users/abarbet/following{/other_user}",
"gists_url": "https://api.github.com/users/abarbet/gists{/gist_id}",
"starred_url": "https://api.github.com/users/abarbet/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abarbet/subscriptions",
"organizations_url": "https://api.github.com/users/abarbet/orgs",
"repos_url": "https://api.github.com/users/abarbet/repos",
"events_url": "https://api.github.com/users/abarbet/events{/privacy}",
"received_events_url": "https://api.github.com/users/abarbet/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hey @abarbet 👋 \r\n\r\nThis issue may arise when beam search, sampling, and long outputs are used together. A potential bug on PyTorch itself compounds it. You can read the full story in [this issue](https://github.com/huggingface/transformers/issues/22914).\r\n\r\nTL;DR -- my immediate suggestion would be to avoid using `num_beams` and `do_sample` together. If you want to use them both, you'll have to read the issue linked above, which describes the problem and solutions :)",
"Ah thank you, that issue is very helpful! Do you have any idea why we would see a similar error in `trlX` training despite not using beam sampling? I know you don't have access to my training script and also are most likely not familiar with their codebase, so this is a complete longshot.\r\n\r\nThe only thing I can think of if it's not caused by a sampling bug is some kind of destructive learning in the PPO step that causes token distributions to get completely out of whack.",
"@abarbet It may be due to [this PyTorch issue](https://github.com/pytorch/pytorch/issues/48841), where the sampling step may pick very low probability tokens that it shouldn't and, in turn, cause computations to derail.\r\n\r\nTry running your script with PT 1.x instead of 2.0! ",
"> @abarbet It may be due to [this PyTorch issue](https://github.com/pytorch/pytorch/issues/48841), where the sampling step may pick very low probability tokens that it shouldn't and, in turn, cause computations to derail.\r\n> \r\n> Try running your script with PT 1.x instead of 2.0!\r\n\r\nFor me, this issue also occurs with pytorch 1.13.1\r\nhttps://github.com/huggingface/transformers/issues/22914#issuecomment-1562034753",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Hello, has a fix been found for this issue? Using the latest version of `transformers` and can confirm that when running inference using `model.generate()` with parameters such as `temperature` and `do_sample` causes this issue.\r\n\r\n```\r\n summary_ids = model.generate(\r\n inputs[\"input_ids\"],\r\n max_length=max_length,\r\n min_length=128,\r\n temperature=0.1,\r\n do_sample=True,\r\n # top_p=0.3\r\n )\r\n```\r\n\r\nedit: can confirm now that `do_sample` and `temperature` is the cause of the issue as `top_p` works fine for me\r\nedit2: I forgot to mention that the model that I'm using is [BRIO](https://github.com/yixinL7/BRIO), loading pre-trained weights from HF",
"@yungsinatra0 The issue should only be gone with the next PT release (i.e. `torch>2.0`)"
] | 1,684 | 1,692 | 1,687 |
NONE
| null |
### System Info
- `transformers` version: 4.27.1
- Platform: Linux-5.19.0-41-generic-x86_64-with-glibc2.35
- Python version: 3.9.12
- Huggingface_hub version: 0.13.2
- PyTorch version (GPU?): 2.0.0+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes, parallel (accelerate auto-mapping)
### Who can help?
@ArthurZucker @gante
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
This has most recently arisen in using `trlX` to do reinforcement learning on `flan-T5`. I wrote an [issue](https://github.com/CarperAI/trlx/issues/468) on their own repo, but there seems to be no response, and it is somewhat more suited to be an issue in this repo since it has to do with `transformers` code at its core.
The main issue is that `generate` with a seq2seq model, namely `flan-t5`, sometimes generates the following error: ```RuntimeError: probability tensor contains either `inf`, `nan` or element < 0```. This has been well documented in other issues like [this one](https://github.com/huggingface/transformers/issues/15169), but the behavior in that issue is more custom than calling `generate` in its standard configuration.
Here is a code example to reproduce:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
m = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-large", device_map="auto")
t = AutoTokenizer.from_pretrained("google/flan-t5-large")
in_text = """You are a highly intelligent and accurate HVAC domain Resource Description Framework (RDF) data model. You take Passage as input and convert it into HVAC domain RDF triples. A triple is a set of three entities that codifies a statement about semantic data in the form of subject–predicate–object expressions.
Your output format is only [[ subject, predicate, object ], ...] nothing else
Examples:
Input: The HV123 heating unit can supply 50W of power
Output: [[HV123, powerSupply, 50W]]
Input: Unit: ft. (m)
Model | Cooling Mode | Heating Mode
ABC123 | 28.8 (8.8) | 19.0 (5.8)
ABC456 | 28.8 (8.8) | 19.0 (5.8)
ABC789 | 28.8 (8.8) | 21.3 (6.5)
ABC987 | 29.0 (8.9) | 22.9 (7.0)
Output:"""
ins = t(in_text, return_tensors="pt").input_ids.to("cuda")
outs = m.generate(ins, do_sample=True, max_length=512, top_k=0, temperature=0.7, num_beams=2)
```
NB:
`temperature` seems to be one of the main causes of this issue, as removing this kwarg from the generate call does not produce the error in the above case. However, that is not true of all cases. I have seen the error in my `trlX` training loops with kwargs as simple as: `{"max_new_tokens": 512, "do_sample": True, "top_k": 0, "top_p": 1}`. Thus it seems this error is not always related to temperature.
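Continuing from the reproduction snippet above (reusing `m`, `t`, and `ins`), a workaround in line with the advice in the replies is to not combine beam search with sampling:
```python
# Sampling without beam search
outs_sampled = m.generate(ins, do_sample=True, max_length=512, top_k=0, temperature=0.7)

# Or plain beam search without sampling
outs_beam = m.generate(ins, num_beams=2, max_length=512)

print(t.batch_decode(outs_sampled, skip_special_tokens=True))
print(t.batch_decode(outs_beam, skip_special_tokens=True))
```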
### Expected behavior
The expected behavior in this case would be for the sampling to work every time instead of having strange edge cases where tokens are unreachable.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23413/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23413/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23412
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23412/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23412/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23412/events
|
https://github.com/huggingface/transformers/issues/23412
| 1,712,433,369 |
I_kwDOCUB6oc5mEajZ
| 23,412 |
Load model from cloud storage
|
{
"login": "Dref360",
"id": 8976546,
"node_id": "MDQ6VXNlcjg5NzY1NDY=",
"avatar_url": "https://avatars.githubusercontent.com/u/8976546?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Dref360",
"html_url": "https://github.com/Dref360",
"followers_url": "https://api.github.com/users/Dref360/followers",
"following_url": "https://api.github.com/users/Dref360/following{/other_user}",
"gists_url": "https://api.github.com/users/Dref360/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Dref360/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Dref360/subscriptions",
"organizations_url": "https://api.github.com/users/Dref360/orgs",
"repos_url": "https://api.github.com/users/Dref360/repos",
"events_url": "https://api.github.com/users/Dref360/events{/privacy}",
"received_events_url": "https://api.github.com/users/Dref360/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Thanks for opening an issue. There is no plan to support any other platform than the Hugging Face Hub for remote models.",
"That's fair.\n\nJust saw the notice around the deprecation of remote model using HTTP as well.\n\nClosing :) "
] | 1,684 | 1,684 | 1,684 |
NONE
| null |
### Feature request
I would like to load models directly from GCS/AWS
Example if your model is on Google Cloud:
```python
model = AutoModelForSequenceClassification.from_pretrained('gs://my_bucket/my_model')
tokenizer = AutoTokenizer.from_pretrained('gs://my_bucket/my_model')
pipe = pipeline('text-classification', model='gs://my_bucket/my_model')
```
The `datasets` library supports this use case through `fsspec`. I propose to also use this library.
This could also simplify the code of `PretrainedConfig` as everything would use `fsspec` except if it's on the Hub.
### Motivation
I train my models in the cloud and then I load them into another tool such as Azimuth or a local Jupyter notebook for error analysis.
I could simply upload them to the Hub, but I don't want to upload all my models to the Hub or manually upload them.
### Your contribution
I would be happy to submit a PR, might require some help.
Thanks!
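As a stopgap under the current API, one hypothetical workaround (bucket and paths are the placeholders from the example above; requires `gcsfs`) is to stage the files locally with `fsspec` and then load from the local copy:
```python
import fsspec
from transformers import AutoModelForSequenceClassification, AutoTokenizer

fs = fsspec.filesystem("gs")  # gcsfs provides the "gs" protocol
fs.get("my_bucket/my_model/", "./my_model/", recursive=True)

model = AutoModelForSequenceClassification.from_pretrained("./my_model")
tokenizer = AutoTokenizer.from_pretrained("./my_model")
```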
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23412/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23412/timeline
|
not_planned
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23411
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23411/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23411/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23411/events
|
https://github.com/huggingface/transformers/issues/23411
| 1,712,420,730 |
I_kwDOCUB6oc5mEXd6
| 23,411 |
Generative models return the same responses to all questions
|
{
"login": "serenalotreck",
"id": 41377532,
"node_id": "MDQ6VXNlcjQxMzc3NTMy",
"avatar_url": "https://avatars.githubusercontent.com/u/41377532?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/serenalotreck",
"html_url": "https://github.com/serenalotreck",
"followers_url": "https://api.github.com/users/serenalotreck/followers",
"following_url": "https://api.github.com/users/serenalotreck/following{/other_user}",
"gists_url": "https://api.github.com/users/serenalotreck/gists{/gist_id}",
"starred_url": "https://api.github.com/users/serenalotreck/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/serenalotreck/subscriptions",
"organizations_url": "https://api.github.com/users/serenalotreck/orgs",
"repos_url": "https://api.github.com/users/serenalotreck/repos",
"events_url": "https://api.github.com/users/serenalotreck/events{/privacy}",
"received_events_url": "https://api.github.com/users/serenalotreck/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @Narsil @gante ",
"Hey @serenalotreck 👋 \r\n\r\nThe models you're trying to use are not compatible with the conversational pipeline. That's why you see the same output on a given model, regardless of the input.\r\n\r\nCheck [these docs](https://huggingface.co/docs/transformers/v4.29.1/en/main_classes/pipelines#transformers.ConversationalPipeline): \"The models that this pipeline can use are models that have been fine-tuned on a multi-turn conversational task, currently: ‘microsoft/DialoGPT-small’, ‘microsoft/DialoGPT-medium’, ‘microsoft/DialoGPT-large’. See the up-to-date list of available models on [huggingface.co/models](https://huggingface.co/models?filter=conversational).\" \r\n\r\nP.S.: you might be able to get conversational-like behavior from a standard text generation pipeline, using models like [open assistant](https://huggingface.co/OpenAssistant/oasst-sft-1-pythia-12b), but we don't have step-by-step docs for that at the moment. Check the model card for high-level instructions.",
"@gante that makes sense, thank you!\r\n\r\nI'm currently looking for open source alternatives to GPT-3.5 that I can use with an API for relation extraction through a series of prompts (e.g. \"Rewrite this sentence into multiple sentences, each containing only one relation\", or \"Extract an SPO triple from the following sentence\").\r\n\r\nDo you happen to know if models other than Open Assistant can be used in the same manner? The models in the list in the code example above are all from the search results for Text Generation models, and claim to be open source alternatives to research LLMs, but even using `text-generation` type pipelines, I haven't been able to get responses that mimic what ChatGPT can do, even using GPT-2 (for example, in the Rewrite the Sentence prompt, it just adds to the sentence instead of rewriting), so I suspect I may just be doing something wrong with how I'm building my pipelines. I'll give Open Assistant a shot in the meantime!\r\n\r\nAny thoughts are appreciated, thanks!",
"@serenalotreck you can check [this leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) to see the highest scoring open-source LLMs.\r\n\r\nThe catch is that they need a carefully crafted input prompt (also known as system prompt) before they turn into helpful assistants like ChatGPT. ChatGPT also has it, but it is hidden to you. Here's a [simple example](https://github.com/LAION-AI/Open-Assistant/blob/d1da7db6e9e4c198b6b66a68291e5886db80c7f6/model/model_training/configs/config.yaml#L77), for the case of open assistant -- you may be able to find more online :)\r\n\r\n___________________________________\r\n\r\nAs per our [issues guidelines](https://github.com/huggingface/transformers/blob/main/ISSUES.md), we reserve GitHub issues for bugs in the repository and/or feature requests. For any other matters, we'd like to invite you to use our [forum](https://discuss.huggingface.co/) 🤗 ",
"And even more than the system prompt, there is usually a specific token sequence used during the model finetuning, which is critical to get a good output.\r\n\r\nFor instance OpenAssistant biggest model is using \"<|prompt_begin|><|prompter|>somethign something</s><|assistant|>\".\r\nAnd different models use different prompting. Unfortunately at this time there are too many different models released at the same time, and it's impossible to include all of these specific parts everywhere.\r\n\r\nhttps://huggingface.co/chat/ should give you an idea of what OpenAssistant model is capable of.\r\nOpenAssistant has their own front to their models https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=&cad=rja&uact=8&ved=2ahUKEwjCpZe2iPz-AhX6hv0HHdo1CsQQjBB6BAgdEAE&url=https%3A%2F%2Fopen-assistant.io%2Fchat&usg=AOvVaw2BLJ_sUF4zgiHZMHNcFVnd",
"Thank you all so much, that's super helpful!!",
"@serenalotreck this link might also be relevant to you: https://github.com/oobabooga/text-generation-webui/tree/main/characters/instruction-following\r\n\r\nIt contains the templates to manipulate specific models"
] | 1,684 | 1,684 | 1,684 |
NONE
| null |
### System Info
- `transformers` version: 4.25.1
- Platform: Linux-3.10.0-1160.80.1.el7.x86_64-x86_64-with-centos-7.9.2009-Core
- Python version: 3.7.16
- Huggingface_hub version: 0.13.3
- PyTorch version (GPU?): 1.13.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@ArthurZucker and @younesbelkada
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Problem: Different questions to a conversational pipeline result in the same answers for a given model. This problem occurs across multiple models, and occurs when a new Python session is initiated between runs.
Code to reproduce:
```python
from transformers import pipeline, Conversation
for model in ['facebook/opt-1.3b', 'bigscience/bloom-560m', 'gpt2']:
generator = pipeline(task='conversational', model=model)
convo = Conversation('Should I see a movie tonight?')
generator(convo)
for model in ['facebook/opt-1.3b', 'bigscience/bloom-560m', 'gpt2']:
generator = pipeline(task='conversational', model=model)
convo = Conversation('What do you know about biology?')
generator(convo)
```
Outputs:
* From the first for loop:
```
Conversation id: 9335b8bb-d73e-4fb0-91e3-bb0dbf62dd76
user >> Should I go see a movie tonight?
bot >> I'm not sure if this is a good idea.
Conversation id: 03c41e56-35b1-4b02-9757-4bf1c90a6f32
user >> Should I go see a movie tonight?
bot >> The first thing you need to do is to get a
The attention mask and the pad token id were not set. As a consequence, you may observe unexpected behavior. Please pass your input's `attention_mask` to obtain reliable results.
Setting `pad_token_id` to `eos_token_id`:50256 for open-end generation.
A decoder-only architecture is being used, but right-padding was detected! For correct generation results, please set `padding_side='left'` when initializing the tokenizer.
Conversation id: 62e5f6f0-7e6e-4a5c-baf6-93ea40e31b85
user >> Should I see a movie tonight?
bot >> The first time I saw the new Star Wars movie, I
```
* From the second for loop:
```
Conversation id: f14a10d8-3661-482e-8b95-bb0a417a0afd
user >> What do you know about biology?
bot >> I'm not sure if this is a good idea.
Conversation id: 24866d8e-bfc8-4ebf-825e-b90965ab60b7
user >> What do you know about biology?
bot >> The first thing you need to do is to get a good
The attention mask and the pad token id were not set. As a consequence, you may observe unexpected behavior. Please pass your input's `attention_mask` to obtain reliable results.
Setting `pad_token_id` to `eos_token_id`:50256 for open-end generation.
A decoder-only architecture is being used, but right-padding was detected! For correct generation results, please set `padding_side='left'` when initializing the tokenizer.
Conversation id: 40d35c22-cf89-4750-931e-f75a5d80431b
user >> What do you know about biology?
bot >> The first time I saw the new Star Wars movie, I
```
### Expected behavior
Sensible answers that respond to the question, rather than an out-of-the-box response that doesn't make sense in context.
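Following the pointer in the replies (the conversational pipeline expects checkpoints fine-tuned for multi-turn chat, e.g. DialoGPT), a minimal sketch:
```python
from transformers import pipeline, Conversation

# DialoGPT is one of the checkpoints the conversational pipeline docs list as supported
generator = pipeline(task="conversational", model="microsoft/DialoGPT-small")
result = generator(Conversation("Should I see a movie tonight?"))
print(result)
```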
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23411/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23411/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23410
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23410/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23410/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23410/events
|
https://github.com/huggingface/transformers/issues/23410
| 1,712,376,407 |
I_kwDOCUB6oc5mEMpX
| 23,410 |
TrainingArguments initializer changes `torch.current_device()`
|
{
"login": "joaoareis",
"id": 34096208,
"node_id": "MDQ6VXNlcjM0MDk2MjA4",
"avatar_url": "https://avatars.githubusercontent.com/u/34096208?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joaoareis",
"html_url": "https://github.com/joaoareis",
"followers_url": "https://api.github.com/users/joaoareis/followers",
"following_url": "https://api.github.com/users/joaoareis/following{/other_user}",
"gists_url": "https://api.github.com/users/joaoareis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/joaoareis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joaoareis/subscriptions",
"organizations_url": "https://api.github.com/users/joaoareis/orgs",
"repos_url": "https://api.github.com/users/joaoareis/repos",
"events_url": "https://api.github.com/users/joaoareis/events{/privacy}",
"received_events_url": "https://api.github.com/users/joaoareis/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Your expectation is not correct as `TrainingArguments` will take all GPUs available (in this case it will use DataParallel on your model for the training afterward). You need to set the `CUDA_VISIBLE_DEVICES` environment variable to limit the GPUs seen.",
"Oh, I see. Thanks for the answer!"
] | 1,684 | 1,684 | 1,684 |
CONTRIBUTOR
| null |
### System Info
```
transformers 4.28.1
python 3.9.13
torch 1.12.1
```
### Who can help?
@sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Run
```python
import torch
from transformers import TrainingArguments
torch.cuda.set_device(1)
print(torch.cuda.current_device())
training_args = TrainingArguments(output_dir="output/")
print(torch.cuda.current_device(), training_args.device)
```
Observe
```
1
0 cuda:0
```
### Expected behavior
I would expect that `torch.cuda.current_device()` would not change, and therefore to observe
```
1
1 cuda:1
```
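For reference, the approach from the maintainer's reply — restricting visible devices via the environment rather than `torch.cuda.set_device` — looks roughly like this (illustrative only):
```python
import os

# Must be set before CUDA is initialized (i.e. before the first CUDA call)
os.environ["CUDA_VISIBLE_DEVICES"] = "1"

from transformers import TrainingArguments

training_args = TrainingArguments(output_dir="output/")
print(training_args.device)  # cuda:0, which now maps to the physical GPU 1
```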
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23410/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23410/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23409
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23409/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23409/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23409/events
|
https://github.com/huggingface/transformers/pull/23409
| 1,712,371,193 |
PR_kwDOCUB6oc5QolX9
| 23,409 |
Fix parallel mode check
|
{
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,684 | 1,685 | 1,684 |
CONTRIBUTOR
| null |
# What does this PR do?
Fixes a check that relies on `distributed_state` when it might not be there
Fixes # (issue)
Should fix https://github.com/huggingface/transformers/issues/23390
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23409/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23409/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23409",
"html_url": "https://github.com/huggingface/transformers/pull/23409",
"diff_url": "https://github.com/huggingface/transformers/pull/23409.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23409.patch",
"merged_at": 1684514665000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23408
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23408/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23408/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23408/events
|
https://github.com/huggingface/transformers/pull/23408
| 1,712,292,998 |
PR_kwDOCUB6oc5QoUvB
| 23,408 |
small fix to remove unused eos in processor when it's not used.
|
{
"login": "Narsil",
"id": 204321,
"node_id": "MDQ6VXNlcjIwNDMyMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Narsil",
"html_url": "https://github.com/Narsil",
"followers_url": "https://api.github.com/users/Narsil/followers",
"following_url": "https://api.github.com/users/Narsil/following{/other_user}",
"gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Narsil/subscriptions",
"organizations_url": "https://api.github.com/users/Narsil/orgs",
"repos_url": "https://api.github.com/users/Narsil/repos",
"events_url": "https://api.github.com/users/Narsil/events{/privacy}",
"received_events_url": "https://api.github.com/users/Narsil/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,684 | 1,684 | 1,684 |
CONTRIBUTOR
| null |
# What does this PR do?
small fix to remove unused eos in processor when it's not used.
Fix https://github.com/huggingface/transformers/issues/23400
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23408/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23408/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23408",
"html_url": "https://github.com/huggingface/transformers/pull/23408",
"diff_url": "https://github.com/huggingface/transformers/pull/23408.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23408.patch",
"merged_at": 1684826857000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23407
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23407/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23407/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23407/events
|
https://github.com/huggingface/transformers/pull/23407
| 1,712,286,574 |
PR_kwDOCUB6oc5QoTXA
| 23,407 |
Fix translation no_trainer
|
{
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1936351150,
"node_id": "MDU6TGFiZWwxOTM2MzUxMTUw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Examples",
"name": "Examples",
"color": "d4c5f9",
"default": false,
"description": "Which is related to examples in general"
}
] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@sgugger it hasn't and has been failing for some time, due to the fact `num_beams` was `None` (the default in the CLI), and we need to pass it in. When checking the diff/blame there was not anything explicitly that changed in this file to have this happen, however this is the fix that is needed for the test to pass. ",
"Ah ok, got confused because it's slow."
] | 1,684 | 1,684 | 1,684 |
CONTRIBUTOR
| null |
# What does this PR do?
This PR fixes the reason the translation example has been failing, by adding the same `num_beams` value that the test was found to use.
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger, cc @ydshieh
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23407/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23407/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23407",
"html_url": "https://github.com/huggingface/transformers/pull/23407",
"diff_url": "https://github.com/huggingface/transformers/pull/23407.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23407.patch",
"merged_at": 1684257043000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23406
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23406/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23406/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23406/events
|
https://github.com/huggingface/transformers/pull/23406
| 1,712,285,300 |
PR_kwDOCUB6oc5QoTFp
| 23,406 |
Update 3 docker files to use cu118
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23406). All of your documentation changes will be reflected on that endpoint."
] | 1,684 | 1,684 | 1,684 |
COLLABORATOR
| null |
# What does this PR do?
A follow up for #23339.
I will try to build the images and run a small subset of tests to make sure this doesn't break (too many) things before merge.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23406/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23406/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23406",
"html_url": "https://github.com/huggingface/transformers/pull/23406",
"diff_url": "https://github.com/huggingface/transformers/pull/23406.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23406.patch",
"merged_at": 1684326410000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23405
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23405/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23405/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23405/events
|
https://github.com/huggingface/transformers/pull/23405
| 1,712,272,847 |
PR_kwDOCUB6oc5QoQej
| 23,405 |
Build with non Python files
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,684 | 1,684 | 1,684 |
COLLABORATOR
| null |
# What does this PR do?
It appears the non-Python files (such as CUDA kernels) have disappeared from the built package once again. For some reason they were found before with just `*.extension`, but now need `**/*.extension` (although once found again I can remove the **).
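An illustrative `setup.py` fragment (not the exact `transformers` configuration) showing the kind of glob change described here — the recursive `**` pattern is what picks up nested non-Python files:
```python
from setuptools import find_packages, setup

setup(
    name="example-package",
    packages=find_packages("src"),
    package_dir={"": "src"},
    # Recursive globs so CUDA kernels and C++ sources nested in subpackages ship too
    package_data={"": ["**/*.cu", "**/*.cpp", "**/*.cuh", "**/*.h"]},
)
```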
This is all super brittle, so this PR also adds:
- a check that the build package contains the non-Python files before we upload it on testpypi
- a check that the library installed does contain the non-Python files before we upload it on pypi
Will make a patch after this is merged.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23405/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23405/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23405",
"html_url": "https://github.com/huggingface/transformers/pull/23405",
"diff_url": "https://github.com/huggingface/transformers/pull/23405.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23405.patch",
"merged_at": 1684261391000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23404
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23404/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23404/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23404/events
|
https://github.com/huggingface/transformers/issues/23404
| 1,712,266,706 |
I_kwDOCUB6oc5mDx3S
| 23,404 |
Connection error, and we cannot find the requested files in the cached path. Please try again or make sure your Internet connection is on.
|
{
"login": "tanhanzhuo",
"id": 92439758,
"node_id": "U_kgDOBYKEzg",
"avatar_url": "https://avatars.githubusercontent.com/u/92439758?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tanhanzhuo",
"html_url": "https://github.com/tanhanzhuo",
"followers_url": "https://api.github.com/users/tanhanzhuo/followers",
"following_url": "https://api.github.com/users/tanhanzhuo/following{/other_user}",
"gists_url": "https://api.github.com/users/tanhanzhuo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tanhanzhuo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tanhanzhuo/subscriptions",
"organizations_url": "https://api.github.com/users/tanhanzhuo/orgs",
"repos_url": "https://api.github.com/users/tanhanzhuo/repos",
"events_url": "https://api.github.com/users/tanhanzhuo/events{/privacy}",
"received_events_url": "https://api.github.com/users/tanhanzhuo/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @tanhanzhuo, thanks for raising this issue! \r\n\r\nIs seems like there's a connection issue when trying to download from the hub. I've just tested locally by clearing my cache and forcing a download with `AutoTokenizer.from_pretrained('bert-base-uncased')`, which ran successfully. Have you tested the internet connection in the running environment? If so, is this only seen when downloading from the hugging face hub? ",
"> Hi @tanhanzhuo, thanks for raising this issue!\r\n> \r\n> Is seems like there's a connection issue when trying to download from the hub. I've just tested locally by clearing my cache and forcing a download with `AutoTokenizer.from_pretrained('bert-base-uncased')`, which ran successfully. Have you tested the internet connection in the running environment? If so, is this only seen when downloading from the hugging face hub?\r\n\r\nThank you for reply! The error exists for around 2 hours, but now everything works well again.\r\n\r\nDuring the error, I could load the model by setting local_files_only=True, but cannot download from the hub. Guess some strange bug occurred",
"Sorry to ask a little more. I used to meet the same problem as mentioned above. But I found that this error is not always in that way. \r\n\r\nFor example, if I want to initialize a pretrained stable diffusion model according to the demo code. The first several trials (usually around 4-5 times) will encounter this error. But if you keep on trying, it runs without any errors. \r\n\r\nI am not sure if it is possible to enhance the pretrained weights loading code, to support try downloading the weights several times, so that the user with this error can set a relatively large times to try on downloading the weights. In that way, someone with this, maybe so-called **random** `ConnectionError`, can avoid it. :pray:",
"> Sorry to ask a little more. I used to meet the same problem as mentioned above. But I found that this error is not always in that way.\r\n> \r\n> For example, if I want to initialize a pretrained stable diffusion model according to the demo code. The first several trials (usually around 4-5 times) will encounter this error. But if you keep on trying, it runs without any errors.\r\n> \r\n> I am not sure if it is possible to enhance the pretrained weights loading code, to support try downloading the weights several times, so that the user with this error can set a relatively large times to try on downloading the weights. In that way, someone with this, maybe so-called **random** `ConnectionError`, can avoid it. 🙏\r\n\r\nSame here, just kept trying and everthing went well. 🤣",
"I'm having the same problem as you",
"@tanhanzhuo ",
"just retry multiple times, it works.",
">Tks, I retry multiple times, it finally works.\r\n\r\n",
"try so many times"
] | 1,684 | 1,693 | 1,684 |
NONE
| null |
### System Info
Irrespective of version:
AutoTokenizer raises a connection error.
AutoTokenizer.from_pretrained('bert-base-uncased') takes forever and raises the connection error.
@ArthurZucker
thx
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
AutoTokenizer.from_pretrained('bert-base-uncased')
### Expected behavior
expect return the tokenizer, but got connection error:
'HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /bert-base-uncased/resolve/main/tokenizer_config.json (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x7f0e814eacd0>, 'Connection to huggingface.co timed out. (connect timeout=10)'))' thrown while requesting HEAD https://huggingface.co/bert-base-uncased/resolve/main/tokenizer_config.json
'HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /bert-base-uncased/resolve/main/config.json (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x7f0e91bc2d00>, 'Connection to huggingface.co timed out. (connect timeout=10)'))' thrown while requesting HEAD https://huggingface.co/bert-base-uncased/resolve/main/config.json
Traceback (most recent call last):
File "/opt/conda/lib/python3.9/site-packages/transformers/utils/hub.py", line 409, in cached_file
resolved_file = hf_hub_download(
File "/opt/conda/lib/python3.9/site-packages/huggingface_hub/utils/_validators.py", line 124, in _inner_fn
return fn(*args, **kwargs)
File "/opt/conda/lib/python3.9/site-packages/huggingface_hub/file_download.py", line 1211, in hf_hub_download
raise LocalEntryNotFoundError(
huggingface_hub.utils._errors.LocalEntryNotFoundError: Connection error, and we cannot find the requested files in the disk cache. Please try again or make sure your Internet connection is on.
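For what it's worth, the later replies boil down to retrying; a simple retry wrapper along those lines (attempt count and wait are arbitrary, and the exception type caught may need adjusting for your library versions):
```python
import time
from transformers import AutoTokenizer

def load_tokenizer_with_retries(name, attempts=5, wait_seconds=10):
    for attempt in range(attempts):
        try:
            return AutoTokenizer.from_pretrained(name)
        except OSError:  # connection/caching errors surface as OSError subclasses here
            if attempt == attempts - 1:
                raise
            time.sleep(wait_seconds)

tokenizer = load_tokenizer_with_retries("bert-base-uncased")
```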
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23404/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23404/timeline
|
completed
| null | null |