repo | number | state | title | body | created_at | closed_at | comments
---|---|---|---|---|---|---|---|
transformers | 14,236 | closed | Added an error when `temperature` is defined but `do_sample` is False. | # What does this PR do?
This is to prevent people from thinking that setting a temperature is enough to get sampling behavior.
I found this confusing at first.
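A minimal sketch of the confusing pattern (the model name and generation settings here are just illustrative):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
input_ids = tokenizer("Hello, my name is", return_tensors="pt").input_ids

# temperature alone does nothing here: do_sample defaults to False, so decoding stays greedy
greedy = model.generate(input_ids, temperature=0.7, max_length=20)

# sampling behavior actually requires do_sample=True
sampled = model.generate(input_ids, do_sample=True, temperature=0.7, max_length=20)
```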
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case): No, but is kind of a similar idea, in that it makes things more clear.
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section? Quickly I must admit.
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? No.
- [N/A] Did you make sure to update the documentation with your changes? N/A, this is to make implicit behavior more clear.
- [ ] Did you write any new necessary tests?: N/A
## Who can review?
@patrickvonplaten | 11-02-2021 01:29:50 | 11-02-2021 01:29:50 | @patrickvonplaten Added as an edit, not sure if it pinged you.<|||||>The test that fails has nothing to do with this change<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 14,234 | closed | Trainer bug in BertForQuestionAnswering when pretrained model wasn't trained on NSP | ## Environment info
`transformers` version: 4.12.2
- Platform: Windows-10-10.0.19041-SP0
- Python version: 3.8.5
- PyTorch version (GPU?): 1.10.0+cpu (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help
Not sure; it's more to report an unintuitive behaviour.
@LysandreJik (not sure if it's only in BERT or across the other models - really depends on how they were trained)
- Trainer: @sgugger
The problem arises when using:
my own modified scripts:
```python
# some data prep... (SquadDataset, train_encodings, val_encodings and training_args are defined elsewhere)
from transformers import BertForQuestionAnswering, Trainer

BASE_MODEL = 'onlplab/alephbert-base'
train_dataset = SquadDataset(train_encodings)
val_dataset = SquadDataset(val_encodings)
model = BertForQuestionAnswering.from_pretrained(BASE_MODEL)
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    eval_dataset=val_dataset,
)
```
But it wouldn't happen when I replace the model with `'bert-base-cased'`, for example, or any other BERT model which was trained on NSP, or which specifically has `"type_vocab_size": 2` (and not 1) in its config.
The error you get is an `Embedding` error (`index out of range in self`) when some of the indices in the `token_type_ids` embedding input are `1`.
----
The unintuitive part is that it doesn't happen on the same input when replacing the `Trainer` code with the traditional training script - which only gives `0` to all `token_type_ids`. This seems to be a bug in `BertForQuestionAnswering`, or at least it is very unintuitive, especially today when people sometimes don't use the NSP task when training BERT and use word masking instead.
The following script works when giving `0` to all `token_type_ids`:
```python
import torch
from torch.utils.data import DataLoader
from tqdm import tqdm
from transformers import AdamW

device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')
model.to(device)
model.train()
train_loader = DataLoader(train_dataset, batch_size=1, shuffle=True)
optim = AdamW(model.parameters(), lr=5e-5)
for epoch in range(3):
    for batch in tqdm(train_loader):
        optim.zero_grad()
        input_ids = batch['input_ids'].to(device)
        attention_mask = batch['attention_mask'].to(device)
        start_positions = batch['start_positions'].to(device)
        end_positions = batch['end_positions'].to(device)
        outputs = model(input_ids, attention_mask=attention_mask, start_positions=start_positions, end_positions=end_positions)
        loss = outputs[0]
        loss.backward()
        optim.step()
model.eval()
```
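A minimal sketch of a workaround (assuming `train_encodings`/`val_encodings` are the tokenizer outputs used above) is to drop or zero out `token_type_ids` before building the datasets, so the `Trainer` never forwards indices that the `type_vocab_size=1` embedding can't handle:
```python
# Hypothetical workaround sketch - not part of the original script
train_encodings.pop("token_type_ids", None)
val_encodings.pop("token_type_ids", None)

# Alternatively, zero them out instead of dropping them:
# train_encodings["token_type_ids"] = [[0] * len(ids) for ids in train_encodings["input_ids"]]
```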
## To reproduce
Steps to reproduce the behavior:
1. Train any QA (SQuAD-like) dataset on BERT using a model with `"type_vocab_size": 1`.
2. Fail to train using the first snippet (with the `Trainer`).
3. Succeed training using the "classic" snippet.
## Expected behavior
mentioned above... | 11-01-2021 21:44:49 | 11-01-2021 21:44:49 | If you just drop the `token_type_ids` key from your dataset, or replace them all by zeros, then the training will succeed, so this is not a bug in the `Trainer`, just a mistake in the processing of the dataset.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 14,233 | closed | Cl tohoku japanese roberta | null | 11-01-2021 20:16:39 | 11-01-2021 20:16:39 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi, @LysandreJik
I deeply appreciate your help on fixing CI.
Is there anything I can do for this?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Sorry for taking a while to do this - I'll push an update to get it up to date so that we may merge it in the coming days.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 14,232 | closed | improving efficiency of mlflow metric logging | Signed-off-by: Walter Martin <[email protected]>
# What does this PR do?
This PR switches MLflow metric logging over to the more efficient log_metrics API, which logs in a batch.
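A rough illustration of the difference (not the exact integration code; the metric values are made up):
```python
import mlflow

metrics = {"loss": 0.42, "learning_rate": 3e-5, "epoch": 1.0}

# Before: one tracking call per metric
for name, value in metrics.items():
    mlflow.log_metric(name, value, step=100)

# After: a single batched call
mlflow.log_metrics(metrics, step=100)
```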
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 11-01-2021 17:25:27 | 11-01-2021 17:25:27 | @sgugger , @jingyanwangms , @harshithapv , I've moved the PR to the original repository as suggested. A review would be great! |
transformers | 14,231 | closed | Re: Add Japanese RoBERTa Model | I have re-created PR as CI does not seem to trigger as expected.
The original PR is #13065
| 11-01-2021 13:35:14 | 11-01-2021 13:35:14 | @LysandreJik I hope that CI runs this time.
I will soon add missing documents π <|||||>Hey @butsugiri! For some reason the tests still didn't run, but I could push a branch and open a PR from my personal fork with your commits here: https://github.com/huggingface/transformers/pull/14233
This triggered the tests and ran them on your latest commit which now shows here (yay!).<|||||>@LysandreJik Thank you for your help! I close this PR.<|||||>...or should I keep this open? Sorry that I am confused π
I have just added documents and made few modification to address CI errors.<|||||>@butsugiri, yes, please keep this open! I'll make sure to force the CI runs on it :)<|||||>Hey @butsugiri, it seems that most of the issues in the tests could be solved with a rebase on `master`. Could you do it? I'm happy to do it too, but I'll need you to invite me to your repository so that I have commit rights.<|||||>Hi, @LysandreJik . Thanks again for your message.
I have invited you to our fork repository with `write` access.
Since I am unfamiliar with rebase command, could you do it, please? Thanks!<|||||>Thank you for the invitation! I have just rebased on `master`, and forced pushed your branch. Let's see if the tests succeed!<|||||>There are a few tests failing, do you manage to see the output of these tests when clicking on "Details"? I believe it's mostly due to the `fugashi` missing installation. Would you like me to take a look at that issue as well, or would you like to take a stab at it? :)<|||||>Thanks for rebasing and pushing! I confirmed that I can see the failed CIs from "Details".
Regarding the missing `fugashi` installation: do you have any ideas for solving this issue?
If I understood correctly, it seems that similar JapaneseBERT Tokenizer is ignoring the test when `fugashi` is unavailable.
https://github.com/huggingface/transformers/blob/master/tests/test_tokenization_bert_japanese.py#L124-L127
However, I suppose that there may be a better way π€
<|||||>That's the favored way, and we can install fugashi in a run such as this one: https://github.com/huggingface/transformers/blob/master/.circleci/config.yml#L516
This fits perfectly, as this too is a custom tokenizer :)<|||||>I see that there is already a CI config for custom tokenizer π
According to your link, pip install should be running with `ja` option, which includes the installation of `fugashi` https://github.com/huggingface/transformers/blob/master/setup.py#L234
Would you please take a look at fugashi installation failure?
<|||||>Let's see, I'm re-running the CI<|||||>Tried to fix it here, will see: https://github.com/huggingface/transformers/pull/14233<|||||>`do_zenkaku=True` seems harmful when it is used with `[MASK]` or `<mask>` on `FillMaskPipeline`. How do you think @butsugiri ?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 14,230 | closed | CoNLL2003 ner_tags order mismatch between the dataset from HF and the pretrained model | @patrickvonplaten @dslim23
@dslim23 's pretrained models such as:
https://huggingface.co/dslim/bert-base-NER
have the following NER tag order baked in:
`"O", "B-MISC", "I-MISC", "B-PER", "I-PER", "B-ORG", "I-ORG", "B-LOC", "I-LOC"`
while the https://huggingface.co/datasets/conll2003 dataset has:
`O (0), B-PER (1), I-PER (2), B-ORG (3), I-ORG (4) B-LOC (5), I-LOC (6) B-MISC (7), I-MISC (8)`
The mismatch leads to defunct accuracy measurements out of the box for the pretrained NER models; try, for instance:
`python examples/pytorch/token-classification/run_ner.py --model_name_or_path dslim/bert-base-NER --dataset_name conll2003 --output_dir /tmp/test-ner --do_eval`
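A hedged sketch of the kind of remapping needed to compare the two schemes (label lists copied from above; this is not part of the official script):
```python
# hypothetical helper: map the model's prediction ids onto the dataset's label ids
model_labels = ["O", "B-MISC", "I-MISC", "B-PER", "I-PER", "B-ORG", "I-ORG", "B-LOC", "I-LOC"]
dataset_labels = ["O", "B-PER", "I-PER", "B-ORG", "I-ORG", "B-LOC", "I-LOC", "B-MISC", "I-MISC"]

model_id_to_dataset_id = {i: dataset_labels.index(label) for i, label in enumerate(model_labels)}

# e.g. the model's id 1 ("B-MISC") corresponds to id 7 in the dataset's ordering
assert model_id_to_dataset_id[1] == 7
```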
| 11-01-2021 13:29:18 | 11-01-2021 13:29:18 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>@julien-c - another good example for HF Hub issue support haha<|||||>@vshampor - let's see what @dslim23 thinks :-)<|||||>@patrickvonplaten thanks for the ping β though in the case it's the script that should be able to remap labels no? The model looks correctly defined with https://huggingface.co/dslim/bert-base-NER/blob/main/config.json<|||||>(a bit like the roberta case that's hardcoded in example scripts when it should be supported automatically?)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Ok sorry only took a closer look now. I think what happened here is that the model was trained with an older version of `run_ner.py` where the `id2label` was not adapted to the dataset yet which then leads to problems when one only wants to evaluate the model. @sgugger - should we maybe add some logic like:
"If --do_train is False and config has id2label list then use that" instead of forcing to overwrite it depending on the dataset?<|||||>That logic is in the text classification examples but not the token classification examples indeed. Will try to fix this during the week (or did you want to do it?)<|||||>> That logic is in the text classification examples but not the token classification examples indeed. Will try to fix this during the week (or did you want to do it?)
Happy to take it over - I have some free time this week<|||||>Go ahead then :-)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 14,229 | closed | How can i add a Bi-LSTM layer on top of some model?like xlm_roberta | I'm using pytorch and I'm using the base pretrained bert to classify sentences for hate speech. I want to implement a Bi-LSTM layer that takes as an input all outputs of the latest transformer encoder from the bert model as a new model (class that implements nn.Module), and i got confused with the nn.LSTM parameters. | 11-01-2021 13:21:15 | 11-01-2021 13:21:15 | Hi,
Please ask training-related questions on our [forum](https://discuss.huggingface.co/) instead of here, as we'd like to keep Github issues for bugs/feature requests.
Thanks! |
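For reference, a minimal sketch of the kind of wiring being asked about (hidden sizes, pooling choice and the base checkpoint are arbitrary assumptions):
```python
import torch.nn as nn
from transformers import AutoModel

class BertBiLSTMClassifier(nn.Module):
    def __init__(self, model_name="bert-base-uncased", lstm_hidden=256, num_labels=2):
        super().__init__()
        self.bert = AutoModel.from_pretrained(model_name)
        # nn.LSTM: input_size = BERT hidden size; bidirectional doubles the output size
        self.lstm = nn.LSTM(
            input_size=self.bert.config.hidden_size,
            hidden_size=lstm_hidden,
            batch_first=True,
            bidirectional=True,
        )
        self.classifier = nn.Linear(2 * lstm_hidden, num_labels)

    def forward(self, input_ids, attention_mask=None):
        # (batch, seq_len, hidden_size): all token outputs of the last encoder layer
        sequence_output = self.bert(input_ids, attention_mask=attention_mask).last_hidden_state
        lstm_out, _ = self.lstm(sequence_output)  # (batch, seq_len, 2 * lstm_hidden)
        pooled = lstm_out[:, 0]                   # e.g. use the first token's representation
        return self.classifier(pooled)
```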
transformers | 14,228 | closed | input_ids is None when evaluating | Information
=====
When evaluating model checkpoints, I find that input_ids passed to the model is None, but encoder_outputs are available instead. I'd like to obtain the value of input_ids, what should I do? Thanks.
To reproduce
=====
run the following command:
```bash
python run_summarization.py \
    --model_name_or_path facebook/bart-large \
    --do_eval \
    --dataset_name cnn_dailymail \
    --dataset_config "3.0.0" \
    --source_prefix "summarize: " \
    --output_dir /tmp/tst-summarization \
    --per_device_train_batch_size=4 \
    --per_device_eval_batch_size=4 \
    --overwrite_output_dir \
    --predict_with_generate
```
and print(input_ids) in the forward functions of the model. | 11-01-2021 11:34:30 | 11-01-2021 11:34:30 | I find that evaluating will run a bart encoder in advance, I add a self.input_ids = input_ids in the encoder, and then can obtain it easily. |
transformers | 14,227 | closed | Can't access Huggingface page and can't download models from laptop. Could it be an IP ban? | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.3.3
- Platform: Darwin-20.6.0-x86_64-i386-64bit
- Python version: 3.7.9
- PyTorch version (GPU?): 1.9.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help
@LysandreJik
### Error
ERR_NAME_NOT_RESOLVED

Dev tools response when loading the page

Cannot download pre-trained models from python
`ValueError: Connection error, and we cannot find the requested files in the cached path. Please try again or make sure your Internet connection is on.`
When I access the page from another device connected to the same network, it works. I could access the site few days back. | 11-01-2021 11:05:36 | 11-01-2021 11:05:36 | Hello! We don't IP ban. Could it be linked to your OS's DNS cache? cc @Pierrci <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 14,226 | closed | Fixing typo in error message. | # What does this PR do?
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@nielsRogge
| 11-01-2021 10:46:19 | 11-01-2021 10:46:19 | @NielsRogge friendly ping. |
transformers | 14,225 | closed | ChunkPipeline (batch_size enabled on `zero-cls` and `qa` pipelines). | # What does this PR do?
It enables `batch_size` on the two pipelines where it wasn't possible previously.
The main roadblock for these is that `preprocess` would already return tensors that would contain part of the `batch_size`, meaning we couldn't use `Pipeline.batch_size` independently of the processed data. Both concepts were not orthogonal.
It also means there was no way to limit the size of the batch on a very large input item, by chunking the inputs being fed to the model.
The problem is being solved by changing some assumptions about the format of `Pipeline`, and by creating `ChunkPipeline` (since we didn't want to change ALL pipelines for these two).
These modifications should yield exactly 0 breaking changes, and really just enable batching for these two pipelines.
They differ from `Pipeline` simply because `preprocess` will not return just one item, but an iterator instead (SquadFeatures for `qa`, and a list of [sentence, hypothesis] for `zero-shot`).
```python
item = ...
model_inputs = pipe.preprocess(item)
model_outputs = pipe._forward(model_inputs)
outputs = pipe.postprocess(model_outputs)
```
Now becomes:
```python
item = ...
all = []
for model_inputs in pipe.preprocess(item):
    model_outputs = pipe._forward(model_inputs)
    all.append(model_outputs)
output = pipe.postprocess(all)
```
When doing batching this becomes a bit more convoluted since we need to keep track of 2 things, where we are in the batch currently, and do we still need to accumulate in the `all` variable or not.
Everything is handled in iterators but it's equivalent to:
```python
all_items = ...
batch = []
all_model_outputs = []
for item in all_items:
    for model_inputs in pipe.preprocess(item):
        batch.append(model_inputs)
        if len(batch) != batch_size:
            continue
        model_outputs = pipe._forward(collate(batch))
        batch = []
        for model_output in model_outputs:
            # accumulate until the chunk marked `is_last`, then postprocess the whole item
            all_model_outputs.append(model_output)
            if model_output["is_last"]:
                output = pipe.postprocess(all_model_outputs)
                all_model_outputs = []
                yield output
```
We are doing that using a special `is_last` boolean value that needs to be sent by the `preprocess` function and passed along (as a tensor) by the `_forward` function. That's because it's a bit hard for the iterators to deduce that information on their own if it's not propagated by the `_forward` function. Making it explicit makes reasoning about this clearer as the pipeline is not responsible both for sending it and passing it along (it's still magically accumulated back as a list before sending to `postprocess`).
This could be entirely magically inferred, but it feels like the complexity incurred would not be worth it (it's already complex enough).
<table>
<tr>
<td>
Pipeline
<img src="https://user-images.githubusercontent.com/204321/140971945-5e20a775-c291-4640-8c2e-f349337a4fea.jpeg" width=200>
</td>
<td>
Pipeline + batch_size
<img src="https://user-images.githubusercontent.com/204321/140971950-6ebabf33-6e93-4975-b397-26d640aa6321.jpeg" width=200>
</td>
</tr>
<tr>
<td>
ChunkPipeline
<img src="https://user-images.githubusercontent.com/204321/140971957-e9278c13-0847-4c64-85c9-3e03fbb66e9b.jpeg" width=200>
</td>
<td>
ChunkPipeline + batch_size
<img src="https://user-images.githubusercontent.com/204321/140971955-550ee033-e26e-41f4-b81f-28bb2195879b.jpeg" width=200>
</td>
</tr>
</table>
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 11-01-2021 10:12:47 | 11-01-2021 10:12:47 | @LysandreJik I split this PR in 2, where one holds the ASR part, and this just enables `batch_size` for QA and zero-shot.
It has benefits, but also increases complexity of the overall code. Not 100% sure about this.<|||||>Ok thanks, this is a big PR, will review it in the coming days.<|||||>@LysandreJik I added a bunch of documentation as comments for now.
When this pass it could be helpful to add stuff in the main documentation (mainly for pipeline creation).
<|||||>Thanks for adding comments, this looks good. Would you like to merge it now or to include the documentation changes in it before merging?<|||||>I want to add some documentation for this now<|||||>@LysandreJik I think it's better.
For the documentation, I kept it slightly minimal. Actually I think part of the documentation on `pipeline` belongs to this page https://huggingface.co/docs/transformers/task_summary but it seems that this page talks about a little more than just the pipeline, and also is missing quite a few tasks (and there's also the great work of @merveenoyan on tasks https://github.com/huggingface/huggingface_hub/pull/509 that we might want to link to and/or refer) |
transformers | 14,224 | closed | Tensor location is already handled | in `base.py` not in subclasses.
# What does this PR do?
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@LysandreJik
| 11-01-2021 10:11:48 | 11-01-2021 10:11:48 |
transformers | 14,223 | closed | Fixing `image-segmentation` tests. | # What does this PR do?
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@LysandreJik
| 11-01-2021 09:04:17 | 11-01-2021 09:04:17 |
transformers | 14,222 | closed | Bart model converted ONNX inference | Hi, I followed the instructions to convert BART-LARGE-CNN model to ONNX here (https://github.com/huggingface/transformers/blob/master/docs/source/serialization.rst) using transformers.onnx script. The model was exported fine and I can run inference.
However, the results of the inference, from the 'last_hideen_state' are in logits (I think)? How can I parse this output for summarization purposes?
Here are screenshots of what I've done.

This is the resulting output from those two states:

| 11-01-2021 03:34:41 | 11-01-2021 03:34:41 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Hey @ZiyueWangUoB by default, the `transformers.onnx` package exports models using the `--features=default` flag. This corresponds to exporting an `AutoModel` topology, but since you're interested in summarization, you'll want to use the `seq2seq-lm` features that export an `AutoModelForSeq2SeqLM` topology.
This topology is not currently support for BART, but will be once #14358 is merged.
This will allow you to run:
```python
python -m transformers.onnx --model=facebook/bart-large-cnn --features=seq2seq-lm onnx/
```
which will produce and ONNX model whose outputs are `logits` instead of `last_hidden_state` and `encoder_last_hidden_state`. You will still have to implement your own algorithm for text generation (e.g. beam search), so you might be interested in checking out this [example](https://github.com/huggingface/transformers/tree/master/examples/onnx/pytorch/summarization) which does that.
FYI you can find the model's output names from the ONNX config, e.g.
```python
from transformers import AutoConfig, AutoModelForSeq2SeqLM
from transformers.models.bart import BartOnnxConfig
model_ckpt = "facebook/bart-large-cnn"
config = AutoConfig.from_pretrained(model_ckpt)
onnx_config = BartOnnxConfig(config, task="default")
onnx_config.outputs
# OrderedDict([('last_hidden_state', {0: 'batch', 1: 'sequence'}),
# ('encoder_last_hidden_state', {0: 'batch', 1: 'sequence'})])
```
<|||||>If i wish to use a [distilbart model](https://huggingface.co/sshleifer/distilbart-cnn-6-6) could i use the linked example directly for beam search? Also the linked issued #14358 has been merged, and i tried using the `--features=seq2seq-lm ` flag but I got the following error message:
`ValueError: bart doesn't support feature seq2seq-lm. Supported values are: ['default']`<|||||>Hey @sorenmc AFAIK the linked example should work with DistilBART, but please open a new issue if it doesn't.
Regarding #14358 we had to revert it to handle some issues in the tests. The new PR to track is #14700 <|||||>For anyone in my position: I still have not tried this, but will give an update here when i have!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Hey @lewtun
> Hey @ZiyueWangUoB by default, the `transformers.onnx` package exports models using the `--features=default` flag. This corresponds to exporting an `AutoModel` topology, but since you're interested in summarization, you'll want to use the `seq2seq-lm` features that export an `AutoModelForSeq2SeqLM` topology.
>
> This topology is not currently support for BART, but will be once #14358 is merged.
>
> This will allow you to run:
>
> ```python
> python -m transformers.onnx --model=facebook/bart-large-cnn --features=seq2seq-lm onnx/
> ```
>
> which will produce and ONNX model whose outputs are `logits` instead of `last_hidden_state` and `encoder_last_hidden_state`. You will still have to implement your own algorithm for text generation (e.g. beam search), so you might be interested in checking out this [example](https://github.com/huggingface/transformers/tree/master/examples/onnx/pytorch/summarization) which does that.
>
> FYI you can find the model's output names from the ONNX config, e.g.
>
> ```python
> from transformers import AutoConfig, AutoModelForSeq2SeqLM
> from transformers.models.bart import BartOnnxConfig
>
> model_ckpt = "facebook/bart-large-cnn"
> config = AutoConfig.from_pretrained(model_ckpt)
> onnx_config = BartOnnxConfig(config, task="default")
> onnx_config.outputs
> # OrderedDict([('last_hidden_state', {0: 'batch', 1: 'sequence'}),
> # ('encoder_last_hidden_state', {0: 'batch', 1: 'sequence'})])
> ```
Hello, @lewtun I am trying the same scenario, The example guide URL for beam_search is returning 404. (https://github.com/huggingface/transformers/tree/master/examples/onnx/pytorch/summarization) Can you please post the latest URL<|||||>> For anyone in my position: I still have not tried this, but will give an update here when i have!
Hey @sorenmc, If you have tried this approach.. can you please attach a code snippet here. It will be mighty helpful<|||||>Hi, @mohanvamsibu-kore summarization example was moved here: https://github.com/huggingface/transformers/tree/master/examples/research_projects/onnx/summarization<|||||>Hi, @TonyMas Thank You. I have implemented summarization from the model "lidiya/bart-large-xsum-samsum", the ONNX model was extremely fast but, I see that the beam_search is very slow which is taking a major chunk of the time (~ 9 secs ) in CPU. I tried with greedy search as well, which is taking ~3-4 secs. so,
1. Is there a way to optimize beam_search?
2. Can I run greedy_search on GPU? If Yes, Please let me know the steps<|||||>@TonyMas Can you please help me with the above concerns? I have also tried with the same example provided under https://github.com/huggingface/transformers/tree/master/examples/research_projects/onnx/summarization. It took ~10 secs on GPU. for the input of ~1000 characters. Please let me know if I can reduce the time<|||||>Hey @mohanvamsibu-kore, I am also interested in exporting `lidiya/bart-large-xsum-samsum` on ONNX. I would love to see your code and see how we can speed it up. Can you share the code? <|||||>> > For anyone in my position: I still have not tried this, but will give an update here when i have!
>
> Hey @sorenmc, If you have tried this approach.. can you please attach a code snippet here. It will be mighty helpful
Sorry have been on vacation, and i have sadly not had the time <|||||>I have been testing the [Bart + Beam Search to ONNX](https://github.com/huggingface/transformers/tree/master/examples/research_projects/onnx/summarization) example but it seems that the [attention_mask layer is fixed](https://github.com/huggingface/transformers/blob/c1aaa439350051acdcd585946e91525502a6b063/examples/research_projects/onnx/summarization/run_onnx_exporter.py#L134) to the sample input used when exporting the model. Setting it up like the inputs_ids in the dynamic_axes fix the issue.
The point is that testing the model with some texts returns pretty much the same tokens from the input text. Do you have the same experience? We really need this [feature](https://github.com/huggingface/optimum/issues/55) from optimum, any updates on this?<|||||>Hey @jspablo we're currently discussing internally on the best approach for supporting text generation and other inference tasks within `optimum`. We don't have a timeline on this yet, but I'll report back once we have a clearer picture on this.
cc @philschmid @mfuntowicz <|||||>any update?<|||||>Yes see: https://huggingface.co/docs/optimum/main/en/onnxruntime/modeling_ort#optimum.onnxruntime.ORTModelForSeq2SeqLM<|||||>> I have been testing the [Bart + Beam Search to ONNX](https://github.com/huggingface/transformers/tree/master/examples/research_projects/onnx/summarization) example but it seems that the [attention_mask layer is fixed](https://github.com/huggingface/transformers/blob/c1aaa439350051acdcd585946e91525502a6b063/examples/research_projects/onnx/summarization/run_onnx_exporter.py#L134) to the sample input used when exporting the model. Setting it up like the inputs_ids in the dynamic_axes fix the issue. The point is that testing the model with some texts returns pretty much the same tokens from the input text. Do you have the same experience? We really need this [feature](https://github.com/huggingface/optimum/issues/55) from optimum, any updates on this?
Found a fix for this yet?<|||||>> Yes see: https://huggingface.co/docs/optimum/main/en/onnxruntime/modeling_ort#optimum.onnxruntime.ORTModelForSeq2SeqLM
Updated link: https://huggingface.co/docs/optimum/main/en/onnxruntime/package_reference/modeling_ort#optimum.onnxruntime.ORTModelForSeq2SeqLM
This allows to basically do the inference with ONNX Runtime, while still using the `generate()` from PyTorch:
```python
from transformers import AutoTokenizer
from optimum.onnxruntime import ORTModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large-xsum")
# instead of: `model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-large-xsum")`
# the argument `from_transformers=True` handles the ONNX export on the fly.
model = ORTModelForSeq2SeqLM.from_pretrained("facebook/bart-large-xsum", from_transformers=True, use_cache=True)
to_summarize = "The Bart model was proposed in BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension by Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov and Luke Zettlemoyer on 29 Oct, 2019."
inputs = tokenizer(to_summarize, return_tensors="pt")
gen_tokens = model.generate(**inputs)
outputs = tokenizer.batch_decode(gen_tokens)
print(outputs)
# prints: ['</s>A new model for training artificial intelligence systems has been proposed by a group of researchers at the University of Oxford.</s>']
```
Alternatively, you can [export the model](https://huggingface.co/docs/optimum/main/en/exporters/onnx/usage_guides/export_a_model) offline and load it later:
```
optimum-cli export onnx --model facebook/bart-large-xsum --task seq2seq-lm-with-past --for-ort bart_onnx/
```<|||||>> > Yes see: https://huggingface.co/docs/optimum/main/en/onnxruntime/modeling_ort#optimum.onnxruntime.ORTModelForSeq2SeqLM
>
> Updated link: https://huggingface.co/docs/optimum/main/en/onnxruntime/package_reference/modeling_ort#optimum.onnxruntime.ORTModelForSeq2SeqLM
>
> This allows to basically do the inference with ONNX Runtime, while still using the `generate()` from PyTorch:
>
> ```python
> from transformers import AutoTokenizer
> from optimum.onnxruntime import ORTModelForSeq2SeqLM
>
> tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large-xsum")
>
> # instead of: `model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-large-xsum")`
> # the argument `from_transformers=True` handles the ONNX export on the fly.
> model = ORTModelForSeq2SeqLM.from_pretrained("facebook/bart-large-xsum", from_transformers=True, use_cache=True)
>
> to_summarize = "The Bart model was proposed in BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension by Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov and Luke Zettlemoyer on 29 Oct, 2019."
>
> inputs = tokenizer(to_summarize, return_tensors="pt")
>
> gen_tokens = model.generate(**inputs)
> outputs = tokenizer.batch_decode(gen_tokens)
> print(outputs)
> # prints: ['</s>A new model for training artificial intelligence systems has been proposed by a group of researchers at the University of Oxford.</s>']
> ```
>
> Alternatively, you can [export the model](https://huggingface.co/docs/optimum/main/en/exporters/onnx/usage_guides/export_a_model) offline and load it later:
>
> ```
> optimum-cli export onnx --model facebook/bart-large-xsum --task seq2seq-lm-with-past --for-ort bart_onnx/
> ```
thought the main selling point of using ONNX is speed. but the inference using ORTModelForSeq2SeqLM:
`model.generate(**inputs)`
is 2x slower than inference using a pipeline:
`pipeline("summarization", model="facebook/bart-large-xsum")`
can you please elaborate on why this is the case? is there some magic happening inside pipeline()?<|||||>Could you give me your transformers and optimum versions? There is a critical bug if you use transformers==4.26 and optimum==1.6.3, it has been fixed in the 1.6.4 release.
If you would like to open an issue in Optimum repo with a reproducible script, I can have a look from there! |
transformers | 14,221 | closed | Hidden states of BertForPreTraining (loaded from a TF ckpt of Google's original BERT) not exactly equal to the output of extract_features.py in Google's original BERT | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version:
- Platform: ubuntu 20.04
- Python version: 3.8
- PyTorch version (GPU?): 4.10.2
- Tensorflow version (GPU?): 2.4.0
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help
@LysandreJik
## Information
Model I am using (Bert, XLNet ...):
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Load the bert-base-chinese official ckpt into a PyTorch BertForPreTraining model.
2. Get the hidden states of the -2 (or -1) layer.
3. Run extract_features.py to get the -2 (or -1) layer hidden states of the original ckpt model with TensorFlow.
4. The values on the two sides are not exactly the same; there is some bias.
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
A PyTorch BERT loaded from the Google ckpt should produce the same outputs as the original TensorFlow BERT.
<!-- A clear and concise description of what you would expect to happen. -->
| 10-31-2021 14:55:47 | 10-31-2021 14:55:47 | Hi,
Would you mind elaborating an approximate range of the bias? Also, is the hidden states output of BertForPreTraining deterministic?
If the output is deterministic but slightly different than TensorFlow's output (the differences are smaller than roughly 1e-5), this is probably a normal behavior due to different BLAS implementations dependent on platform, framework...etc.
See also: https://github.com/pytorch/pytorch/issues/9146#issuecomment-409331986<|||||>@qqaatw thank you for answering. Yes, the hidden states output of both BerForPreTraining and google-research bert are deterministic.
Sometimes the bias is small, maybe 1e-3~1e-4, when using chinese_L-12_H-768_A-12, as shown below:
google-research: [0.41173, 0.086385, 0.705549, 0.224586, 0.751009, -1.071174, -0.455632, -0.390582, -0.523216, 0.520333,...]
bertforpretraining:[0.411758, 0.0876196, 0.705667, 0.224652, 0.75167, -1., -0.45543, -0.391009, -0.524803, 0.518317,...]
Sometimes the bias is big, maybe 1e-2~1e-1, when using a fine-tuned model from chinese_L-12_H-768_A-12, as shown below:
google-research: [0.000858, 0.355273, -0.711266, 0.258692, 1.342211, -0.072978, -0.238096, 0.288613, -0.121792, -0.37079, ...]
bertforpretraining:[0.017701, 0.348385, -0.742679, 0.240423, 1.337542, -0.0840113, -0.23040, 0.281977, -0.1528175, -0.3525075, ...]<|||||>@qqaatw
ps:
The hidden_states from `BertForPreTraining` were fetched as follows:
```python
from transformers import BertConfig, BertForPreTraining, BertTokenizerFast

config = BertConfig.from_json_file('d:/workspace/bert-google/chinese_L-12_H-768_A-12/bert_config.json')
model = BertForPreTraining.from_pretrained('d:/workspace/bert-google/chinese_L-12_H-768_A-12/bert_model.ckpt', from_tf=True, config=config)
tokenizer = BertTokenizerFast.from_pretrained('d:/workspace/bert-google/chinese_L-12_H-768_A-12/')
inputs = tokenizer('ηθ§ζη«ιΎζδΊε', return_tensors='pt')
outputs = model(**inputs, output_hidden_states=True)
print(outputs.hidden_states[-1][0, 0, :10].tolist())
```
And features of each output layer were fetched using extract_features.py post by google-research git repo. <|||||>Hey @bengshaoye,
```
config = BertConfig.from_json_file('d:/workspace/bert-google/chinese_L-12_H-768_A-12/bert_config.json')
```
Could you change the hidden_act from `gelu` to `gelu_new` in `bert_config.json` and try again?<|||||>Thanks a lot.
gelu_new works fine for BertForPreTraining, now they look the same with gelu in original bert and gelu_new in transformers bert. |
transformers | 14,220 | closed | Model doc examples for GPT-j-6b are slightly misleading for the GPU context | # π Feature request
The default example can provide a bit more info for GPU / CUDA users.
First part I got stuck on yesterday was selecting a device:
```python
# Model init + CUDA device
device = torch.device("cuda")
model = GPTJForCausalLM.from_pretrained("EleutherAI/gpt-j-6B", revision="float16", low_cpu_mem_usage=True)
model.to(device)
```
There's also tokenizer that should use the same device, otherwise a CPU gets involved and RAM use gets excessive:
```python
# Tokenizer use
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(device)
```
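For completeness, a rough end-to-end sketch of the GPU flow being described (the prompt and the generation settings here are arbitrary examples):
```python
import torch
from transformers import AutoTokenizer, GPTJForCausalLM

device = torch.device("cuda")
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
model = GPTJForCausalLM.from_pretrained(
    "EleutherAI/gpt-j-6B", revision="float16", torch_dtype=torch.float16, low_cpu_mem_usage=True
)
model.to(device)

prompt = "The tallest mountain in the world is"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(device)

generated = model.generate(input_ids, do_sample=True, temperature=0.9, max_length=64)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```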
## Motivation
Current documentation can provide more info for CUDA users.
## Your contribution
I would like to add a PR to the doc later, if my idea here makes sense. | 10-31-2021 13:58:59 | 10-31-2021 13:58:59 | Hi,
That's actually how any PyTorch model works, this is not really specific to GPT-J. If you want to run PyTorch models on GPU, you need to send both your model and data to the GPU. This is explained in the [PyTorch intro tutorial](https://pytorch.org/tutorials/beginner/blitz/cifar10_tutorial.html).
We do assume this knowledge, but if you think the documentation can be improved, feel free to open a PR.<|||||>> but if you think the documentation can be improved, feel free to open a PR.
Alright, please don't close the issue yet, I will do a PR when I have a bit of time.
I've already talked to several people who wanted to use GPT-j as an API part, locally or not, and error messages appearing aren't always pointing in the right direction.<|||||>I'd rarely ask for help - but I must have taken a wrong turn.
I enable lfs + clone 24gb / pytorch_model.bin
git clone https://huggingface.co/EleutherAI/gpt-j-6B
I read instructions -
just says - do this
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6B")
```
I shifted directories around so that there's a relative folder `EleutherAI/` containing the cloned `gpt-j-6B` repo.
I tried with and without the "./" before EleutherAI:
tokenizer = AutoTokenizer.from_pretrained("./EleutherAI/gpt-j-6B")
model = AutoModelForCausalLM.from_pretrained("./EleutherAI/gpt-j-6B")
but I keep hitting this error:
```shell
Traceback (most recent call last):
File "test.py", line 3, in <module>
tokenizer = AutoTokenizer.from_pretrained("./EleutherAI/gpt-j-6B")
File "/home/jp/miniconda3/envs/torch/lib/python3.8/site-packages/transformers/tokenization_auto.py", line 206, in from_pretrained
config = AutoConfig.from_pretrained(pretrained_model_name_or_path, **kwargs)
File "/home/jp/miniconda3/envs/torch/lib/python3.8/site-packages/transformers/configuration_auto.py", line 206, in from_pretrained
config_class = CONFIG_MAPPING[config_dict["model_type"]]
KeyError: 'gptj'
```
maybe the git didn't clone correctly?
<|||||>@johndpope you don't need to clone the model repository, the `EleutherAI/gpt-j-6B` identifier will directly fetch from the hub. Of course, cloning the repo and using a local path also works.
Your error indicates that your `transformers` version does not have the GPT-J model available - what is your version? Can you update to a more recent version and let us know if it fixes your issue? Thank you.<|||||>Thanks @LysandreJik - solved one problem - probably going to just need more RAM.
pip uninstall transformers # v 3.0
pip install transformers # 4.12
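(A quick way to confirm which version is actually being picked up after the upgrade:)

```python
import transformers

print(transformers.__version__)  # GPT-J needs a recent release; 4.12 worked here
```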
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 14,219 | closed | Raising exceptions instead of using assertions for few models | # What does this PR do?
- The PR addresses issue #12789 in a few transformer scripts.
- I didn't try to tinker with a lot of the scripts since it's my 1st PR.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger | 10-31-2021 10:02:03 | 10-31-2021 10:02:03 | |
transformers | 14,218 | closed | "Size mismatch" when using the same type of model as pretraining during finetuning while "num_labels" is diffierent | Model - Roberta
The details of this issue are as follows:
Code used:
```
pretrain model and param: RobertaForSequenceClassification with num_labels=3
pretrain model saving method: model.save_pretrained(save_path)
finetune model and param: RobertaForSequenceClassification with num_labels=2
finetune load model param method: RobertaForSequenceClassification.from_pretrained(save_path, num_labels=2)
```
What I wanted was to load the pretrained params with the classifier layer params randomly initialized, since the finetuning task has a different num_labels than pretraining. But I got the following error:
`RuntimeError: Error(s) in loading state_dict for RobertaForSequenceClassification:
size mismatch for classifier.out_proj.weight: copying a param with shape torch.Size([3, 768]) from checkpoint, the shape in current model is torch.Size([2, 768]).
size mismatch for classifier.out_proj.bias: copying a param with shape torch.Size([3]) from checkpoint, the shape in current model is torch.Size([2]).`
For sure, I can fix this manually by first loading the model params into RobertaModel, then saving RobertaModel, and finally using the RobertaModel params to init my new RobertaForSequenceClassification, but that is somewhat complicated, and the same kind of problem might occur at any moment. Maybe fixing this in the next version of huggingface transformers is a good choice.
Best wishes
@LysandreJik
| 10-31-2021 06:01:12 | 10-31-2021 06:01:12 | We've recently added a new `ignore_mismatched_sizes` argument to the `from_pretrained` method, which can be set to `True` to ignore such runtime errors, and to be able to replace the head of an already fine-tuned model.
You can run it as follows:
```
from transformers import RobertaForSequenceClassification
model = RobertaForSequenceClassification.from_pretrained("path_to_already_finetuned_model", num_labels=3, ignore_mismatched_sizes=True)
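# For the exact scenario above (a 3-label checkpoint reloaded as a 2-label classifier),
# the mismatched classification head is re-initialized while the encoder weights load normally.
# "save_path" is a placeholder for the directory written by model.save_pretrained(save_path).
model_for_finetuning = RobertaForSequenceClassification.from_pretrained(
    "save_path", num_labels=2, ignore_mismatched_sizes=True
)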
```<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 14,217 | closed | the results of run_glue_no_trainer.py are different from those reported in the paper | ## Environment info
- `transformers` version:
- Platform: ubuntu 18
- Python version: 3.6
- PyTorch version (GPU?): 1.6
- Tensorflow version (GPU?):
- Using GPU in script?: YES
- Using distributed or parallel set-up in script?: no
### Who can help
@LysandreJik @sgugger
Models:
I fine-tuned the Bert-base model on the GLUE dataset using run_glue_no_trainer.py provided by hugging face (https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification). The results are similar to those on the website:

But the results reported in the paper are:

Why are they significantly different?
| 10-31-2021 05:55:32 | 10-31-2021 05:55:32 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 14,216 | closed | Fix generation docstring | # What does this PR do?
As mentioned in this [comment](https://github.com/huggingface/transformers/issues/14206#issuecomment-955607275), the `add_prefix_space` argument is not available in `GPT2TokenizerFast.__call__` but `GPT2Tokenizer.__call__`.
Therefore, this PR fixes this error by switching the fast tokenizer to the slow one.
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@patrickvonplaten, @LysandreJik | 10-31-2021 03:46:40 | 10-31-2021 03:46:40 | Thank you! |
transformers | 14,215 | closed | Shapes mismatch triggered at modeling_flax_utils | Good day,
while using the MiniDalle repo at:
https://github.com/borisdayma/dalle-mini/issues/99
we are suddenly getting this error which was not happening before:
"Trying to load the pretrained weight for ('decoder', 'mid', 'attn_1', 'norm', 'bias') failed: checkpoint has shape (1, 1, 1, 512) which is incompatible with the model shape (512,). Using `ignore_mismatched_sizes=True` if you really want to load this checkpoint inside this model."
This is being triggered here:
https://huggingface.co/transformers/_modules/transformers/modeling_flax_utils.html
in this area:
```python
# Mistmatched keys contains tuples key/shape1/shape2 of weights in the checkpoint that have a shape not
# matching the weights in the model.
mismatched_keys = []
for key in state.keys():
    if key in random_state and state[key].shape != random_state[key].shape:
        if ignore_mismatched_sizes:
            mismatched_keys.append((key, state[key].shape, random_state[key].shape))
            state[key] = random_state[key]
        else:
            raise ValueError(
                f"Trying to load the pretrained weight for {key} failed: checkpoint has shape "
                f"{state[key].shape} which is incompatible with the model shape {random_state[key].shape}. "
                "Using `ignore_mismatched_sizes=True` if you really want to load this checkpoint inside this "
                "model."
            )
```
There is a way to avoid halting the execution by going into the code and adding "ignore_mismatched_sizes=True" to the call. However, this does not fix the problem. If we do that, the execution continues, but the results obtained by the minidalle model are wrong: all washed out and with the wrong colors and contrast (which was not happening some days ago, so something has changed that is producing this problem).
So this seems to be a bug coming from this file. Any tips are super welcome, thank you :)
| 10-30-2021 23:14:52 | 10-30-2021 23:14:52 | cc @patil-suraj <|||||>It seems to work with `flax==0.3.5`.
My guess is that weights are now being squeezed. Maybe need to reupload a new checkpoint?
Actually here a shape of (512,) seems to make more sense than (1,1,1,512)<|||||>Do we know which commit in Flax is responsible for this bug?<|||||>> It seems to work with `flax==0.3.5`. My guess is that weights are now being squeezed. Maybe need to reupload a new checkpoint? Actually here a shape of (512,) seems to make more sense than (1,1,1,512)
Could you please elaborate?
Where shall I add flax==0.3.5?
Thanks!<|||||>Left a comment here, https://github.com/borisdayma/dalle-mini/issues/99#issuecomment-963103973
Closing this issue, since it's not related to `transformers`. |
transformers | 14,214 | closed | Can't load tokenizer for 'facebook/hubert-base-ls960' | ## Environment info
- `transformers` version: 4.12.2
- Platform: Mac
- Python version: 3.7
- PyTorch version (GPU?): 1.9
- Tensorflow version (GPU?): No
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
I just ran simple code to load the Hubert pretrained base model:
```
from transformers import Wav2Vec2Processor, HubertForCTC
import torch
import librosa
PROCESSOR = Wav2Vec2Processor.from_pretrained('facebook/hubert-base-ls960')
model = HubertForCTC.from_pretrained('facebook/hubert-base-ls960')
```
And I got this error trace:
```
Traceback (most recent call last):
File "/Users/
PROCESSOR = Wav2Vec2Processor.from_pretrained('facebook/hubert-base-ls960')
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/transformers/models/wav2vec2/processing_wav2vec2.py", line 105, in from_pretrained
tokenizer = Wav2Vec2CTCTokenizer.from_pretrained(pretrained_model_name_or_path, **kwargs)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 1733, in from_pretrained
raise EnvironmentError(msg)
OSError: Can't load tokenizer for 'facebook/hubert-base-ls960'. Make sure that:
- 'facebook/hubert-base-ls960' is a correct model identifier listed on 'https://huggingface.co/models'
(make sure 'facebook/hubert-base-ls960' is not a path to a local directory with something else, in that case)
- or 'facebook/hubert-base-ls960' is the correct path to a directory containing relevant tokenizer files
``` | 10-30-2021 20:33:42 | 10-30-2021 20:33:42 | Hey @harrypotter90,
Note that `facebook/hubert-base-ls960` is just the pretrained model and therefore does not have a tokenizer yet. You can create one yourself as shown in this blog post: https://huggingface.co/blog/fine-tune-wav2vec2-english
<|||||>Thanks for the clarification @patrickvonplaten. The documentation is a bit misleading, since it has some references to loading a processor for `facebook/hubert-base-ls960`. (This also made me second-guess whether the model was fine-tuned at all.) Example: https://huggingface.co/transformers/model_doc/hubert.html#transformers.HubertForCTC
Can you update the documentation to use a different model that supports the tokenizer, such as `facebook/hubert-large-ls960-ft`?<|||||>Hey @kroq-gar78,
Ah yeah, that's great feedback! I'll change that asap. The documentation should definitely use a model that has a tokenizer!
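For anyone landing here in the meantime, a minimal sketch with a checkpoint that does ship tokenizer files (the fine-tuned model mentioned above):

```python
from transformers import Wav2Vec2Processor, HubertForCTC

processor = Wav2Vec2Processor.from_pretrained("facebook/hubert-large-ls960-ft")
model = HubertForCTC.from_pretrained("facebook/hubert-large-ls960-ft")
```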
Also I've updated the model card: https://huggingface.co/facebook/hubert-base-ls960 leaving a big note to make it clearer.
Thanks!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>I'm running through the same issue as above. I followed through this [blog-post](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) but instead of saving the model to Hugging Face hub, I saved in my local directory. There is a checkpoint folder there with the preprocessor while the main directory has turkish-tokenizer_config.json file. However, when I load the model for evaluation, its still giving the tokenizer not found error. I would appreciate any help @patrickvonplaten. Thanks.

<|||||>Hey @dmatekenya,
Could you post a reproducible code snippet ? I cannot reproduce the error just from a screenshot sadly<|||||>@patrickvonplaten, greetings!
I followed the `Emotion recognition in Greek speech using Wav2Vec2.ipynb` which is in turn somehow based on your notebook. [here](https://colab.research.google.com/github/m3hrdadfi/soxan/blob/main/notebooks/Emotion_recognition_in_Greek_speech_using_Wav2Vec2.ipynb#scrollTo=AZjDSmBRGqr6)
After finishing training on my own data, I am getting the following error when trying to load the processor with
```
processor = Wav2Vec2Processor.from_pretrained(model_name_or_path)
```
The error:
```
OSError: Can't load tokenizer for '[/path/to/model/]checkpoint-860/'. If you were trying to load it from 'https://huggingface.co/models',
Otherwise, make sure '[/path/to/model/]checkpoint-860/' is the correct path to a directory containing all relevant files for a Wav2Vec2CTCTokenizer tokenizer.
```
Checking the `checkpoint` folder, there is no tokenizer file in there, am I missing something? This is the content of the mentioned folder:

PD: the model loads correctly with `model = Wav2Vec2ForSpeechClassification.from_pretrained(model_name)`
<|||||>Hey @jvel07,
Note that emotion recognition models don't have a tokenizer, so you should load it with:
```python
from transformers import Wav2Vec2FeatureExtractor
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained(model_name_or_path)
``` |
transformers | 14,213 | closed | fx: don't copy by reference | Here:
https://github.com/huggingface/transformers/blob/9fc1951711e5377ffa1f06614ca37d4d5ad281a8/src/transformers/utils/fx.py#L307
there's a copy by reference which is problematic because you append values to `shape`.
You should make it a copy by value, e.g. by adding the following line right after:
`shape = shape[:]`
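For illustration, a minimal sketch of why the aliasing matters (plain Python, not the actual fx.py code):

```python
original = [2, 3]
shape = original        # copy by reference: both names point at the same list
shape.append(4)
print(original)         # [2, 3, 4] -> the caller's list was mutated

shape = original[:]     # shallow copy: later appends no longer leak back
shape.append(5)
print(original)         # still [2, 3, 4]
```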
Cheers, Guy :) | 10-30-2021 12:45:46 | 10-30-2021 12:45:46 | cc @michaelbenayoun <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 14,212 | closed | Update Seq2Seq QA example script to use SQuAD metric. | # What does this PR do?
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. Discussed in [this thread](https://github.com/huggingface/transformers/pull/13432#discussion_r735561720)
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger @NielsRogge
| 10-29-2021 17:49:40 | 10-29-2021 17:49:40 | In the extractive method, the logits are used to score the outputs but for generative there isn't any value we can define to rank it because the sentence is already generated. So I suppose we can just use one feature per example.<|||||>It looks like GitHub does not like your rebase and is showing a diff with ~200 files touched. Could you close this PR and create a fresh one?<|||||>Sorry for the late reply.
I didn't notice the error while pushing the code.
I have created the new PR here at #14335 .
I will close this one now. |
transformers | 14,211 | closed | Add a condition for checking labels | # What does this PR do?
This PR adds stability by checking whether the inputs have labels before returning them in `prediction_step` of `Seq2SeqTrainer`.
There is a similar condition in [`prediction_step` of `Trainer`](https://github.com/huggingface/transformers/blob/master/src/transformers/trainer.py#L2460-L2465)
```python
if has_labels:
labels = nested_detach(tuple(inputs.get(name) for name in self.label_names))
if len(labels) == 1:
labels = labels[0]
else:
labels = None
```
## Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
## Who can review?
For `trainer` : @sgugger
| 10-29-2021 16:10:39 | 10-29-2021 16:10:39 | |
transformers | 14,210 | closed | Add `inference_mode` back to `image_segmentation` | ## Environment info
This hotfix: https://github.com/huggingface/transformers/pull/14204 could be superseded by a superior fix that would keep `inference_mode` by hiding the tensor inplace modifications behind `if self.training`.
### Who can help
@stas00 @patrickvonplaten
| 10-29-2021 15:40:57 | 10-29-2021 15:40:57 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Was fixed by https://github.com/huggingface/transformers/pull/14260 |
transformers | 14,209 | closed | Fix pipeline tests env and fetch | # What does this PR do?
This PR fixes two things:
- changing `pipelines/base.py` should run all pipeline tests in the test fetcher
- the pipelines test do not install the `timm` extra dependency, and thus skip the image segmentation tests. | 10-29-2021 13:06:50 | 10-29-2021 13:06:50 | |
transformers | 14,208 | closed | [QuestionGeneration] RuntimeError: Integer division of tensors using div or / is no longer supported | ## Environment info
- `transformers` version: 4.12.0
- Platform: Linux-5.4.0-88-generic-x86_64-with-debian-buster-sid
- Python version: 3.7.10
- PyTorch version (GPU?): 1.6.0 (True)
- Tensorflow version (GPU?): 2.6.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
Models:
- T5ForConditionalGeneration : @patrickvonplaten, @patil-suraj
## Information
The model I am using is T5ForConditionalGeneration, through Questgen.ai.
The problem arises when using:
* [x] my own modified scripts: (give details below)
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
def predict(sentence):
tokenizer = AutoTokenizer.from_pretrained("flexudy/t5-base-multi-sentence-doctor")
model = AutoModelWithLMHead.from_pretrained("flexudy/t5-base-multi-sentence-doctor")
input_text = f"repair_sentence: {sentence}</s>"
input_ids = tokenizer.encode(input_text, return_tensors="pt")
outputs = model.generate(input_ids, max_length=32, num_beams=1)
sentence = tokenizer.decode(outputs[0], skip_special_tokens=True, clean_up_tokenization_spaces=True)
return sentence
```
The tasks I am working on is:
* [x] my own task :
```python
File "apis/text/text/boolean-question-generations/questgen/questgen.py", line 12, in predict
output = qe.predict_boolq(payload)
File "/opt/conda/lib/python3.7/site-packages/Questgen/main.py", line 238, in predict_boolq
output = beam_search_decoding (input_ids, attention_masks,self.model,self.tokenizer)
File "/opt/conda/lib/python3.7/site-packages/Questgen/encoding/encoding.py", line 18, in beam_search_decoding
early_stopping=True
File "/opt/conda/lib/python3.7/site-packages/torch/autograd/grad_mode.py", line 15, in decorate_context
return func(*args, **kwargs)
File "/opt/conda/lib/python3.7/site-packages/transformers/generation_utils.py", line 1064, in generate
**model_kwargs,
File "/opt/conda/lib/python3.7/site-packages/transformers/generation_utils.py", line 1839, in beam_search
next_indices = (next_tokens / vocab_size).long()
RuntimeError: Integer division of tensors using div or / is no longer supported, and in a future release div will perform true division as in Python 3. Use true_divide or floor_divide (// in Python) instead.
```
## To reproduce
Steps to reproduce the behavior:
1.
```
pip install git+https://github.com/ramsrigouthamg/Questgen.ai
pip install git+https://github.com/boudinfl/pke.git
python -m nltk.downloader universal_tagset
python -m spacy download en
```
2.
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
def predict(sentence):
tokenizer = AutoTokenizer.from_pretrained("flexudy/t5-base-multi-sentence-doctor")
model = AutoModelWithLMHead.from_pretrained("flexudy/t5-base-multi-sentence-doctor")
input_text = f"repair_sentence: {sentence}</s>"
input_ids = tokenizer.encode(input_text, return_tensors="pt")
outputs = model.generate(input_ids, max_length=32, num_beams=1)
sentence = tokenizer.decode(outputs[0], skip_special_tokens=True, clean_up_tokenization_spaces=True)
return sentence
```
## Expected behavior
No error
## Fix found
https://github.com/huggingface/transformers/blob/master/src/transformers/generation_utils.py#L1839
use // instead of /
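A minimal sketch of a version-guarded fix (illustrative only, not the actual patch that was merged):

```python
from packaging import version
import torch

next_tokens = torch.tensor([5, 123, 789])  # dummy values for illustration
vocab_size = 50

if version.parse(torch.__version__) >= version.parse("1.8.0"):
    next_indices = torch.div(next_tokens, vocab_size, rounding_mode="floor")
else:
    next_indices = next_tokens // vocab_size  # old behaviour, still fine on torch < 1.8

print(next_indices)  # tensor([ 0,  2, 15])
```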
| 10-29-2021 12:58:20 | 10-29-2021 12:58:20 | oups, I guess we are all colliding here
here is the related change from 3 months ago that removed the // operator because of the torch warning.
https://github.com/huggingface/transformers/pull/13013
my 2 cents is that implicit operators are kind of broken
I feel like https://github.com/StevenTang1998 push toward explicit operation https://github.com/huggingface/transformers/pull/13013#issuecomment-894198148 but not supported for torch < 1.8<|||||>@sgugger
@nreimers
@patrickvonplaten
any thoughts on that matter?<|||||>I think we will need to do a version check and use @StevenTang1998's solution for recent versions of PyTorch and leave the old code for older versions.
Do you want to take a stab at a PR?<|||||>I could try to write it if you want.<|||||>I think I just asked you to ;-)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>I am still getting the error in generation_utils.py. The issue is not yet resolved in Transformers 4.15.0
<|||||>@JeetRoy97,
Could you also post your environment info?
Just run `transformers-cli env` and copy-paste the output here. Especially your Python and PyTorch versions would be important to know.
torch: 1.6.0
transformers: 4.15.0
onnxruntime: 1.10.0
python: 3.6
GPU
tokenizer =M2M100Tokenizer.from_pretrained( path )
tokenizer.src_lang ='xx'
example =['xxxx']
encoded = tokenizer(example, return_tensors='pt')
generated_tokens = model.generate(**encoded, forced_bos_token_id=tokenizer.get_lang_id('en'))
Error occurred at the `model.generate `step.
error message was directed to "transformers/generation_utils.py"
line 1955: next_indices = (next_tokens / vocab_size).long()
I am not entirely sure if you can reproduce because I used the onnx version of the M2M model.
This one looks like a similar problem: https://discuss.pytorch.org/t/runtimeerror-integer-division-of-tensors-using-div-or-is-no-longer-supported-and-in-a-future-release-div-will-perform-true-division-as-in-python-3-use-true-divide-or-floor-divide-in-python-instead/99427 and it was solved by changing the input type `image = image.float()` , but in the case of NLP transformer, the inputs are ids. I tried to use encoded = encoded.to(torch.long), no luck.
Any advice would be highly appreciated! <|||||>Same error when running the fine-tuning script, `run_summarization.py` with `--predict_with_generate`. The stack trace:
```
Traceback (most recent call last):
File "run_summarization.py", line 698, in <module>
main()
File "run_summarization.py", line 650, in main
predict_results = trainer.predict(
File "/usr/local/lib/python3.8/dist-packages/transformers/trainer_seq2seq.py", line 119, in predict
return super().predict(test_dataset, ignore_keys=ignore_keys, metric_key_prefix=metric_key_prefix)
File "/usr/local/lib/python3.8/dist-packages/transformers/trainer.py", line 2319, in predict
output = eval_loop(
File "/usr/local/lib/python3.8/dist-packages/transformers/trainer.py", line 2419, in evaluation_loop
loss, logits, labels = self.prediction_step(model, inputs, prediction_loss_only, ignore_keys=ignore_keys)
File "/usr/local/lib/python3.8/dist-packages/transformers/trainer_seq2seq.py", line 172, in prediction_step
generated_tokens = self.model.generate(
File "/usr/local/lib/python3.8/dist-packages/torch/autograd/grad_mode.py", line 15, in decorate_context
return func(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/transformers/generation_utils.py", line 1239, in generate
return self.beam_search(
File "/usr/local/lib/python3.8/dist-packages/transformers/generation_utils.py", line 2027, in beam_search
next_indices = (next_tokens / vocab_size).long()
RuntimeError: Integer division of tensors using div or / is no longer supported, and in a future release div will perform true division as in Python 3. Use true_divide or floor_divide (// in Python) instead.
```
Environment:
```
- `transformers` version: 4.17.0.dev0
- Platform: Linux-5.4.0-90-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyTorch version (GPU?): 1.6.0 (False)
- Tensorflow version (GPU?): 2.3.1 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: NO
- Using distributed or parallel set-up in script?: NO
```
<|||||>Hey @mwojnars,
Thanks for reporting the error here. I'm attaching a PR that should fix it.<|||||>@patrickvonplaten Thanks. It seems to be working fine now. <|||||>The error seems to persist when running the RAG-Token model: https://huggingface.co/facebook/rag-token-nq. Running the exact code from the aforementioned link with a fresh install of the `transformers` library yields the following error:
```
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
/tmp/ipykernel_35233/3517742745.py in <module>
1 input_dict = tokenizer.prepare_seq2seq_batch("who holds the record in 100m freestyle", return_tensors="pt")
2
----> 3 generated = model.generate(input_ids=input_dict["input_ids"])
4 print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
5
~/anaconda3/lib/python3.8/site-packages/torch/autograd/grad_mode.py in decorate_context(*args, **kwargs)
13 def decorate_context(*args, **kwargs):
14 with self:
---> 15 return func(*args, **kwargs)
16 return decorate_context
17
~/anaconda3/lib/python3.8/site-packages/transformers/models/rag/modeling_rag.py in generate(self, input_ids, attention_mask, context_input_ids, context_attention_mask, doc_scores, max_length, min_length, early_stopping, use_cache, num_beams, num_beam_groups, diversity_penalty, bos_token_id, pad_token_id, eos_token_id, length_penalty, no_repeat_ngram_size, encoder_no_repeat_ngram_size, repetition_penalty, bad_words_ids, num_return_sequences, decoder_start_token_id, n_docs, prefix_allowed_tokens_fn, forced_bos_token_id, forced_eos_token_id, remove_invalid_values, **model_kwargs)
1614 num_beam_hyps_to_keep=num_return_sequences,
1615 )
-> 1616 return self.beam_search(
1617 input_ids,
1618 beam_scorer,
~/anaconda3/lib/python3.8/site-packages/transformers/generation_utils.py in beam_search(self, input_ids, beam_scorer, logits_processor, stopping_criteria, max_length, pad_token_id, eos_token_id, output_attentions, output_hidden_states, output_scores, return_dict_in_generate, synced_gpus, **model_kwargs)
1853 )
1854
-> 1855 next_indices = (next_tokens / vocab_size).long()
1856 next_tokens = next_tokens % vocab_size
1857
RuntimeError: Integer division of tensors using div or / is no longer supported, and in a future release div will perform true division as in Python 3. Use true_divide or floor_divide (// in Python) instead.
```
and the following warnings:
```
/home/myname/anaconda3/lib/python3.8/site-packages/transformers/models/rag/tokenization_rag.py:92: FutureWarning: `prepare_seq2seq_batch` is deprecated and will be removed in version 5 of π€ Transformers. Use the regular `__call__` method to prepare your inputs and the tokenizer under the `with_target_tokenizer` context manager to prepare your targets. See the documentation of your specific tokenizer for more details
warnings.warn(
/home/myname/anaconda3/lib/python3.8/site-packages/transformers/generation_utils.py:1747: UserWarning: `max_length` is deprecated in this function, use `stopping_criteria=StoppingCriteriaList(MaxLengthCriteria(max_length=max_length))` instead.
warnings.warn(
```
I cannot rule out that this is an error specific to my installation. But as I've just installed the `transformers` library, I suspect that this might be a more general issue. |
transformers | 14,207 | closed | [GPT2] Add lm_attention_mask as a optional argument | # What does this PR do?
We provide a new optional argument in order to pass down a mask that should be applied to the attention layers. The naming conflicts with `attention_mask`, which is used to determine which tokens are pad values (see https://huggingface.co/transformers/v3.1.0/glossary.html#attention-mask).
Context: we want to be able to obtain a prefix-LM-like behaviour: a full attention mask on the prefix, and autoregressive masking on the suffix.
Some changes that were unrelated to the feature introduction:
- We've also factorised code from `prepare_inputs_for_generation` into GPT2PretrainedModel as this was duplicated.
What this PR doesn't do:
- Essentially, `attention_mask` can be folded into `lm_attention_mask` (i.e. you can put 0's for all pad values and such). However, we won't remove `attention_mask`, for backward compatibility.
TODO:
- add tests
Example code:
```python
model = GPT2LMHeadModel.from_pretrained("<MODEL_NAME>")
model.generate(random_inputs) # autoregressive
lm_attention_mask = torch.ones(1, 1 , random_inputs.shape[1], random_inputs.shape[1], dtype=torch.bool, device=random_inputs.device)
model.generate(random_inputs, lm_attention_mask=lm_attention_mask) # prefixlm
lm_attention_mask = torch.triu(torch.ones(1, 1 , random_inputs.shape[1], random_inputs.shape[1], dtype=torch.bool, device=random_inputs.device))
model.generate(random_inputs, lm_attention_mask=lm_attention_mask) # fancy prefixlm
```
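For a true prefix-LM mask (full attention over the first `prefix_len` tokens, causal afterwards), one way to build it, assuming the mask marks allowed key positions with `True` (a hedged sketch, not part of the PR):

```python
import torch

seq_len, prefix_len = 6, 3
causal = torch.tril(torch.ones(seq_len, seq_len, dtype=torch.bool))
prefix = torch.zeros(seq_len, seq_len, dtype=torch.bool)
prefix[:, :prefix_len] = True  # every position may attend to the prefix tokens
lm_attention_mask = (causal | prefix)[None, None]  # shape (1, 1, seq_len, seq_len)
```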
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
cc @patrickvonplaten @LysandreJik @sgugger
| 10-29-2021 12:43:42 | 10-29-2021 12:43:42 | Thanks for the comments! I don't like `causal_mask` as the whole point is to pass a non causal mask, nor `language_model_mask` because all models here are language models, and doesn't says much about this variable ...
Some suggestions:
- attention_layer_mask
- key_query_mask
- others ?<|||||>> because all models here are language models
I disagree. Sequence classification, token classification, question answering models are not language models. I understand this new argument will only be added to causal LMs and some seq2seq LMs maybe? So only language models compared to the other models in the library.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Awaiting some comments from @patrickvonplaten .<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>un-stale<|||||>This week is the week haha - sorry to be so incredibly slow with this<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 14,206 | closed | Does the 'bad_words_ids' argument in the "generate function" works? | ## Environment info
- `transformers` version: 4.12.0
- Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.12
- PyTorch version (GPU?): 1.9.0+cu111 (False)
- Tensorflow version (GPU?): 2.6.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@patrickvonplaten
## Information
I attempted to evaluate whether the `bad_words_ids` argument available in the `generate()` function works or not. However, based on the steps described in the section below, it doesn't work.
## To reproduce
Below are the steps I used to evaluate it:
1. Run the script without `bad_words_ids` being specified, using `set_seed` to get deterministic output.
```
from transformers import AutoTokenizer, AutoModelForCausalLM, AutoModelForSeq2SeqLM, set_seed
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
set_seed(0)
input_context = "My cute dog"
input_ids = tokenizer(input_context, return_tensors="pt").input_ids
outputs = model.generate(input_ids=input_ids, max_length=20, do_sample=True)
print("Generated:", tokenizer.decode(outputs[0], skip_special_tokens=True))
```
Output:
``` Generated: My cute dog, when it died, had taken my entire life to save the life that had been ```
2. Re-run the script, but with `bad_words_ids` specified. I selected the words **"entire"** and **"save"**, taken from the previously generated sequence. However, both words still appear in the output sequence, which is no different from the previous one. Below is the script with the following output.
```
from transformers import AutoTokenizer, AutoModelForCausalLM, AutoModelForSeq2SeqLM, set_seed
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
set_seed(0)
input_context = "My cute dog"
# get tokens of words that should not be generated
bad_words_ids = [tokenizer(bad_word).input_ids for bad_word in ["entire", "save"]]
# encode input context
input_ids = tokenizer(input_context, return_tensors="pt").input_ids
# generate sequences without allowing bad_words to be generated
outputs = model.generate(input_ids=input_ids, max_length=20, do_sample=True, bad_words_ids=bad_words_ids)
print("Generated:", tokenizer.decode(outputs[0], skip_special_tokens=True))
```
Output:
``` Generated: My cute dog, when it died, had taken my entire life to save the life that had been ```
## To reproduce in Google Colab:
https://colab.research.google.com/drive/1P4ruLhFstbal1qqXbjuv-kM7yMYY-S1E?usp=sharing
## Expected behavior
I expect the word **"entire"** and **"save"** to not be included in the output sequence after I run step (2) in above section.
<!-- A clear and concise description of what you would expect to happen. -->
| 10-29-2021 11:02:13 | 10-29-2021 11:02:13 | Hey @alvinwatner,
To prevent bad words from occurring in the middle of generated texts, you'll need to add a prefix space to every bad word so that the tokenized bad words, e.g. `save`, will be `['Ġsave']` instead of `['save']`, which matches GPT-2's outputs.
This can be done by setting `add_prefix_space=True` in the kwargs of `from_pretrained`.
```
model = AutoModelForCausalLM.from_pretrained("gpt2", return_dict_in_generate=True)
tokenizer = AutoTokenizer.from_pretrained("gpt2", add_prefix_space=True)
set_seed(0)
input_context = "My cute dog"
# get tokens of words that should not be generated
bad_words_ids = tokenizer(["entire", "save"]).input_ids
# encode input context
input_ids = tokenizer(input_context, return_tensors="pt").input_ids
# generate sequences without allowing bad_words to be generated
outputs = model.generate(input_ids=input_ids, max_length=20, do_sample=True, bad_words_ids=bad_words_ids)
print("Generated:", tokenizer.decode(outputs["sequences"][0], skip_special_tokens=True))
```
Output:
```
Generated: My cute dog, when it died, had taken my hand out of my pants and said "I
```<|||||>Thank you @qqaatw for pointing that out. Just to inform that this [example script](https://github.com/huggingface/transformers/blob/9fc1951711e5377ffa1f06614ca37d4d5ad281a8/src/transformers/generation_utils.py#L856) doesn't work and outdated. <|||||>Hi @qqaatw
Thanks in advance: I am trying to do something very similar but with T5 (either `t5-base` or `t5-large`) as the model instead of GPT2. My "bad words" are simply being ignored so it's a very similar problem. Can you advise? Am I missing some configuration that would be relevant for T5?
I am running code similar to the above but using `T5ForConditionalGeneration` with no luck. Any help appreciated!<|||||>Hi @giladpn and @qqaatw. I found a thing with this bad_words functionality and I'm not sure if this is normal behaviour or not.
> For a word that is tokenized into multiple tokens, the generate function will only replace the final token, while the earlier tokens still remain in the output sequence.
For example, the word " tester", with a prefix space, is tokenized into ["Ġt", "ester"] with ids [256, 7834]; the output sequence will keep the earlier token ("256") and only replace the final token ("7834"). As another instance, the word " traceroute" with a prefix space is tokenized into 'Ġtr', 'acer', 'oute' with ids [491, 11736, 13192]; the output sequence will keep the earlier tokens ("491, 11736") and only replace the final token ("13192"). <|||||>Hi @giladpn,
Can you provide a minimal but reproducible code so that I can see where the problem is?
Thanks.<|||||>Edited: Indeed, if a word is tokenized into multiple tokens, the first token will still be present in the generated sequence. I'll take some time to deal with it.
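(For anyone wanting to check whether a given bad word splits into multiple tokens, a minimal sketch:)

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2", add_prefix_space=True)
for word in ["tester", "traceroute"]:
    ids = tokenizer([word], add_special_tokens=False).input_ids[0]
    print(word, tokenizer.convert_ids_to_tokens(ids), ids)
# Words that split into several sub-tokens are the ones whose leading pieces
# can still show up in the generated text, as described above.
```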
~@alvinwatner, what's the input text that you supply to the model?~<|||||>Hi @qqaatw
I am trying to use T5 instead of GPT-2 in your example. Here is the code I am using, which is copy-pasted from your code example [above ](https://github.com/huggingface/transformers/issues/14206#issuecomment-955190231) with a few minimal changes:
+ changed `gpt2` to `t5-base`
+ changed `AutoModelForCausalLM` to `T5ForConditionalGeneration`
The code now generates a sentence successfully but ignores the "bad word" I put in ("dude"). The generated sentence is:
"My cute cat is the sweetest little dude in the world. My cute dog is"
Here is the code, what am I doing wrong? Thank you!
```
from transformers import AutoTokenizer, AutoModelForCausalLM, T5ForConditionalGeneration, set_seed
model = T5ForConditionalGeneration.from_pretrained("t5-base", return_dict_in_generate=True)
tokenizer = AutoTokenizer.from_pretrained("t5-base", add_prefix_space=True)
set_seed(0)
input_context = "My cute dog"
# get tokens of words that should not be generated
bad_words_ids = tokenizer(["dude"]).input_ids
# encode input context
input_ids = tokenizer(input_context, return_tensors="pt").input_ids
# generate sequences without allowing bad_words to be generated
outputs = model.generate(input_ids=input_ids, max_length=20, do_sample=True, bad_words_ids=bad_words_ids)
print("Generated:", tokenizer.decode(outputs["sequences"][0], skip_special_tokens=True))
```
<|||||>@giladpn, thanks for providing the code. Can you add `add_special_tokens=False` to `tokenizer.__call__()` and see if the problem is solved? Like so:
```
bad_words_ids = tokenizer(["dude"], add_special_tokens=False).input_ids
```<|||||>@qqaatw Yes! It works now. Many thanks! Much appreciated.<|||||>> Edited: Indeed, if a word is tokenized into multiple tokens, the first token will still present on the generated sequence. I'll take some time to deal with it.
>
> ~@alvinwatner, what's the input text that you supply to the model?~
Hi, sorry for the late reply. I have been busy working on my paper lately. I eventually created my own script; it is not well optimized, but it seems able to deal with those issues.
- Here, generation_banned_words.py if you want to take a look [link](https://github.com/alvinwatner/transformers_banned_words/blob/master/src/transformers/generation_banned_words.py).
- Unfortunately, I only managed to attach it to greedy_search due to time constraint. Here is how it looks like [link](https://github.com/alvinwatner/transformers_banned_words/blob/27e3653e6f199328df3eccd6111971d7e0fd53d9/src/transformers/generation_utils.py#L1342). Also, since my script only requires 'input_ids' and 'next_tokens' (that exist in every sampling method) and the 'sorted_next_token_indices (that is just the topk from the next_tokens_scores), I assume that it should not be too difficult to embed [this](url) to other sampling methods. Why we need 'sorted_next_token_indices'? I could explain further, but in short, at every timestep, if the chosen token (argmax initially) satisfied the banned_words ids, it will be replaced by other token that has the next highest probs after the chosen token (for e.g., sorted_next_token_indices = [5, 9, ..., vocab_size], banned_words_ids = [5]. Then, we chose the next highest after 5, which is 9).
- Here is a glimpse of usage I made in colab [link](https://colab.research.google.com/drive/1oXRwTdF-DWg9qZpxXN8LwCWgmil8ZrTE?usp=sharing)
PS: sorry for the spaghetti code :( <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>It seems like this function did not work for Chinese BART, but Chinese BART uses a BERT tokenizer, not a BART tokenizer; I don't know if this affects it. Does anyone know how to make it work with Chinese BART? Thank you<|||||>Interesting, I just figured it out. For Chinese BART, you only need a single token id to make it work, because there are no suffixes in Chinese characters. If you use the tokenizer to get the bad word ids, it will return something like [[101, 704, 102]], but 101 and 102 represent [CLS] and [SEP]; you only need the id 704.
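(A minimal sketch of the same trick for a BERT-style tokenizer: dropping the automatically added [CLS]/[SEP] ids with `add_special_tokens=False`; the word below is an arbitrary example.)

```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")
bad_words_ids = tokenizer(["苹果"], add_special_tokens=False).input_ids
print(bad_words_ids)  # no [CLS]/[SEP] ids are included
```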
transformers | 14,205 | closed | Add option to not load pretrained weights in `AutoModel.from_pretrained()` | # π Feature request
Add option to not load pretrained weights in `AutoModel.from_pretrained()`
analogous to `pretrained=` in `timm.create_model(model_name, pretrained=True/False)`
## Motivation
Code like this could be simplified (a rough sketch of the pattern is shown below the link):
https://github.com/huggingface/transformers/blob/4469010c1be3a94f74d6448497051468f617baf2/examples/pytorch/question-answering/run_qa_no_trainer.py#L357-L365
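A paraphrased sketch of the pattern in those lines (not the exact script code; the example value is assumed):

```python
from transformers import AutoConfig, AutoModelForQuestionAnswering

model_name_or_path = "bert-base-uncased"  # assumed example value
config = AutoConfig.from_pretrained(model_name_or_path)

if model_name_or_path:
    model = AutoModelForQuestionAnswering.from_pretrained(model_name_or_path, config=config)
else:
    model = AutoModelForQuestionAnswering.from_config(config)
```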
## Your contribution
I can help to add this, if people think this is useful.
What should the argument name be? `load_weights: bool = True`? | 10-29-2021 09:16:37 | 10-29-2021 09:16:37 | It's just `AutoModel.from_config(AutoConfig.from_pretrained("model_name))` no?<|||||>Yes. But it requires if-else, and I don't like that. π<|||||>I personally think the API with `from_pretrained` and `from_config` is nice and quite flexible. Adding a second way of doing the same thing looks like it would complicate things :)<|||||>That's a good point. Thank you. :) |
transformers | 14,204 | closed | Fixing image segmentation with inference mode. | # What does this PR do?
It seems `detr` models are modifying tensors in place, which is not allowed by the `inference_mode` context manager, effectively breaking `image-segmentation` for `torch > 1.9` users.
This PR proposes to override the context manager **only** in `image-segmentation` to `no_grad`, so the pipeline works again, without checking the underlying model. A method is used to do the switch.
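Roughly what the switch could look like (an illustrative sketch with an assumed method name, not the actual diff):

```python
import torch

class ImageSegmentationPipelineSketch:
    def get_inference_context(self):
        # detr models mutate tensors in place, which torch.inference_mode() rejects,
        # so this pipeline falls back to torch.no_grad() for its forward pass.
        return torch.no_grad

inference_context = ImageSegmentationPipelineSketch().get_inference_context()
with inference_context():
    pass  # the model forward pass would run here
```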
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@mishig25 @LysandreJik
| 10-29-2021 08:38:57 | 10-29-2021 08:38:57 | Fine with this solution, but a better one might actually be to disable all those:
```python
if torch.isinf(hidden_states.clone()).any() or torch.isnan(hidden_states.clone()).any():
clamp_value = torch.finfo(hidden_states.dtype).max - 1000
hidden_states = torch.clamp(hidden_states, min=-clamp_value, max=clamp_value)
```
for inference to make inference mode possible. Those "value" clamping checks are IMO only really useful to allow fp16 training (not so much for fp16 inference), and thus we could disable them with an `if self.training` statement.
@patil-suraj and I have somewhat unsuccessfully added lots of those statements to T5 in the hope of enabling fp16 training, which turned out to not work very well... with bfloat16 becoming more available in PyTorch, those statements might not serve a good purpose anymore anyway.
=> So I would be fine with hiding them behind a `self.training`.
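A minimal sketch of what I mean, adapting the snippet above (assuming `torch` and `hidden_states` as in that code):
```python
# only run the fp16 clamping check during training, so inference_mode stays usable
if self.training and hidden_states.dtype == torch.float16:
    if torch.isinf(hidden_states).any() or torch.isnan(hidden_states).any():
        clamp_value = torch.finfo(hidden_states.dtype).max - 1000
        hidden_states = torch.clamp(hidden_states, min=-clamp_value, max=clamp_value)
```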
Pinging @stas00 here as well to hear his opinion :-)<|||||>Agreed with @patrickvonplaten, it would be better if we could keep `inference_mode` and go with the other fix.<|||||>This fix is fine for a quick patch (patch release is going to be made in the next couple of hours), and we can make a better one afterward. So I would merge this now :-) |
transformers | 14,203 | closed | Intel OpenVINO backend (inference only) | # What does this PR do?
* This PR adds an optional [OpenVINO](https://github.com/openvinotoolkit/openvino) backend which allows running deep learning inference (not training) on Intel hardware (CPUs, GPUs, VPUs and others).
* OpenVINO can load models in ONNX or OpenVINO IR formats. Using one of the following classes, conversion is done at runtime (with the `from_pt` or `from_tf` flags; see the tests for details).
```
OVAutoModel
OVAutoModelForMaskedLM
OVAutoModelForQuestionAnswering
OVAutoModelWithLMHead
OVAutoModelForSequenceClassification
```
* Users might upload the IR format directly to the hub. For example, https://huggingface.co/dkurt/bert-large-uncased-whole-word-masking-squad-int8-0001
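* A rough usage sketch (class names are the ones listed above; the exact import path and the checkpoint are only illustrative, see the tests for the real usage):
```python
# hypothetical usage of the proposed backend: convert a PyTorch checkpoint at runtime
from transformers import OVAutoModelForQuestionAnswering

model = OVAutoModelForQuestionAnswering.from_pretrained(
    "distilbert-base-cased-distilled-squad", from_pt=True
)
```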
forum: https://discuss.huggingface.co/t/intel-openvino-backend/11178
resolves https://github.com/huggingface/transformers/issues/13987
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 10-29-2021 06:54:07 | 10-29-2021 06:54:07 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Hello @dkurt, as you have seen with @mfuntowicz and @echarlaix, and as you have opened a PR in optimum already, I'll go ahead and close this. It will still be visible and the code will still be accessible, in case reopening it in the future makes sense.
Thank you for your contribution! |
transformers | 14,202 | closed | Fix the write problem in trainer.py comment | # What does this PR do?
Fixes a typo and a writing issue in the comments of trainer.py.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 10-29-2021 06:13:25 | 10-29-2021 06:13:25 | |
transformers | 14,201 | closed | Couldn't reproduce DistilBERT downstream tasks performance on SQuAD dataset. | Hi,
I tried the DistilBERT fine-tuning (script path: transformers/examples/tensorflow/question-answering) for the QuestionAnswering downstream task without changing any code, and this is my command:
`
python3 run_qa.py --model_name_or_path distilbert-base-uncased --output_dir output --dataset_name squad --do_train --do_eval
`
And finally, I got the following performance: SQuAD(EM/F1): 72.441/81.188
But the performance reported in the paper is: SQuAD(EM/F1): 79.1/86.9
Is there anything wrong with my command?
transformers version: 4.11.3 | 10-29-2021 01:45:10 | 10-29-2021 01:45:10 | The `run_qa.py` script is very general, to be used by any `xxxForQuestionAnswering` model. To reproduce the results of a specific paper, you may need to tweak the hyperparameters, such as number of epochs, learning rate, batch size.<|||||>I see, thanks! @NielsRogge
May I get the recommended hyperparameters to reproduce the performance of SQuAD(EM/F1): 79.1/86.9?
```
python run_squad.py \
--model_type distilbert \
--model_name_or_path distilbert-base-uncased \
--do_train \
--do_eval \
--do_lower_case \
--train_file $SQUAD_DIR/train-v1.1.json \
--predict_file $SQUAD_DIR/dev-v1.1.json \
--per_gpu_train_batch_size 12 \
--learning_rate 3e-5 \
--num_train_epochs 2.0 \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir /tmp/debug_squad/
```
So you might try this out. The checkpoint is also available on the hub: https://huggingface.co/distilbert-base-cased-distilled-squad<|||||>@NielsRogge Thanks so much!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 14,200 | closed | Trainer batch size auto scaling | # π Feature request
Since `Trainer` handles both batch_size and gradient_accumulation_steps it seems like it could detect some out-of-memory situations and handle those scenarios automatically.
## Motivation
I've been experimenting with model search (model_type, vocab_size, num_hidden_layers, hidden_size) and it's been somewhat difficult to manage the correct batch size for each variant. To avoid a process of trial & error and maintaining configuration tables, what I've been doing to overcome this is detecting memory exhaustion and adapting training arguments on the fly. It's imperfect, but I wonder if there's an official way to achieve this kind of behavior.
## Your contribution
This is just a PoC, I'm sure there are several environments where this might be problematic. In particular CPU training on Linux is quite likely to trigger the OOM killer where the entire process is simply wiped from memory. Nevertheless, this strategy seems helpful at least some of the time.
```python
class BatchAutoScaleTrainer(transformers.Trainer):
''' Try to detect application crashes due to CUDA/CPU OOMs and
rescale batch size. An antiprime batch_size gives best results.
Inspired by PyTorchLightning/pytorch-lightning#1638
'''
def _shrink_bs(self):
# GAS is used by both .train() and .eval() and we need to find a
# suitable setting for both
tbs = self.args.per_device_train_batch_size
ebs = self.args.per_device_eval_batch_size
gas = self.args.gradient_accumulation_steps
for i in range(gas + 1, min(tbs, ebs) + 1):
if tbs % i or ebs % i:
continue
self.args.per_device_train_batch_size = (tbs * gas) // i
self.args.per_device_eval_batch_size = (ebs * gas) // i
self.args.gradient_accumulation_steps = i
return True
return False
def _is_oom(self, err):
# shamelessly stolen from https://github.com/PyTorchLightning/pytorch-lightning/pull/1638/files#diff-5200c11792b86d6a07ea64820e126897aa2e3b7d3d295c92c19b141de6950afeR29-R32
return len(err.args) == 1 and (
"CUDA out of memory." in err.args[0]
or "cuDNN error: CUDNN_STATUS_NOT_SUPPORTED." in err.args[0]
or "DefaultCPUAllocator: can't allocate memory" in err.args[0]
or "CUDA error: CUBLAS_STATUS_ALLOC_FAILED " in err.args[0]
)
def _auto_scale_batch_size(self, code):
while True:
try:
gc.collect()
if torch.cuda.is_available():
torch.cuda.empty_cache()
return code()
except RuntimeError as err:
if self._is_oom(err) and self._shrink_bs():
continue
raise
assert(False) # bug in _shrink_bs() most likely
def train(self, *args, **kwds):
train = super().train
return self._auto_scale_batch_size(
lambda: train(*args, **kwds))
def evaluate(self, *args, **kwds):
evaluate = super().evaluate
return self._auto_scale_batch_size(
lambda: evaluate(*args, **kwds))
```
Any chance something like this might be integrated with the Trainer? | 10-28-2021 18:56:49 | 10-28-2021 18:56:49 | I am very nervous about adding that kind of auto-scaling feature to the Trainer. Note that the `_is_oom` test for instance will catch way more CUDA errors than the OOM: having the wrong number of labels in your model will trigger an error with `CUBLAS_STATUS_ALLOC_FAILED` on most environments.
In a notebook, the kernel is in an unrecoverable state after the `try`/`except` (and `torch.cuda.empty_cache()` does not help), so this wouldn't work either.
So for now, my sense is that such a feature would be more painful for the user than beneficial and I would leave the tuning of the batch size to the user.
<|||||>Thanks very much for the feedback!<|||||>Perhaps instead of shrinking batch_size, it could work the other direction. If gradient_accumulation_steps is > 1, the first few steps could monitor the memory footprint and combine steps when the system sees there is enough capacity. Is that similarly dangerous?
Again, a very rough PoC just to illustrate the behavior:
```python
class BatchAutoScaleTrainer(transformers.Trainer):
def __init__(self, *args, **kwds):
self._should_study = True
super().__init__(*args, **kwds)
self.mini_bs = self.args.n_gpu
def _minibatch(self, bs, mbs, batch):
kl = batch.keys()
return (dict(zip(kl,
(_[i:i+mbs] for _ in batch.values()),
)) for i in range(0, bs, mbs))
def _ministudy(self, bs, mbs):
if mbs < bs:
for i in range(mbs + 1, bs + 1):
if bs % i == 0:
nbs = i
break
d = torch.cuda.current_device()
est = torch.cuda.memory_reserved(d) / mbs * nbs
if est < torch.cuda.get_device_properties(d).total_memory:
self.mini_bs = nbs
return
transformers.trainer.logger.info(f'{__class__.__name__}: mini_bs={nbs}')
self._should_study = False
def training_step(self, model, inputs):
bs = len(next(iter(inputs.values())))
mbs = min(bs, self.mini_bs)
segs = bs // mbs
ts = super().training_step
loss = torch.stack(tuple(
ts(model, batch)
for batch in self._minibatch(bs, mbs, inputs)
)).mean()
if self._should_study:
self._ministudy(bs, mbs)
return loss
def prediction_step(self, model, inputs, prediction_loss_only, ignore_keys):
bs = len(next(iter(inputs.values())))
mbs = self.mini_bs
segs = bs // mbs
ps = super().prediction_step
loss, logits, labels = zip(*(
ps(model, batch, prediction_loss_only, ignore_keys)
for batch in self._minibatch(bs, mbs, inputs)
))
if prediction_loss_only:
return (torch.stack(loss).mean(), None, None)
return (torch.stack(loss).mean(), torch.cat(logits), torch.cat(labels))
```<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Maybe of interest to you @tlby https://github.com/rentruewang/koila<|||||>@LysandreJik Indeed, thanks for the note. rentruewang/koila#12 is a hopeful sign. |
transformers | 14,199 | closed | run_t5_mlm_flax.py | Hi! @patil-suraj which arg to use for run_t5_mlm_flax.py to run on multiple gpus? | 10-28-2021 18:56:36 | 10-28-2021 18:56:36 | I haven't tested it, but it should run as it is on single-host multi-gpu.
You just need to install a JAX version compatible with your CUDA installation, for which you can find the instructions [here](https://github.com/google/jax#pip-installation-gpu-cuda)<|||||>Using the 't5-small' tokenizer is wrong; we should just use the pretrained tokenizer.
I have tried it and got very low accuracy.
Using wikitext/wikitext-103-raw-v1 as the dataset for 10 epochs and pretraining the tokenizer on wikitext-103-raw-v1 got 0.5838 accuracy, but using the t5-small tokenizer gave very low accuracy, and no masking is happening, just deleting tokens.
<|||||>@patil-suraj this is [my try ](https://drive.google.com/file/d/1cjjlcy1-mr8gHji4wy0hMg2ZwX10fDqJ/view?usp=sharing)
to run the same steps using pytorch.
I have tried to use t5-small tokenizer. Also, I trained the given tokenizer in this repo on wikitext to compare.
The results are not the same, seems strange. Training on 10 epochs using :
1. **if tokenizer trained on wiki**
export CUDA_VISIBLE_DEVICES=0,1,2,3; python3 run_t5_mlm_flax.py --output_dir="./ MLM-128wiki/wikitokenizerβ --model_type="t5" --config_name="./wikitext-103-raw-v1" --tokenizer_name="./wikitext-103-raw-v1" --dataset_name="wikitext" --dataset_config_name="wikitext-103-raw-v1" --max_seq_length="128" --per_device_train_batch_size="32" --per_device_eval_batch_size="32" --adafactor --learning_rate="0.005" --weight_decay="0.001" --warmup_steps="2000" --overwrite_output_dir --logging_steps="500" --save_steps="10000" --eval_steps="500" --num_train_epochs=10
2. **if tokenizer is t5 -small tokenizer**
export CUDA_VISIBLE_DEVICES=0,1,2,3; python3 run_t5_mlm_flax.py --output_dir="./ MLM-128wiki/t5-tokenizerβ --model_type="t5" --config_name="./wikitext-103-raw-v1" --tokenizer_name="t5-small" --dataset_name="wikitext" --dataset_config_name="wikitext-103-raw-v1" --max_seq_length="128" --per_device_train_batch_size="32" --per_device_eval_batch_size="32" --adafactor --learning_rate="0.005" --weight_decay="0.001" --warmup_steps="2000" --overwrite_output_dir --logging_steps="500" --save_steps="10000" --eval_steps="500" --num_train_epochs=10
**results**
| | T5 tokenizer | tokenizer trained on wiki |
| --- | --- | --- |
| train loss | 2.307 | 2.074 |
| eval loss | 2.254 | 1.959 |
using my code as following:
**1. if tokenizer trained on wiki:**
export CUDA_VISIBLE_DEVICES=0,1,2,3; python3 rum_mlm_torch.py --output_dir="./torch/wiki" --model_type="t5" --config_name="./wikitext-103-raw-v1" --tokenizer_name="./wikitext-103-raw-v1" --dataset_name="wikitext" --dataset_config_name="wikitext-103-raw-v1" --max_seq_length="128" --per_device_train_batch_size="32" --per_device_eval_batch_size="32" --adafactor --learning_rate="0.005" --weight_decay="0.001" --warmup_steps="2000" --logging_steps="500" --save_steps="10000" --eval_steps="1000" --do_train --do_eval --do_predict --overwrite_output_dir --report_to='wandb' --num_train_epochs=10 --evaluation_strategy steps
**2. if tokenizer is t5 tokenizer:**
export CUDA_VISIBLE_DEVICES=0,1,2,3; python3 rum_mlm_torch.py --output_dir="./torch/t5tokenizer" --model_type="t5" --config_name="./wikitext-103-raw-v1" --tokenizer_name="t5-small" --dataset_name="wikitext" --dataset_config_name="wikitext-103-raw-v1" --max_seq_length="128" --per_device_train_batch_size="32" --per_device_eval_batch_size="32" --adafactor --learning_rate="0.005" --weight_decay="0.001" --warmup_steps="2000" --logging_steps="500" --save_steps="10000" --eval_steps="1000" --do_train --do_eval --do_predict --overwrite_output_dir --report_to='wandb' --num_train_epochs=10 --evaluation_strategy steps
**results:**
| | T5 tokenizer | tokenizer trained on wiki |
| --- | --- | --- |
| train loss | 4.675 | 3.961 |
| eval loss | 4.562 | 3.8 |
>
@patil-suraj @patrickvonplaten, Any explanation whyusing flax giving much more better results using torch?
<|||||>@Arij-Aladel - could you specify your question here a bit? What exactly is the issue?<|||||>@patrickvonplaten I need to train T5 from hugging face from scratch on mlm task using pytorch. To my knowledge, there is no example on your repo to do that. The **_main issue_** that the same dataset preprocessing using the same T5 model but with two different frameworks flax and pytorch gave me different results. I did not change anything in the original run_mlm_flax.py code I just tried to use pytorch and Trainer instead. Everything is still as in the original code so why I am getting different results? I need torch version cause I have already built my model based on T5 from huggingface and I need also to train my model on mlm task and compare it with T5 from hugging face. That is why I started with T5 first as a baseline.
I have decided as the first step to use [wikitext-103-raw-v1 ](https://huggingface.co/datasets/wikitext#wikitext-103-raw-v1) dataset for pretraining.
The first question was in my mind which tokenizer to use so I have tried t5-small tokenizer to pretrain using the [original script,](https://github.com/huggingface/transformers/blob/master/examples/flax/language-modeling/run_t5_mlm_flax.py) then I trained the [tokenizer](https://github.com/huggingface/transformers/blob/master/examples/flax/language-modeling/t5_tokenizer_model.py) on train split of wikitext-103-raw-v1 dataset .
1. **First issue was using the pretrained tokenizer on wikitext-103-raw-v1 dataset gave me better results** and this raise another question in my mind , If I need to pretrain the model on mlm task then finetune it on another task, which tokenizer to use? I mean do I need to pretrain the tokenizer again and again evry time I will use new dataset? or simply uset 5-small tokenizer everywhere? or decide which datasets will be used in my experiements train the tokenizer on all train splits then do the pretraining and funetuning?
2. Second Issue : trying to mimic run_mlm_flax.py using torch Trainer keeping the dataset preprocessing and collator class with no change resulted in unsatisfied results even I tried to train on 100 epochs, still using 10 epochs with original script gives better results. Can you please guid me to the reason? I do not need the flax version I need torch pipeline to train T5 on mlm task from scratch. Seems my try was not good<|||||>Hey @Arij-Aladel,
We currently don't have any support to pre-train T5 from scratch in PyTorch. We only have a script in Flax and we recommend https://github.com/google-research/text-to-text-transfer-transformer for training in TF.
Could you maybe instead try whether you can find support for pretraining in PyTorch on the forum: https://discuss.huggingface.co/ ?
Thanks!<|||||>@patrickvonplaten I know that that is why I tried it myself and the performance using pytorch version is not satisfied even the masking pipeline is the same I have tracked it. That is why I asked why is the performance of flax T5 different of pytorch T5<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>@patrickvonplaten When i log `jax.local_device_count()` and `jax.device_count()` when running the run_t5_mlm_flax.py script it returns 1 even though i'm training with multiple GPUs on a single host. Any ideas how I can fix?<|||||>Hey @ToluClassics - this issue seems to be related to JAX rather than the Transformers library - could you try to open an issue there? :-)<|||||>> @patrickvonplaten I need to train T5 from hugging face from scratch on mlm task using pytorch. To my knowledge, there is no example on your repo to do that. The **_main issue_** that the same dataset preprocessing using the same T5 model but with two different frameworks flax and pytorch gave me different results. I did not change anything in the original run_mlm_flax.py code I just tried to use pytorch and Trainer instead. Everything is still as in the original code so why I am getting different results? I need torch version cause I have already built my model based on T5 from huggingface and I need also to train my model on mlm task and compare it with T5 from hugging face. That is why I started with T5 first as a baseline. I have decided as the first step to use [wikitext-103-raw-v1 ](https://huggingface.co/datasets/wikitext#wikitext-103-raw-v1) dataset for pretraining. The first question was in my mind which tokenizer to use so I have tried t5-small tokenizer to pretrain using the [original script,](https://github.com/huggingface/transformers/blob/master/examples/flax/language-modeling/run_t5_mlm_flax.py) then I trained the [tokenizer](https://github.com/huggingface/transformers/blob/master/examples/flax/language-modeling/t5_tokenizer_model.py) on train split of wikitext-103-raw-v1 dataset .
>
> 1. **First issue was using the pretrained tokenizer on wikitext-103-raw-v1 dataset gave me better results** and this raise another question in my mind , If I need to pretrain the model on mlm task then finetune it on another task, which tokenizer to use? I mean do I need to pretrain the tokenizer again and again evry time I will use new dataset? or simply uset 5-small tokenizer everywhere? or decide which datasets will be used in my experiements train the tokenizer on all train splits then do the pretraining and funetuning?
> 2. Second Issue : trying to mimic run_mlm_flax.py using torch Trainer keeping the dataset preprocessing and collator class with no change resulted in unsatisfied results even I tried to train on 100 epochs, still using 10 epochs with original script gives better results. Can you please guid me to the reason? I do not need the flax version I need torch pipeline to train T5 on mlm task from scratch. Seems my try was not good
@Arij-Aladel Hi, do you have any updates on this issue? I'm also trying to pre-train T5 from scratch in PyTorch, can you share your scripts of `run_t5_mlm_flax.py`? I just converted the parameters trained from flax to the pytorch:
```
model = FlaxT5ForConditionalGeneration.from_pretrained(pretrained_path)
pt_model = T5ForConditionalGeneration.from_pretrained(tmp_path, from_flax=True)
```
but it doesn't seem to work.
And for your first issue, I think we need to retrain the tokenizer every time when we use new datasets.<|||||>@Eurus-Holmes Hi! yes of course you can find an example here https://github.com/Arij-Aladel/T5-Tasks |
transformers | 14,198 | closed | use functional interface for softmax in attention | There are several instances of (ab)using the PyTorch modular interface to compute softmax where it would be more natural to use the functional interface. This patch changes the occurrences I found.
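For illustration, the pattern in question versus the functional call (a generic sketch, not a quote of any particular model file):
```python
import torch

scores = torch.randn(2, 4, 4)

# modular interface instantiated inline, called once and discarded
attn = torch.nn.Softmax(dim=-1)(scores)

# functional interface, which this patch switches to
attn = torch.nn.functional.softmax(scores, dim=-1)
```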
| 10-28-2021 18:28:04 | 10-28-2021 18:28:04 | Probably it's an attention thing. :) <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Closing this as stale.<|||||>Out of curiosity, why didn't you want to fix the last two instances and closed your PR instead?<|||||>@sgugger To be honest, I'd have preferred to not comment on this further. But as you asked:
So from my side, this is what happened
- I use transformers as a reference for some custom implementation (i.e. I don't plan on using transformers itself) and notice a dubious code pattern of instantiating Softmax and immediately using it.
- Out of courtesy, I submit a PR. To make the review worthwhile, I look for other instances of the identical pattern and fix these, too.
At this point, I submitted a patch that looked "very low risk (because I never say obviously correct), easy to review" to me.
It probably doesn't fix all dubious patterns in transformers, but it takes out 28 instances spread across as many files.
- The patch is promptly approved by you and stas. Thank you!
Then instead of merging:
- You point to two other directories that you say contain more of it (I still don't know what you mean, there are uses of the modular Softmax, but in a perfectly OK way to me, I didn't want to audit your entire codebase for dubious patterns).
- Nothing happens for a month.
- Now your bot says the PR is stale and places the burden of moving it along on me.
I agree with the bot that the PR isn't moving with the expected speed, but I don't want to spend more time with it. I thought that searching for the same patterns across your codebase and fixing the other 27 places would have been a reasonable trade-off between "don't submit 28 identical one-liners" and "don't make it more complicated than it needs to be"; obviously, you did not agree. That is OK, but I don't want to do the extra work that would be needed to make this patch acceptable to you.
So you work at scale and process hundreds of patches any given day and I am not saying your process isn't adequate. It is just not for me.
<|||||>There seems to have been a misunderstanding here and I just wish you had either told us that the remaining instances you had found were fine in your book, or that you didn't want to work further on this PR. It was just a suggestion on my side and I never said the PR would not be merged if you didn't include those last two instances. I apologize if my comment upset you.
If you want to reopen your PR, we'll be happy to merge it.<|||||>No worries, there is nothing wrong with it nor with your comment, it's just that I don't want to change this patch anymore. (And maybe I wasn't in the mood for your bot comment, but hey.)
If it's still useful, I'm happy to have it merged, if not, it's OK too.<|||||>The bot is there to remind us when we forget PRs for a long time like this one, sorry about it :-)
Thanks again for your contribution!<|||||>Thank you! You're awesome! |
transformers | 14,197 | closed | Fix EncoderDecoderModel docs | # What does this PR do?
Updates the docs of the EncoderDecoderModel classes, as a follow-up of #14139. | 10-28-2021 15:39:47 | 10-28-2021 15:39:47 | |
transformers | 14,196 | closed | [T5v1_1] Add lm model pretrained models as well | Reminder to @patrickvonplaten to add all [LM-Adapted: t5.1.1.lm100k checkpoitns](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#lm-adapted-t511lm100k) to the hub. | 10-28-2021 15:32:00 | 10-28-2021 15:32:00 | FYI @TevenLeScao @craffel @VictorSanh <|||||>https://huggingface.co/models?other=t5-lm-adapt |
transformers | 14,195 | closed | Help training TrOCR | Hello,
Thank you for your hard work on the repo.
I want to train the newly added TrOCR for my use case on my dataset.
I am a little confused on how to proceed.
https://huggingface.co/transformers/training.html
The finetuning tutorial seems to be geared towards strictly NLP models.
Am I missing something? Is there some documentation somewhere related to how to train
the TrOCR with images and text?
Any pointers or help would be greatly appreciated.
Thank you | 10-28-2021 14:58:10 | 10-28-2021 14:58:10 | You're in luck, I just uploaded a notebook showcasing how to fine-tune TrOCR on a custom dataset.
You can find it here: https://github.com/NielsRogge/Transformers-Tutorials/blob/master/TrOCR/Fine_tune_TrOCR_on_IAM_Handwriting_Database_using_Seq2SeqTrainer.ipynb
Regarding documentation, this can be found in the docs of `VisionEncoderDecoderModel` [here](https://huggingface.co/transformers/master/model_doc/visionencoderdecoder.html) - soon available in the new v4.12 release.<|||||>In addition to that, it would be great to have a tutorial without transformers trainer for same task but with the `VisionEncoderDecoderModel ` :hugs: <|||||>Do you mean using regular PyTorch?<|||||>yes or lightning<|||||>and if it possible also a way to export this model into ONNX (if it is possible - i know a bit trouble with .generate() /greedy search)<|||||>Sure, I can do that. It would be very similar, except that for computing metrics during evaluation one needs to use the `generate()` method instead of just doing a forward pass and argmaxing the logits.<|||||>@NielsRogge
i have also created a [Colab Notebook](https://colab.research.google.com/drive/1PZz8oGH3pZEjDpMJex2nZ5p8MiYXzYA8?usp=sharing) with my try but fail at the val & predict step<|||||>this would be great :hugs: what do you think about the ONNX part ? Can you show some example also in a tutorial ? I think this would be awesome and very useful also for EncoderDecoderModel and the speech counterpart
So i will keep a eye on your tutorials repo much thanks for this<|||||>@NielsRogge Thank you for the very, very quick response. I will start looking into the notebook provided.
<|||||>@felixdittrich92 fine-tuning TrOCR using native PyTorch notebook is here: https://github.com/NielsRogge/Transformers-Tutorials/blob/master/TrOCR/Fine_tune_TrOCR_on_IAM_Handwriting_Database_using_native_PyTorch.ipynb<|||||>@NielsRogge nice thank you :)
but overall I do the same with VisionEncoderDecoderModel in val and it OOMs instantly (in val with batch_size=2, T4 16GB), any idea?
I also have a very similar model (ViTSTR) where I have no problem fitting a batch_size of 64 (same data).
```
def validation_step(self, batch, batch_idx):
image_tensors = batch['pixel_values']
labels = batch['input_ids']
plain_labels = self.tokenizer.batch_decode(labels, skip_special_tokens=True)
loss, logits = self(pixel_values=image_tensors, labels=labels)
preds = logits.argmax(-1) # [batch_size, seq_len]
prob = torch.softmax(logits, -1).max().item()
generated_ids = self.vit_bert_model.generate(image_tensors).detach()
plain_preds = self.tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
# compute word error rate
word_error_rate = wer(predictions=plain_preds, references=plain_labels)
# compute character error rate
char_error_rate = cer(predictions=plain_preds, references=plain_labels)
self.log("val_loss", loss.detach(), prog_bar=True, on_step=False, on_epoch=True, sync_dist=self.sync_dist)
self.log("val_acc", accuracy(preds, labels).detach(), prog_bar=True, on_step=False, on_epoch=True, sync_dist=self.sync_dist)
self.log("word_error", torch.tensor(word_error_rate).detach(), prog_bar=True, on_step=False, on_epoch=True, sync_dist=self.sync_dist)
self.log("char_error", torch.tensor(char_error_rate).detach(), prog_bar=True, on_step=False, on_epoch=True, sync_dist=self.sync_dist)
return loss
```
```
self.vit_bert_model = VisionEncoderDecoderModel.from_encoder_decoder_pretrained(vit_backbone, 'bert-base-multilingual-cased', return_dict=True)
self.vit_bert_model.config.decoder.decoder_start_token_id = self.tokenizer.cls_token_id # 101
self.vit_bert_model.config.decoder.eos_token_id = self.tokenizer.sep_token_id # 102
self.vit_bert_model.config.decoder.pad_token_id = self.tokenizer.pad_token_id # 0
```<|||||>You can wrap everything in a `with torch.no_grad()` to save memory.<|||||>@NielsRogge
That's Lightning, if I'm not totally wrong; this is done automatically in the val step :)
So I have no reason for this; I pad the labels up to a size of 128.
I use currently deit-base-distilled weights for the Encoder and bert-base-multilingual-cased as Decoder<|||||>Hello @NielsRogge
Reopening this issue with a question.
Thank you for the notebook. It is very helpful.
I am trying to finetune the TrOCR to be able to recognize numbers of up to maximum 9 characters with and without a decimal point.
I have 2 questions:
1. Do I need to train a custom tokenizer or is the pretrained Roberta one included with TrOCR good enough?
2. Do you have any recommendations regarding model parameters that would be better suited for my use case?
Thing like number of beams for beam search, max length, vocab size, etc?
Thank you<|||||>Thank you so much for your hard work!
I'm starting work with NER and OCR. You helped me a lot.
To use this model in the Portuguese language, I need to train with a text base in Portuguese, right?
Do you know any tools to help me with the creation of this base?
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>> Do I need to train a custom tokenizer or is the pretrained Roberta one included with TrOCR good enough?
You don't need to train a custom tokenizer, the one of RoBERTa can be used.
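For example (a small sketch; the checkpoint name is just one of the public TrOCR checkpoints):
```python
from transformers import TrOCRProcessor

# the processor bundles the image feature extractor and the RoBERTa tokenizer used by TrOCR
processor = TrOCRProcessor.from_pretrained("microsoft/trocr-base-handwritten")
labels = processor.tokenizer("1234.56", padding="max_length", max_length=16).input_ids
```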
> Do you have any recommendations regarding model parameters that would be better suited for my use case?
Thing like number of beams for beam search, max length, vocab size, etc?
Not really, the authors used beam search with num_beams=5, other than that, all default parameters of the [generate](https://huggingface.co/docs/transformers/master/en/main_classes/model#transformers.generation_utils.GenerationMixin.generate) method were used.
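In code that's simply (sketch; `model`, `processor` and `pixel_values` are assumed to be set up as in the notebook):
```python
# beam search with 5 beams, everything else left at the generate() defaults
generated_ids = model.generate(pixel_values, num_beams=5)
predicted_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
```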
> To use this model in the Portuguese language, I need to train with a text base in Portuguese, right?
Yes, the TrOCR model that Microsoft released was only trained on English image/text pairs, however they are planning to release a multilingual variant. The problem is that RoBERTa's tokenizer only includes tokens for the English language, so one would need to train the TrOCR model from scratch, starting from a multilingual (or Portuguese) text Transformer as decoder.<|||||>Thank you very much for the information!
I googled and found BERTimbau, link:
https://github.com/neuralmind-ai/portuguese-bert
This model is pre-trained in Portuguese.
Would it be possible to use this model in the decoder?
<|||||>Hi,
Sorry for the late reply, but yes you can combine a vision encoder (like `google/vit-base-patch16-in21k`) with a language decoder in a different language (like `neuralmind/bert-base-portuguese-cased`), assuming you have enough data to fine-tune the model on image -portuguese text pairs.<|||||>Hello.
Thank you very much for sharing code and models. It is really great and useful.
I am wondering whether I can ask a relevant question from here.
I am trying to train a new VisionEncoderDecoder model for new language (Bahasa-Indonesian).
The initial performance is pretty bad (72% CER). I am wondering whether I can get some advice.
Based on my understanding, Bahasa-Indonesian uses Alphabet characters (without special characters such as German Epsilon). So my initial model is as follows:
Encoder: Pretrained ViT model (βgoogle/vit-base-patch16-224β)
Decoder: Pretrained Indonesian Roberta Language model ("'cahya/roberta-base-indonesian-1.5G")
For updating the decoder weights using the pretrained RoBERTa model, I have a few questions.
Currently, the parameter names from Roberta models are different from Decoder model parameters, so we need some mapping process. I did the following steps, and i am wondering whether there are some errors.
encoder
encoder = ViTModel.from_pretrained(βgoogle/vit-base-patch16-224β)
decoder
lmconfig = TrOCRConfig(vocab_size=indonesian_lm_vocab_size, β¦, ) # update accordingly by indonesian lm
decoder = CausualLM(lmcfg)
lm = RobertaForCausalLM.from_pretrained(βcahya/roberta-base-indonesian-1.5Gβ)
decoder = load_wts(decoder, lm)
def load_wts(decoder, lm):
param_name_dict = {
βattention.self.query.weightβ: βself_attn.q_proj.weightβ,
βattention.self.query.biasβ: βself_attn.q_proj.biasβ,
βattention.self.key.weightβ: βself_attn.k_proj.weightβ,
βattention.self.key.biasβ: βself_attn.k_proj.biasβ,
βattention.self.value.weightβ: βself_attn.v_proj.weightβ,
βattention.self.value.biasβ: βself_attn.v_proj.biasβ,
βattention.output.dense.weightβ: βself_attn.out_proj.weightβ,
βattention.output.dense.biasβ: βself_attn.out_proj.biasβ,
βattention.output.LayerNorm.weightβ: βself_attn_layer_norm.weightβ,
βattention.output.LayerNorm.biasβ: βself_attn_layer_norm.biasβ,
βoutput.LayerNorm.weightβ: βfinal_layer_norm.weightβ,
βoutput.LayerNorm.biasβ:βfinal_layer_norm.biasβ,
βword_embeddings.weightβ:βembed_tokens.weightβ,
βposition_embeddings.weightβ: βembed_positions.weightβ,
βLayerNorm.weightβ:βlayernorm_embedding.weightβ,
βLayerNorm.biasβ: βlayernorm_embedding.biasβ
}
wts = lm.state_dict()
new_wts = {}
dwts = decoder.state_dict()
for key in wts.keys():
nkey = rename_param(key)
if nkey:
new_wts[nkey] = wts[key]
decoder.load_state_dict(new_wts, strict=False)
return decoder
As paper mentioned, this does not update weights in the encoder-decoder attention layers
since they do not exist in Roberta language model.
If you have any advice, please let me know. Thank you a lot.<|||||>> You're in luck, I just uploaded a notebook showcasing how to fine-tune TrOCR on a custom dataset.
>
> You can find it here: https://github.com/NielsRogge/Transformers-Tutorials/blob/master/TrOCR/Fine_tune_TrOCR_on_IAM_Handwriting_Database_using_Seq2SeqTrainer.ipynb
>
> Regarding documentation, this can be found in the docs of `VisionEncoderDecoderModel` [here](https://huggingface.co/transformers/master/model_doc/visionencoderdecoder.html) - soon available in the new v4.12 release.
Are there any pretrained models of Japanese, Korean, etc.? Or how to train them? [email protected]<|||||>Hi!
If you want to train a TrOCR-like model on another language, you can initialize (also called "warm-start") the weights of the encoder and decoder of a `VisionEncoderDecoderModel` with those of any compatible checkpoint available on the hub. One then only adds randomly initialized cross-attention layers in the decoder, which need to be fine-tuned on a supervised dataset of image-text pairs.
For instance, let's say you want to do OCR in Japanese, then you can initialize a `VisionEncoderDecoderModel` with ViT as encoder and https://huggingface.co/cl-tohoku/bert-base-japanese as decoder. This can be done as follows:
```
from transformers import VisionEncoderDecoderModel
model = VisionEncoderDecoderModel.from_encoder_decoder_pretrained("google/vit-base-patch16-224-in21k", "cl-tohoku/bert-base-japanese")
# ensure that randomly initialized cross-attention layers are added
assert model.config.decoder.is_decoder is True
assert model.config.decoder.add_cross_attention is True
```<|||||>Thank you for your reply.
I wonder whether this is the same as the link below. That link mainly uses [libtr.so or dll](https://github.com/myhub/tr/blob/master/tr/libtr.so) to implement Chinese OCR; it is small and fast. Is there any similar method or tool for Japanese or Korean?
https://github.com/myhub/tr<|||||>> Hi,
>
> Sorry for the late reply, but yes you can combine a vision encoder (like `google/vit-base-patch16-in21k`) with a language decoder in a different language (like `neuralmind/bert-base-portuguese-cased`), assuming you have enough data to fine-tune the model on image -portuguese text pairs.
How about handwritten text in indonesian?<|||||>You can check languages here: https://huggingface.co/languages.
If you check Indonesian, there are currently 135 models available for this language. An example could be this one: https://huggingface.co/indobenchmark/indobert-base-p1<|||||>How to train crnn ocr model with transformer? [email protected]<|||||>> You can check languages here: https://huggingface.co/languages.
>
> If you check Indonesian, there are currently 135 models available for this language. An example could be this one: https://huggingface.co/indobenchmark/indobert-base-p1
Am I doing this right? @NielsRogge
```python
from transformers import ViTFeatureExtractor, RobertaTokenizer, TrOCRProcessor
encode = 'google/vit-base-patch16-224-in21k'
decode = 'cahya/roberta-base-indonesian-1.5G'
feature_extractor=ViTFeatureExtractor.from_pretrained(encode)
tokenizer = RobertaTokenizer.from_pretrained(decode)
processor = TrOCRProcessor(feature_extractor=feature_extractor, tokenizer=tokenizer)
from transformers import TrOCRProcessor
train_dataset = IAMDataset(root_dir='../dataset/',
df=train_df,
processor=processor)
eval_dataset = IAMDataset(root_dir='../dataset/',
df=test_df,
processor=processor)
model = VisionEncoderDecoderModel.from_encoder_decoder_pretrained(encode, decode)
# set special tokens used for creating the decoder_input_ids from the labels
model.config.decoder_start_token_id = processor.tokenizer.cls_token_id
model.config.pad_token_id = processor.tokenizer.pad_token_id
# make sure vocab size is set correctly
model.config.vocab_size = model.config.decoder.vocab_size
config_decoder.is_decoder = True
config_decoder.add_cross_attention = True
# set beam search parameters
model.config.eos_token_id = processor.tokenizer.sep_token_id
model.config.max_length = 64
model.config.early_stopping = True
model.config.no_repeat_ngram_size = 3
model.config.length_penalty = 2.0
model.config.num_beams = 4
```<|||||>hello @NielsRogge, I'm trying to train TrOCR model for Arabic handwritten OCR, the model and preprocessor are as follows:
```
def load_model(from_disk: bool) -> VisionEncoderDecoderModel:
model: VisionEncoderDecoderModel = VisionEncoderDecoderModel.from_encoder_decoder_pretrained("google/vit-base-patch16-224-in21k", "bhavikardeshna/xlm-roberta-base-arabic")#.from_pretrained(model_path)
print(f"Using device {device}.")
model.to(device)
return model
def init_model_for_training(model: VisionEncoderDecoderModel, processor: TrOCRProcessor):
model.config.decoder_start_token_id = processor.tokenizer.cls_token_id
model.config.pad_token_id = processor.tokenizer.pad_token_id
model.config.vocab_size = model.config.decoder.vocab_size
model.config.bos_token_id = processor.tokenizer.bos_token_id
model.config.decoder_start_token_id = 0
model.config.decoder.is_decoder = True
model.config.decoder.add_cross_attention = True
def load_processor() -> TrOCRProcessor:
feature_extractor=ViTFeatureExtractor.from_pretrained("google/vit-base-patch16-224-in21k")
model_path = "bhavikardeshna/xlm-roberta-base-arabic"
tokenizer = AutoTokenizer .from_pretrained(model_path)
return TrOCRProcessor(feature_extractor=feature_extractor, tokenizer=tokenizer)
```
However, when I check the validation set predictions, they all come out as English-like garbage.
What is the problem here?
<|||||>Hi,
thanks for your interest in TrOCR! So you're instantiating the weights of the decoder with those of [bhavikardeshna/xlm-roberta-base-arabic](https://huggingface.co/bhavikardeshna/xlm-roberta-base-arabic), which looks like a multilingual model (XLM-RoBERTa), fine-tuned on an Arabic question answering dataset. I'd advise here to use a monolingual (Arabic-only) model instead.
One option here is AraBERT: https://huggingface.co/aubmindlab/bert-base-arabertv2. This seems to be one of the most popular Arabic-only text models.
Also note that in your case, instantiating the model as follows:
```
from transformers import VisionEncoderDecoderModel
model = VisionEncoderDecoderModel.from_encoder_decoder_pretrained("google/vit-base-patch16-224-in21k", "aubmindlab/bert-base-arabertv2")
```
is sufficient, you don't need to do `model.config.decoder.is_decoder = True` and `model.config.decoder.add_cross_attention = True` as the cross-attention layers are already automatically added (you can see that in the warning, which tells you that those layers will have randomly initialized weights).
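One concrete sanity check is to decode a prepared label back to text (a sketch; `train_dataset` and `processor` are whatever you built for training):
```python
# the decoded label should read as the original Arabic transcription
labels = train_dataset[0]["labels"]
labels = [token for token in labels if token != -100]  # drop the ignore-index padding, if used
print(processor.tokenizer.decode(labels, skip_special_tokens=True))
```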
I'd also check the data preparation really well (make sure to decode the labels just before feeding them to the model), to make sure you're really preparing the data appropriately.<|||||>Hi @NielsRogge .. Question please what did you mean by decoding the labels before feeding them to the model? should it be feed to the model in the training as text?<|||||>Also please, the MSE loss calculated basically by the model is so small while finetuning on a handwritten Arabic dataset (tends to 0) given a high CER, WER..
when I tried the crossentropy the loss values are huge values (number of 8 digits) so what could this behavior indicate?<|||||>Hi @NielsRogge,
When I train the model using the Hugging Face integrated DeepSpeed, there is an error.
`AttributeError: 'VisionEncoderDecoderConfig' object has no attribute 'hidden_size'`
Do you know how to configure the hidden_size to avoid the error?<|||||>Hi @NielsRogge,
Congrats on the great work on the state-of-the-art TrOCR, I have a few questions:
1. How can we tune TrOCR to perform on multi-line documents, as currently it works only on single-line text?
2. How can we train on the Hindi language?
3. How can we export the OCR model to ONNX?
4. How can we control latency? I was trying to do some batch processing by passing stacked ROI tensors to the base printed model.
Thanks<|||||>Hi,
> 1.How can be tune the TrOCR to perform on multi lines document,as currently it works only on single line text.
I guess you can try fine-tuning the model on (image, multi-line text) pairs, and see whether the model is able to pick that up. Note that pre-training only happened on single-line text images.
> 2.How can we train on HIndi Language
See the thread above; it comes down to replacing the text decoder with a pre-trained one from the hub and fine-tuning the model on (image, Hindi text) pairs.
> 3. How can we export the OCR model to ONNX.
We've recently added ONNX support for the VisionEncoderDecoderModel class, check details here: https://github.com/huggingface/transformers/pull/19254.
> 4.How to control latency as I was trying to do some batch processing on passing ROIs tensors stacked to the Base printed model
I'd recommend taking a look at our Optimum library, which allows you to optimize/quantize Transformer-based models such as TrOCR: https://huggingface.co/docs/optimum/index<|||||>Thanks @NielsRogge for answering, I really appreciate your work, but can you please elaborate on points 1 and 2 above or provide some good references?<|||||>Has anyone achieved a successful implementation of this method for languages other than English? I have come across some difficulty in my efforts and would appreciate some feedback as to whether it's a viable option to pursue.<|||||>@vick998 Well, I am currently working on multi-line text recognition support for TrOCR (English) and improving model inference. For other languages, I would suggest it's worth focusing on a pretrained language decoder and fine-tuning the model on image-text pairs.
@NielsRogge Can you elaborate on other-language model development?<|||||>I have fine-tuned the TrOCR small printed model on a custom dataset. The PyTorch model provides good accuracy and shows a decrease in CER. I want to convert it to ONNX. After converting the model to ONNX, my accuracy decreases by more than 20%. Can anyone explain why this is happening?
Thanks
|
transformers | 14,194 | closed | AttributeError: 'str' object has no attribute 'squeeze' | Hello guys, I am new to Hugging Face. I was running it on my Ubuntu 18, in Jupyter notebook.
I was running the example case (tutorial) for the https://huggingface.co/neuraly/bert-base-italian-cased-sentiment model. But I get this error:
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
/tmp/ipykernel_29169/2345971003.py in <module>
1 # Remove the fake batch dimension
----> 2 logits = logits.squeeze(0)
3
4 # The model was trained with a Log Likelyhood + Softmax combined loss, hence to extract probabilities we need a softmax on top of the logits tensor
5 proba = nn.functional.softmax(logits, dim=0)
AttributeError: 'str' object has no attribute 'squeeze'
```
When I inspect the value of the logits variable, its content is the string "logits".
## Environment info
- `transformers` version: 4.11.3
- Platform: Linux-5.4.0-89-generic-x86_64-with-glibc2.27
- Python version: 3.9.7
- PyTorch version (GPU?): 1.10.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Nope
- Using distributed or parallel set-up in script?: Nope
### Who can help
Models:
- neuraly/bert-base-italian-cased-sentiment: @gianpy15
Library:
- Tokenizers: @LysandreJik
Documentation: @sgugger
-->
## Information
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behaviour:
1. Run the example in the tutorial.
| 10-28-2021 14:50:09 | 10-28-2021 14:50:09 | That code can't work since Transformers v4
```
# Call the model and get the logits
logits, = model(tensor)
```
you need to do
```
# Call the model and get the logits
logits = model(tensor).logits
```
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
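For completeness, a minimal working version of the tutorial snippet under the post-v4 API (since v4 the model returns a `ModelOutput` by default, so tuple-unpacking iterates over its keys, which is why `logits` ended up being the string "logits"):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "neuraly/bert-base-italian-cased-sentiment"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

inputs = tokenizer("Questo film mi piace molto", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # attribute access instead of tuple unpacking

# remove the fake batch dimension and turn the logits into probabilities
proba = torch.nn.functional.softmax(logits.squeeze(0), dim=0)
print(proba)  # class probabilities (see the model card for the label order)
```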
transformers | 14,193 | closed | Adding support for `truncation` parameter on `feature-extraction` pipeline. | Fixes #14183
# What does this PR do?
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 10-28-2021 14:20:41 | 10-28-2021 14:20:41 | Can you also include `padding` as well? Since we're extracting features, I'd like to be able to specify both padding and truncation strategies. Thanks! @Narsil <|||||>Padding for pipelines is something I would like to keep orthogonal to business logic (See https://github.com/huggingface/transformers/pull/13724).
It's more batching than padding, but I imagine you pad mostly for the batch.
Currently, pipelines do not batch ever, meaning the padding is not used. Would adding padding change the results of `feature-extraction`?<|||||>Only slightly, I think. Right now you get embeddings of varying size depending on the size of your input sequence. If you want to use somehow these embeddings in a downstream task, it's weird to have varying size. In my case, I think I'm going to use only the embedding corresponding to the CLS, so I'm good. <|||||>@Narsil Note that padding is supported in other pipelines. I think as a user, it's maddening to have varied behavior depending on which pipeline you use. I personally think that lack of consistency across pipelines is problematic. Take a look at this pipeline code, note there is logic around padding:
https://github.com/huggingface/transformers/blob/026866df92afe40cdf928839864111015a62d3b5/src/transformers/pipelines/text2text_generation.py#L92
<|||||>> it's maddening to have varied behavior depending on which pipeline you use.
You're 100% correct, that's part of the reason of the large rewrite which is happening.
For instance, the rewrite enables you to write either `pipeline(..., truncation=True)` or `pipe = pipeline(..); pipe(..., truncation=True)`.
And that for all pipelines, and all parameters. This was far from the case before.
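Concretely, with the `truncation` parameter this PR adds, both forms would look something like this (sketch):
```python
from transformers import pipeline

extractor = pipeline("feature-extraction", model="bert-base-cased")

# per call...
features = extractor("some very long text " * 500, truncation=True)

# ...or once, at construction time
extractor = pipeline("feature-extraction", model="bert-base-cased", truncation=True)
features = extractor("some very long text " * 500)
```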
If anything dropping padding from the code you're quoting would be the way to go. (At least, deprecating it first, we have to maintain compatibility as much as possible). This code is currently legacy, and should be rewritten sometime in the future. The thing is there are a couple of directions to be considered for `text2text-generation` and we're also trying to align pipelines with other libraries. (https://github.com/huggingface/huggingface_hub/tree/main/api-inference-community)
Padding, is like batching, it was very spurious support across pipelines, we're closing the gap, but it takes time, and backward compatibility is important. The core idea is to get orthogonal behavior whereever possible. So as much as possible, individual pipelines should NOT handle them, all this logic should be enabled in the parent class. Not all models are even capable of padding (`gpt2` for instance).
Truncation for instance, is not orthogonal, since `question-answering` and `zero-shot-classification` will handle long prompts by chunking the input. Some pipelines input cannot really be chunked: `summarization` for instance uses and encoder-decoder, if the prompt does not fit the size of the model, then the summary cannot realistically chunk (or it will come with its own set of drawbacks let's say).
That's also the reason why adding new parameters is something we try to think about before jumping to it.
`truncation` in `feature-extraction` is important because afaik, sentence embeddings do use the feature-extraction capability, and missing the last part of a sentence is indeed OK in a lot of cases (you want to use only the first token embedding, and missing part of the sentence is OK since it's only about matching later). It still needs to be opt-in as you need to explicitly know you can miss part of the sentence. Ideally, we would also prompt a warning since we're ignoring part of the sentence. And since a user sending text has no idea how long it is token-wise, it would be better to tell which part of the sentence is being chunked.
Hope this clears a bit what's going on.
Happy to receive feedback here too. |
transformers | 14,192 | closed | Add audio-classification benchmarking results | # What does this PR do?
Adds results for DistilHuBERT and SEW to the audio classification example results | 10-28-2021 12:42:08 | 10-28-2021 12:42:08 | Nice! |
transformers | 14,191 | closed | Fix SEW-D implementation differences | # What does this PR do?
This sets the default activation function for SEW-D to `gelu_python` and the `layer_norm_eps` of the DeBERTa layer to `1e-7` to reproduce the model more closely.
cc @LysandreJik | 10-28-2021 12:38:05 | 10-28-2021 12:38:05 | @LysandreJik - would be nice if we could merge this before the release. Without this PR we are getting significant differences in the output logits between orginial SEW-D and our SEW-D version (up to 5e0). |
transformers | 14,190 | closed | [GPTJ] enable common tests and few fixes | # What does this PR do?
Currently, GPTJ does not run common tests because its test class does not subclass `ModelTesterMixin, GenerationTesterMixin`. This PR enables common tests for GPTJ and fixes a few things along the way.
I've run the slow tests manually and verified that they pass.
Thanks a lot, @sgugger for spotting this!
cc @StellaAthena
Fixes #14107
| 10-28-2021 12:18:06 | 10-28-2021 12:18:06 | @patil-suraj For the specific issue of `resize_token_embeddings` on gptj (#14107), I got it to work by changing two methods in modeling_gptj.py below. I'm not sure this is right because the model needs to train 2 epochs to get a good result whereas I almost always need just 1 epoch with other model types (gpt-2, gpt-neo, etc.). Will this PR cover the `resize_token_embeddings` issue? It doesn't seem to make changes to these methods.
```
def get_output_embeddings(self):
return self.lm_head
def set_output_embeddings(self, new_embeddings):
self.lm_head = new_embeddings
```<|||||>> @patil-suraj For the specific issue of `resize_token_embeddings` on gptj (#14107), I got it to work by changing two methods in modeling_gptj.py below. I'm not sure this is right because the model needs to train 2 epochs to get a good result whereas I almost always need just 1 epoch with other model types (gpt-2, gpt-neo, etc.). Will this PR cover the `resize_token_embeddings` issue? It doesn't seem to make changes to these methods.
>
> ```
> def get_output_embeddings(self):
> return self.lm_head
>
> def set_output_embeddings(self, new_embeddings):
> self.lm_head = new_embeddings
> ```
@patil-suraj Nevermind! It looks like you caught this and also made additional changes!<|||||>Looks good to me too! |
transformers | 14,189 | closed | T5-v1.1 loss go to nan when fp16 training was enabled | ## Environment info
I test in two different environments. One is my native env, one is nvidia container pytorch_21.09.
For more details, please refer https://docs.nvidia.com/deeplearning/frameworks/pytorch-release-notes/rel_21-09.html#rel_21-09
- `transformers` version: 4.11.3
- Platform: Arch Linux 5.14.14-arch1-1 (Ubuntu 20.04)
- Python version: 3.9.7 (3.8)
- PyTorch version (GPU?): 1.9.1 (1.10a)
- Tensorflow version (GPU?): 2.6.0 (did not use)
- Using GPU in script?: 2080Ti (V100)
- Using distributed or parallel set-up in script?: using fp16
### Who can help
@patrickvonplaten, @patil-suraj
## Information
Model I am using: `t5-v1.1` (small, base). With mixed precision, the loss goes to `nan`.
The problem arises when using:
* [x] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
**The bug can be reproduced with run_summarization & run_summarization_no_trainer.py**
## To reproduce
Steps to reproduce the behavior:
1.
Both of the following scripts can reproduce the issue:
```bash
python run_summarization.py \
    --fp16 --fp16_backend apex (both native amp & apex face the same issue)\
--model_name_or_path google/t5-v1_1-base \
--do_train \
--do_eval \
--dataset_name cnn_dailymail \
--dataset_config "3.0.0" \
--source_prefix "summarize: " \
--output_dir /tmp/tst-summarization \
--per_device_train_batch_size=2 \
--per_device_eval_batch_size=2 \
--overwrite_output_dir \
```
```bash
accelerate launch --fp16 run_summarization_no_trainer.py \
--model_name_or_path google/t5-v1_1-base \
--dataset_name cnn_dailymail \
--dataset_config "3.0.0" \
--source_prefix "summarize: " \
--per_device_train_batch_size=2 \
--output_dir ~/tmp/tst-summarization \
```
2. If you print the loss step by step, you will find that the loss goes to `nan`.
(For Trainer, I print the loss right before `trainer.training_step` returns.)
## Possible Reason
In https://github.com/huggingface/transformers/pull/10496, models clamp inf values only when `hidden_states.dtype == torch.float16.`
However, even when fp16 training is enabled, the `hidden_states.dtype is still torch.float32`. This might be due to the layer_norm operation.
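For reference, the clamping in question looks roughly like this (paraphrased from `modeling_t5.py`; exact placement and constants may differ slightly):
```python
import torch

def clamp_inf_values(hidden_states: torch.Tensor) -> torch.Tensor:
    # only active for fp16 activations, which is why it never triggers here:
    # as observed above, the hidden states stay in torch.float32 under amp
    if hidden_states.dtype == torch.float16 and torch.isinf(hidden_states).any():
        clamp_value = torch.finfo(hidden_states.dtype).max - 1000
        hidden_states = torch.clamp(hidden_states, min=-clamp_value, max=clamp_value)
    return hidden_states
```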
Here is some more information that might be useful to you.
When using BART and T5 with fp16 training, the `hidden_states.dtype` is still `torch.float32`; however, their loss doesn't go to `nan`.
| 10-28-2021 10:26:33 | 10-28-2021 10:26:33 | Linked PR https://github.com/huggingface/transformers/pull/10956<|||||>To be honest, I think we should just not do T5 training in fp16...cc @stas00
Related issues https://github.com/huggingface/transformers/issues/10830<|||||> As suggested by Lysandre, @Liangtaiwan please check if this PR helps: https://github.com/huggingface/transformers/pull/10956
<|||||>@stas00 @patrickvonplaten @LysandreJik
PR #10956 does prevent T5 from going nan and achieving a comparable result in fp32.
Close the issue and move to PR #10956 to discuss. <|||||>I am working with @HaokunLiu on a project that uses T5 and he found a great solution to this problem. The idea is to scale down the weights of the model in a specific pattern that maintains the relationship between the weights. I am not sure if this transformation is loss-preserving, but `logits.argmax` should remain the same.
Here's his script
```
import torch
from transformers import T5ForConditionalGeneration
emb_scaling = 1 / 32.0
att_v_scaling = 1 / 4.0
att_o_scaling = 1 / 8.0
ff_wi_scaling = 1 / 4.0
ff_wo_scaling = 1 / 4.0
ff_ln_scaling = 1 / 2.0
assert att_v_scaling * att_o_scaling == emb_scaling
assert ff_wi_scaling * ff_wo_scaling * ff_ln_scaling == emb_scaling
new_model = T5ForConditionalGeneration.from_pretrained('t5-base')
with torch.no_grad():
new_model.shared.weight *= emb_scaling
for unit in new_model.encoder.block:
unit.layer[0].SelfAttention.v.weight *= att_v_scaling
unit.layer[0].SelfAttention.o.weight *= att_o_scaling
unit.layer[1].DenseReluDense.wi.weight *= ff_wi_scaling
unit.layer[1].DenseReluDense.wo.weight *= ff_wo_scaling
unit.layer[1].layer_norm.weight *= ff_ln_scaling
for unit in new_model.decoder.block:
unit.layer[0].SelfAttention.v.weight *= att_v_scaling
unit.layer[0].SelfAttention.o.weight *= att_o_scaling
unit.layer[1].EncDecAttention.v.weight *= att_v_scaling
unit.layer[1].EncDecAttention.o.weight *= att_o_scaling
unit.layer[2].DenseReluDense.wi.weight *= ff_wi_scaling
unit.layer[2].DenseReluDense.wo.weight *= ff_wo_scaling
unit.layer[2].layer_norm.weight *= ff_ln_scaling
new_model.lm_scale_modifier /= emb_scaling
new_model.save_pretrained('t5-base-fp16-fixed')
```
in `__init__`
https://github.com/huggingface/transformers/blob/84ea427f460ffc8d2ddc08a341ccda076c24fc1f/src/transformers/models/t5/modeling_t5.py#L1461
you need to add:
```
self.lm_scale_modifier = nn.Parameter(torch.ones(config.d_model))
```
then in the `forward`
https://github.com/huggingface/transformers/blob/84ea427f460ffc8d2ddc08a341ccda076c24fc1f/src/transformers/models/t5/modeling_t5.py#L1640
function you need the following lines here
```
sequence_output = sequence_output * self.lm_scale_modifier # new code
lm_logits = self.lm_head(sequence_output) # existing code
```<|||||>@ibeltagy @HaokunLiu
Interesting, it seems we have similar ideas!
My approach is slightly different, but seems to be working as well. Where yours scales down all the weights, mine aims to change the weights as little as possible.
The weights to change are found using a search pattern (going through the encoder layers, then decoder layers), by scaling down the weights until it is able to infer and train without NaN. I have found changing the weights of the FFN in the last few encoder layers (about 3%-5% of the total model weights) is sufficient, and we can just scale it down by a factor of 2.
At least on the model's existing pre-trained tasks, it still seems to be more or less still working, so I'm taking that as a good sign. I have also fine-tuned on my own task without NaN so far. (Tested t5-large and t5-3B)
Example: https://github.com/tlkh/t5-fp16-surgery/blob/main/t5-3B.ipynb
GitHub repo: https://github.com/tlkh/t5-fp16-surgery
<|||||>> I am not sure if this transformation is loss-preserving
It is loss preserving. The last line `new_model.lm_scale_modifier /= emb_scaling` scales up the hidden states after the last layer (before `lm_head`) to counter the scaling down of the weights, thus keeping the transformation loss-preserving. This requires a small change in the T5 code to support `lm_scale_modifier`.<|||||>@ibeltagy Thank you so much for sharing this!
Did you by any chance check if those changes + applying fp16 while finetuning on a downstream task yield similar results as finetuning the vanilla model w/o fp16? |
transformers | 14,188 | closed | [Flax] Add Flax implementation of `BlenderbotSmallModel` | # 🚀 Feature request
Add Flax implementation of `BlenderbotSmall` models.
## Motivation
Improve the level of Flax/Jax support.
## Your contribution
I'll be happy to do it once the problems in #13633 will be resolved and the PR will be merged.
@patrickvonplaten | 10-28-2021 09:57:21 | 10-28-2021 09:57:21 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>waiting for #13633 |
transformers | 14,187 | closed | ValueError when converting dialogpt to onnx format | ## Environment info
- `transformers` version: 4.9.2
- Platform: Darwin-20.5.0-x86_64-i386-64bit
- Python version: 3.7.11
- PyTorch version (GPU?): 1.9.0 (False)
- Tensorflow version (GPU?): 2.2.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help
@patrickvonplaten, @LysandreJik
## Information
Model I am using ([dialogpt](https://huggingface.co/microsoft/DialoGPT-medium)):
When converting DialoGPT to ONNX, I get a ValueError:
`ValueError: The type of axis index is expected to be an integer`
## To reproduce
```
from transformers.convert_graph_to_onnx import convert
from pathlib import Path
convert(framework="pt", model="DialoGPT-medium/", output=Path("onnx/dilogpt.onnx"), opset=11)
```
| 10-28-2021 09:30:49 | 10-28-2021 09:30:49 | Gently pinging @michaelbenayoun here<|||||>Gently pinging @lewtun here<|||||>Thanks for the ping! Will take a look :)<|||||>Hey @qiuxia-alone, thank you for raising this issue! The ONNX export API was overhauled in `transformers` v4.9.0 ([link](https://github.com/huggingface/transformers/releases/tag/v4.9.0)), and since then the recommended way to export models is via the `transformers.onnx` package.
For example, one can export the `DialoGPT-medium` checkpoint as follows:
```bash
python -m transformers.onnx --model=microsoft/DialoGPT-medium onnx/ --opset 11
```
Does running the above command solve your issue? You can find more information in the docs of your `transformers` version [here](https://huggingface.co/transformers/v4.9.2/serialization.html#exporting-transformers-models).<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 14,186 | closed | Replace assertions with RuntimeError exceptions | Replaces the assertions in integrations.py with RuntimeError exceptions.
Contributes towards fixing issue #12789
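Illustrative example of the pattern applied (not the actual diff; the function name and message are made up for the sketch):
```python
import os

# before: assert os.path.isdir(checkpoint_dir), "Checkpoint directory does not exist"
def ensure_checkpoint_dir(checkpoint_dir: str) -> None:
    if not os.path.isdir(checkpoint_dir):
        raise RuntimeError(f"Checkpoint directory {checkpoint_dir} does not exist")
```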
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
| 10-28-2021 05:56:32 | 10-28-2021 05:56:32 | |
transformers | 14,185 | closed | Can we save tokenized datasets? | It takes a lot of time to tokenize my dataset, is there a way to save it and load it?
Let's say I'm using the IMDB toy dataset. How do I save the `inputs` object?
```
from datasets import load_dataset
raw_datasets = load_dataset("imdb")
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
inputs = tokenizer(sentences, padding="max_length", truncation=True)
| 10-28-2021 05:44:32 | 10-28-2021 05:44:32 | Sure, that's possible.
You can tokenize your entire dataset using the tokenizer, then save it to disk using the `save_to_disk` method as explained in the [docs](https://huggingface.co/docs/datasets/package_reference/main_classes.html?highlight=save_to_disk#datasets.Dataset.save_to_disk).
Small example:
```
from datasets import load_dataset
from transformers import AutoTokenizer
datasets = load_dataset("imdb")
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
encoded_datasets = datasets.map(lambda examples: tokenizer(examples['text']), batched=True)
encoded_datasets.save_to_disk('.')
```
You can later load it back in using the `load_from_disk` method as explained [here](https://huggingface.co/docs/datasets/package_reference/main_classes.html?highlight=save_to_disk#datasets.Dataset.load_from_disk).<|||||>> Sure, that's possible.
>
> You can tokenize your entire dataset using the tokenizer, then save it to disk using the `save_to_disk` method as explained in the [docs](https://huggingface.co/docs/datasets/package_reference/main_classes.html?highlight=save_to_disk#datasets.Dataset.save_to_disk).
>
> Small example:
>
> ```
> from datasets import load_dataset
> from transformers import AutoTokenizer
>
> datasets = load_dataset("imdb")
> tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
>
> encoded_datasets = datasets.map(lambda examples: tokenizer(examples['text']), batched=True)
> encoded_datasets.save_to_disk('.')
> ```
>
> You can later load it back in using the `load_from_disk` method as explained [here](https://huggingface.co/docs/datasets/package_reference/main_classes.html?highlight=save_to_disk#datasets.Dataset.load_from_disk).
Thank you Niels! My problem is a bit more complex: I created a custom dataset using the method here: https://github.com/huggingface/notebooks/blob/master/transformers_doc/custom_datasets.ipynb, but the imdbDataset object created doesn't have a `save_to_disk` method.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
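For the custom `torch.utils.data.Dataset` case asked about above, a minimal sketch (it assumes the `IMDbDataset(encodings, labels)` wrapper and the `train_encodings`/`train_labels` variables from that tutorial; the simplest route is to persist the tokenizer output itself rather than the wrapper object):
```python
import torch

# after tokenizing once
torch.save({"encodings": train_encodings, "labels": train_labels}, "train_tokenized.pt")

# later, in a new session
saved = torch.load("train_tokenized.pt")
train_dataset = IMDbDataset(saved["encodings"], saved["labels"])

# alternatively, build a datasets.Dataset so save_to_disk/load_from_disk work:
# from datasets import Dataset
# Dataset.from_dict(dict(train_encodings)).save_to_disk("train_tokenized")
```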
transformers | 14,184 | closed | Argument 'filter_value' for the _get_logits_warper function | # 🚀 Feature request
In generation_utils, when sampling (i.e. `is_sample_gen_mode = True`), add the argument `"filter_value"` to `_get_logits_warper` function call.
[https://github.com/huggingface/transformers/blob/3187228206cce052c5df0a8643fe85d2fd50e6a0/src/transformers/generation_utils.py#L1004](https://github.com/huggingface/transformers/blob/3187228206cce052c5df0a8643fe85d2fd50e6a0/src/transformers/generation_utils.py#L1004
)
## Motivation
In all the logits processors, default `filter_value` is `-float("Inf")`. This is very confusing when working with masks, since `-float("Inf") * 0` is NaN, this would results to a loss of Nan.
Use case:
Sampling for RL and calling:
```
out = decoder.generate(
input_ids=torch.ones((batch_size, 1), dtype=torch.long).cuda() * self.bos_token_id,
max_length=100,
num_beams=1,
num_return_sequences=1,
output_scores=True,
return_dict_in_generate=True,
do_sample=True,
)
```
to return scores (logits) and the sampled ids.
but sample() overrides `next_tokens` with `pad_token_id` once `eos_token_id` is encountered.
[https://github.com/huggingface/transformers/blob/3187228206cce052c5df0a8643fe85d2fd50e6a0/src/transformers/generation_utils.py#L1333](https://github.com/huggingface/transformers/blob/3187228206cce052c5df0a8643fe85d2fd50e6a0/src/transformers/generation_utils.py#L1333)
Therefore, if you `torch.gather` the logits according the sampled ids (to get the sampled logits), you might gather a `-float('inf')` (if the logit processor did put `pad_token_id` to `-float('inf')`).
```
samples_ids = out.sequences[:, 1:]
logits = torch.stack(out.scores, dim=1)
sampled_logits = logits.gather(2, samples_ids.unsqueeze(-1))
```
Even if you mask your sampled_logits, you still get the NaN.
| 10-28-2021 00:24:44 | 10-28-2021 00:24:44 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>@jbdel - I'm not sure our `generate()` method would work well with RL to be honest. Also, do you have an idea what we could improve to fix the problem? Could you maybe provide a minimum reproducible code snippet that I could run to better understand the problem?<|||||>hey @patrickvonplaten, could you expand on why you think generate() would not work well (provided one removes the @torch.no_grad() decorator) ?
To my understanding, If you do sampling and then get the logits of the sampled ids then you can use this logits in a loss and back propagate the gradient (i'm referring to this [paper](https://arxiv.org/abs/1612.00563)).
Should I open a new thread for this question ?
I don't really have a minimal code because this only occurs in special condition being:
- you use `generate()` using `top_k` argument
- `TopKLogitsWarper` puts the logprobs `-float("Inf")` for the non top-k before multinomial sampling
- For a finished sequence in the batch, `generate()` does override the sampled input_id with pad_token_id
- If `TopKLogitsWarper` assigned the logprob `-float(inf)` to `pad_token_id` in these situation, then when you get the logits of your sampled sentences, you can gather some `-float(inf)`, that breaks the loss computation.
My idea would be to make it possible to change the "filter_value" of TopKLogitsWarper
[https://github.com/huggingface/transformers/blob/master/src/transformers/generation_logits_process.py#L229](https://github.com/huggingface/transformers/blob/master/src/transformers/generation_logits_process.py#L229)<|||||>Hey @jbdel,
Thanks a lot for the clarification! So in order for your code to work you would just need a way to costumize the `"filter_value"` ? So if we would add a `"top_k_filter_value"` argument to `generate()` your code would work? I think in this case we can find definitely some kind of general solution.<|||||>Its tricky.
Using a `filter_value` of -inf is the right way to proceed (as it is right now) because then you are sure non-topk will not be sampled by the multinoulli. So that shouldn't be messed with.
The filtering should rather be done on the logits of the sampled sequences, making sure they do not contain a -inf logprob (as it sometimes happens my scenario). I think replacing -inf with 0 might be a good idea...
Sorry to bounce back also on my question, but is there any specific reason you think `generate` should not be used for RL or is it just a general feeling.
JB
<|||||>Hey @jbdel,
We can definitely add a new logits processor class that replaces `-inf` with `0`. It should actually be enough to adapt this class https://github.com/huggingface/transformers/blob/c1125dc2ba9f3c383bf860ac9fcd67268385ad8d/src/transformers/generation_logits_process.py#L608 so that the user could decide to replace `inf` with 0 (or -inf) instead.
Feel free to play around with that to see whether it would help your case.
The reason why I said `generate()` "should" not be used for RL was simply that we've never tested it with RL, but if it works for you even better!<|||||>Hey Patrick,
This is actually great. I was indeed doing :
`logits[logits == -float("Inf")] = 0.`
afterward, so if it can be embedded in a processor then its great :)
I didn't fully validate my experiments but just ran one epoch of SCST training optimizing the ROUGE score using `generate()` and did observe an increase of 6 points. So the function should be usable for RL provided you remove the `no_grad` decorator.
JB<|||||>Awesome - great to hear |
transformers | 14,183 | closed | Pipeline feature extraction: tensor size mismatch | ## Environment info
I'm using pipelines for the first time with feature extraction, it seems to work fine for my toy samples that I used to debug the code. However, when I started running with some real data, I got the following stack trace:
```
File "../../models//text_embeddings_feature_extraction.py", line 215, in main
outputs = feature_extractor(predict_text, padding=True, truncation=TruncationStrategy.ONLY_FIRST)
File "/dccstor/redrug_ier/envs/mimic/lib/python3.8/site-packages/transformers/pipelines/feature_extraction.py", line 73, in __call__
return super().__call__(*args, **kwargs)
File "/dccstor/redrug_ier/envs/mimic/lib/python3.8/site-packages/transformers/pipelines/base.py", line 908, in __call__
outputs = [output for output in final_iterator]
File "/dccstor/redrug_ier/envs/mimic/lib/python3.8/site-packages/transformers/pipelines/base.py", line 908, in <listcomp>
outputs = [output for output in final_iterator]
File "/dccstor/redrug_ier/envs/mimic/lib/python3.8/site-packages/transformers/pipelines/base.py", line 631, in __next__
item = next(self.iterator)
File "/dccstor/redrug_ier/envs/mimic/lib/python3.8/site-packages/transformers/pipelines/base.py", line 632, in __next__
processed = self.infer(item, **self.params)
File "/dccstor/redrug_ier/envs/mimic/lib/python3.8/site-packages/transformers/pipelines/base.py", line 871, in forward
model_outputs = self._forward(model_inputs, **forward_params)
File "/dccstor/redrug_ier/envs/mimic/lib/python3.8/site-packages/transformers/pipelines/feature_extraction.py", line 53, in _forward
model_outputs = self.model(**model_inputs)
File "/dccstor/redrug_ier/envs/mimic/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/dccstor/redrug_ier/envs/mimic/lib/python3.8/site-packages/transformers/models/bert/modeling_bert.py", line 988, in forward
embedding_output = self.embeddings(
File "/dccstor/redrug_ier/envs/mimic/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/dccstor/redrug_ier/envs/mimic/lib/python3.8/site-packages/transformers/models/bert/modeling_bert.py", line 221, in forward
embeddings += position_embeddings
RuntimeError: The size of tensor a (578) must match the size of tensor b (512) at non-singleton dimension 1
```
To me, it looks like a truncation issue. The embedding is longer than the max 512 (I'm using a BERT model). I have long samples in my data. I tried both `truncation = True` and `truncation = TruncationStrategy.ONLY_FIRST` as in the code below, both ended up with the same error. Not sure why this happens
I'm tagging tokenizers and pipelines...
- Tokenizers: @LysandreJik
- Pipelines: @Narsil
Thanks in advance!
===========
Steps to reproduce the behavior:
Here it is pretty much all my code:
```
# Load pretrained model and tokenizer
#
# In distributed training, the .from_pretrained methods guarantee that only one local process can concurrently
# download model & vocab.
# num_labels hard-coded; shouldn't be used
num_labels = 2
config = AutoConfig.from_pretrained(
model_args.config_name if model_args.config_name else model_args.model_name_or_path,
num_labels=num_labels,
finetuning_task=None, # no task
cache_dir=model_args.cache_dir,
revision=model_args.model_revision,
use_auth_token=True if model_args.use_auth_token else None,
output_hidden_states=True
)
tokenizer = AutoTokenizer.from_pretrained(
model_args.tokenizer_name if model_args.tokenizer_name else model_args.model_name_or_path,
cache_dir=model_args.cache_dir,
use_fast=model_args.use_fast_tokenizer,
revision=model_args.model_revision,
use_auth_token=True if model_args.use_auth_token else None,
)
predict_dataset = raw_datasets["test"]
if data_args.max_predict_samples is not None:
predict_dataset = predict_dataset.select(range(data_args.max_predict_samples))
# extract only the text
predict_text = []
for item in predict_dataset:
predict_text.append(item['text'])
# Log a few random samples from the testing set:
for index in random.sample(range(len(predict_dataset)), 3):
logger.info(f"Sample {index} of the test set: {predict_dataset[index]}.")
logger.info("*** Generate embeddings ***")
# use pipelines and feature extraction
feature_extractor = pipeline(
task="feature-extraction", model=model_args.model_name_or_path, config = config, tokenizer = tokenizer, framework="pt"
)
outputs = feature_extractor(predict_text, padding=True, truncation=TruncationStrategy.ONLY_FIRST)
``` | 10-27-2021 22:38:42 | 10-27-2021 22:38:42 | I'm pretty sure that's the problem. I managed to reproduce with something like this:
```
# debug
predict_text = "this " * 1000
print(predict_text)
outputs = feature_extractor(predict_text, padding="longest", truncation=TruncationStrategy.ONLY_FIRST)
```
I get `RuntimeError: The size of tensor a (1002) must match the size of tensor b (512) at non-singleton dimension 1`<|||||>On a related note, padding also seems to be ignored. I tried this:
```
# debug
predict_text = "this " * 20
print(predict_text)
outputs = feature_extractor(predict_text, padding=True, truncation=True)
print(outputs)
print(len(outputs[0]))
```
In my understanding, this should output a len of 512 (max len in BERT's case), but it gets 22... <|||||>Hello @ioana-blue, thank you for the issue. Could you share your environment details including the transformers version? You can do so by running `transformers-cli env` in your local environment.
Thanks!<|||||>Sure!
- `transformers` version: 4.11.3
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.9.7<|||||>Meanwhile, I tried adding the params in the tokenizer, also in the pipeline, they seem to be ignored everywhere which is rather strange (both padding and truncation).<|||||>Is it true that any special params are dropped for feature extraction because of this line?
https://github.com/huggingface/transformers/blob/c63fcabfe92229eac983ce743f4e9fce863c84dd/src/transformers/pipelines/feature_extraction.py#L44
Even if this is the case, I don't understand why the `truncation=True` is not sticky from the initialization of the tokenizer. <|||||>I'm pretty sure the method above is buggy and there are no tests to catch this bug. There is a test for summarization here:
https://github.com/huggingface/transformers/blob/879fe8fa75e662ffd85a567a98522bc9cffe0c6c/tests/test_pipelines_summarization.py#L39
And the corresponding method can be found here:
https://github.com/huggingface/transformers/blob/026866df92afe40cdf928839864111015a62d3b5/src/transformers/pipelines/text2text_generation.py#L54
Note the truncation assignment:
https://github.com/huggingface/transformers/blob/026866df92afe40cdf928839864111015a62d3b5/src/transformers/pipelines/text2text_generation.py#L65<|||||>I also think the padding parameter should be copied as part of preprocessing.
If you agree with my assessment, I can try to submit a PR with a few changes: change the _sanitize_parameters in feature extraction and also add a test similar to the summarization text. I haven't submitted a PR so I may need help to follow whatever protocol you have in place. Let me know. <|||||>Am I the only one using the feature extraction pipeline? :) Or the only one with long samples? :) I got really happy when I found out about pipelines, it's a neat concept. <|||||>I also think the preprocessing method needs to change:
https://github.com/huggingface/transformers/blob/c63fcabfe92229eac983ce743f4e9fce863c84dd/src/transformers/pipelines/feature_extraction.py#L47
It needs to take in truncation and padding. <|||||>Hm, we can add the truncation, but it feels a bit odd to just ignore the rest of the sentence, no ?
We already have some pipelines (`question-answering` and `zero-shot-classification`) that are able to chunk their inputs into sub elements for processing in chunks, enabling to process large inputs in a seemless fashion.
Wouldn't that be a better solution ?
The only drawback I can come up with, it the interaction with `aggregation_strategy` where we are attempting to handle word boundaries, so the chunking should aim for "word" boundaries to enable `aggregation_strategy` to work correctly. That might be a bit delicate if the long input does not contain words (imagine protein inputs/models).
<|||||>@Narsil I don't think I understand your comment. LMs are trained with a max length size and they can't support anything bigger. The code will crash as it crashed above. I don't see any other solution than truncating the input to the max of the supported size by the LM. How else is it going to work?<|||||>@Narsil Sure, in the long run, we could split longer sentences in chunks of max seq length, process each chunk, concatenate later. However, this is more involved. Right now, I'd like a version of the code that truncates and doesn't crash. The way it is right now, I can't run my code. I could have a workaround in my code to truncate the text before it's sent to the pipeline, but that's suboptimal because it's hard to estimate how much to truncate since the tokenizer doesn't split at word level,, it works on BPEs (for BERT, IIRC). <|||||>I think it's manageable to do the "better" version earlier.
If you need a quick hack don't hesitate to override the pipeline
```
class MyPipeline(TokenClassificationPipeline):
def preprocess(....)
.....
self.tokenizer(...., truncation=True)
......
```
I do understand this is a hacky patch but should work for you in the mean time if you're ok with it.
The problem with adding that to master, is that it will become a supported option, and we will have to support that for the foreseeable future. It's easier to add the better option first (as it shouldn't be much longer)<|||||>Thanks for suggesting this, I think it's a "clean" hack :)
I personally still think that it would be nice if the truncation would be supported. For whoever doesn't want truncation, sure, go ahead and split in chunks, etc. That's why there is a `truncation` parameter to begin with. There are plenty of cases where one wants just an embedding from the last hidden layer, usually the embedding corresponding to the CLS token. Or even if one is interested in longer embeddings, you usually use these in some downstream task, I see benefit for them to be limited in size and of the same size.
What use case do you have in mind where you think you'd need variable-size, potentially long embeddings?<|||||>Bottom line the code crashes right now as it is for longer sequences, and it needs to be fixed + test cases in the test suite. Locally, I'll do the hack you suggested, that should work for now, thank you!<|||||>Crashing is actually ok, and it's expected, it is the standard behavior of most pipelines when the input cannot be handled in a sane way by models (for instance text-generation).
Sequence length IS a limit of most models, and some pipelines just can't sanely recover from it, it's a real limitation that has to be taken care of.
For `token-classification`, we probably can get away with chunking properly and stop failing (probably will come with it's own set of warnings though).
But, maybe I misunderstood your use case, Do you just want the `embeddings` of the last layer, only for the CLS token, is that right ? What's the downstream use ?
There is `feature-extraction` which should enable you to not do any post-processing and recover the raw embeddings.
If there is a "legitimate" use for the truncation, adding it and supporting it is more than desired, I simply thought it was a workaround for the crashing, not the real goal.<|||||>I am using `feature-extraction` and that's the one crashing. Did you look at my code above in the original posting? @Narsil<|||||>Also all the problems I pointed out with code lines and fixes I was suggesting are in the `feature-extraction` class.<|||||>Brain got confused ! Thought it was `token-classification` for some reason, on `feature-extraction` for sure truncation makes sense !<|||||>@Narsil Now you're making sense to me :) I was surprised that you said it doesn't make sense... Now that we agree, do you have time to fix it, or shall I go ahead and work on it and you can help me with the PR?<|||||>Your hack is still useful and I may try that first to unblock myself. I really want to get these embeddings and while I got them "manually", it's so much nicer to use the pipelines!<|||||>@Narsil I'm not sure if I can use your hack. IIUC, to create a pipeline, you have to provide a task name and the task names are predefined, mapped to predefined pipelines. I don't think providing your own pipeline is supported, right? So in theory, it seems like a good idea, in practice, I don't think it's supported. <|||||>You can call pipelines on their own, it's a bit less convenient,
```python
pipe = MyPipeline(model=AutoModel.from_pretrained(...), tokenizer=AutoTokenizer.from_pretrained(....)
```
There are other ways to hack this, it's python after all.<|||||>Not trying to reopen this and I know this fix should roll out soon, but if you end up on this issue and you can want an environment workaround, 4.10.3 was the last version where truncation passed into a feature extraction pipeline just works.<|||||>I think the test that was added should have a longer string. I think right now it's under 512 which is usually the max length and I don't think it's testing the truncation.
On my local test, I had to pass in a parameter to the tokenizer for the `model_max_length` - for reasons I didn't understand, it didn't get it from the config file from the model. |
transformers | 14,182 | closed | Add BigBird/BigBird Pegasus to models exportable with ONNX | This PR adds BigBird and BigBird Pegasus to models exportable with ONNX and also adds them in test_onnx_v2.py
| 10-27-2021 20:59:53 | 10-27-2021 20:59:53 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 14,181 | closed | [modeling_utils] respect original dtype in _get_resized_lm_head | Currently `resize_token_embeddings`'s `_get_resized_lm_head` doesn't preserve the original dtype when creating a new param. This PR fixes it.
The fix is identical to:
https://github.com/huggingface/transformers/blob/232822f36d49598e68e152a9ca0a6d90be6f54b5/src/transformers/modeling_utils.py#L792-L794
The problem was detected by @VictorSanh who used https://huggingface.co/google/t5-v1_1-small w/ deepspeed z3 - this model is different from t5 as it has `"tie_word_embeddings": false`, so it goes through a different code path.
I will have a deepspeed test covering this path once a new tiny model is added to https://huggingface.co/hf-internal-testing/ to cover this variation of t5 models.
@sgugger, @LysandreJik | 10-27-2021 20:25:58 | 10-27-2021 20:25:58 | |
transformers | 14,180 | closed | Generalize problem_type to all sequence classification models | # What does this PR do?
This PR makes each sequence classification model use the `problem_type` attribute of the config and infer a default coming from the labels when it's not set.
Deberta implemented its own loss, apparently dealing with one-hot encoded labels, which we keep when `problem_type` is None for backward compatibility. | 10-27-2021 15:39:51 | 10-27-2021 15:39:51 | Relevant to #13370 |
transformers | 14,179 | closed | one of the variables needed for gradient computation has been modified by an inplace operation | It occurred when I ran the forward pass of a BERT model twice and then called backward on the loss.
```
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.LongTensor [1, 20]] is at version 4; expected version 3 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck!
File "/share/xxx/env/anaconda3/envs/pytorch/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/share/xxx/env/anaconda3/envs/pytorch/lib/python3.7/site-packages/torch/nn/parallel/distributed.py", line 511, in forward
output = self.module(*inputs[0], **kwargs[0])
File "/share/xxx/env/anaconda3/envs/pytorch/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/mnt/xxx/xxx/pytorch_train/image_clas/models/text_relevance_model.py", line 100, in forward
bert_output = self.bert_model(**bert_input)
File "/share/xxx/env/anaconda3/envs/pytorch/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/share/xxx/env/anaconda3/envs/pytorch/lib/python3.7/site-packages/transformers/models/bert/modeling_bert.py", line 989, in forward
past_key_values_length=past_key_values_length,
File "/share/xxx/env/anaconda3/envs/pytorch/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/share/xxx/env/anaconda3/envs/pytorch/lib/python3.7/site-packages/transformers/models/bert/modeling_bert.py", line 220, in forward
position_embeddings = self.position_embeddings(position_ids)
File "/share/xxx/env/anaconda3/envs/pytorch/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
```
it related to https://github.com/huggingface/transformers/issues/11941, how can i solve it ? | 10-27-2021 15:35:19 | 10-27-2021 15:35:19 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 14,178 | closed | [Pipelines] Fix ASR model types check | # What does this PR do?
This changes `list` to `dict` in the ASR pipeline's `check_model_type()`.
Previously, a list was passed to the checker, so the model name extraction was never preformed [here](https://github.com/huggingface/transformers/blob/8ddbfe975264a94f124684a138a2a5ca89a2bd0d/src/transformers/pipelines/base.py#L810) and an error was raised:
```
The model 'SEWDForCTC' is not supported for automatic-speech-recognition.
```
cc @Narsil | 10-27-2021 14:00:49 | 10-27-2021 14:00:49 | Was there any real failure or just a warning ?
If warning it's fine, if failure we need a tests (model type checking should not be a hard error anyway)<|||||>@Narsil just a warning in the logs, the model was loading fine |
transformers | 14,177 | closed | Add more missing models to models/__init__.py | # What does this PR do?
After #14151, I just checked if there are other missing models - and added them in this PR.
There will be some issues showing up that were previously undetected.
## Who can review?
@sgugger @LysandreJik | 10-27-2021 13:26:50 | 10-27-2021 13:26:50 | Could you fix the issues it generates? You should see them locally with `make quality`<|||||>> Could you fix the issues it generates? You should see them locally with `make quality`
Sure, will do later. <|||||>The issues are fixed - I leave Hugging Face to double check with the original model authors if the reason `# Building part of bigger (tested) model.` are valid here.
For `UniSpeechSatForPreTraining`, it needs a reason for it to be in
https://github.com/huggingface/transformers/blob/96d1b3c76224e0f923c6e8c8257df95f8156b81a/utils/check_repo.py#L75-L80
In general, it would be great if this situation can be avoided (i.e. check if the models in `models/__init__.py` corresponds to `src/models/`), so `make quality` will fail if the necessary tests are not provided or exceptions not specified in special lists.
<|||||>I add a new check `check_model_list` in `check_repo.py` in order to avoid the missing models.
For example, if `vision_encoder_decoder` is not included in `src/transformers/models/__init__.py`, the test would fail with the following message
```
Checking all models are included.
Traceback (most recent call last):
File "C:\Users\33611\Desktop\Projects\transformers-dev-2\transformers\utils\check_repo.py", line 600, in <module>
check_repo_quality()
File "C:\Users\33611\Desktop\Projects\transformers-dev-2\transformers\utils\check_repo.py", line 587, in check_repo_quality
check_model_list()
File "C:\Users\33611\Desktop\Projects\transformers-dev-2\transformers\utils\check_repo.py", line 168, in check_model_list
raise Exception(
Exception: The following models should be included in src/transformers\models/__init__.py: vision_encoder_decoder.
make[1]: *** [Makefile:40: extra_quality_checks] Error 1
make[1]: Leaving directory '/c/Users/33611/Desktop/Projects/transformers-dev-2/transformers'
make: *** [Makefile:50: quality] Error 2
```<|||||>> As an aside, is there a reason `bort` is excluded?
Glad it will be helpful.
`bort` only contains a checkpoint conversion script.
If I include it, then we also need to have `bort` included in `models/__init__.py` (otherwise the test will fail).
```
from . import (
albert,
...
bort
...
```
but this gives a warning `unused import statement from...` in PyCharm IDE. (I don't have a clear idea about why).
I can add it though.
**Update**
Well, I just realized that it is fine to remove `_ignore_models = ["bort"]`, because of having a condition `"__init__.py" in os.listdir(model_dir)`:
https://github.com/huggingface/transformers/blob/f55ee867107d2b69f2be0b06b1b6c281acb2bfe7/utils/check_repo.py#L160-L161 |
transformers | 14,176 | closed | IBert Problems of hugging face pretrained | ### Pretrained the IBert
I want to test IBERT's, and I have done exactly what is said in https://huggingface.co/kssteven/ibert-roberta-base. For the quantization part, when I set quant_mode to true and run the evaluation again, I get a much low accuracy model. What am I doing wrong? | 10-27-2021 12:47:00 | 10-27-2021 12:47:00 | cc @kssteven418 <|||||>{
"_name_or_path": "kssteven/ibert-roberta-base",
"architectures": [
"IBertForSequenceClassification"
],
"attention_probs_dropout_prob": 0.1,
"bos_token_id": 0,
"eos_token_id": 2,
"finetuning_task": "mrpc",
"force_dequant": "none",
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"initializer_range": 0.02,
"intermediate_size": 3072,
"layer_norm_eps": 1e-05,
"max_position_embeddings": 514,
"model_type": "ibert",
"num_attention_heads": 12,
"num_hidden_layers": 12,
"pad_token_id": 1,
"position_embedding_type": "absolute",
"quant_mode": true,
"tokenizer_class": "RobertaTokenizer",
"transformers_version": "4.4.0.dev0",
"type_vocab_size": 1,
"vocab_size": 50265
}
==================================
I set the quant_mode: true and performed the "Integer-only finetuning".
The result shows:
SequenceClassifierOutput(loss=None, logits=tensor([[ 0.0006, -0.0006]], grad_fn=<AddmmBackward0>), hidden_states=None, attentions=None)
tensor([[0.5003, 0.4997]], grad_fn=<SoftmaxBackward0>)
The model has bad performance
==================================
<|||||>@kssteven418<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>I met the same issue on squad dataset.<|||||>## Poor accuracy after finetuning
I follow the exact instruction in the [model card ](https://huggingface.co/kssteven/ibert-roberta-base)
`python examples/text-classification/run_glue.py \
--model_name_or_path kssteven/ibert-roberta-base \
--task_name MRPC \
--do_eval \
--do_train \
--evaluation_strategy epoch \
--max_seq_length 128 \
--per_device_train_batch_size 32 \
--save_steps 115 \
--learning_rate 2e-5 \
--num_train_epochs 10 \
--output_dir $OUTPUT_DIR`
For finetuning on MRPC and then
`python examples/pytorch/text-classification/run_glue.py \
--model_name_or_path ../output_dir/checkpoint-575/ \
--task_name MRPC \
--do_eval \
--do_train \
--evaluation_strategy epoch \
--max_seq_length 128 \
--per_device_train_batch_size 8 \
--save_steps 115 \
--learning_rate 1e-6 \
--num_train_epochs 10 \
--output_dir $OUTPUT_DIR`
for quantization-aware finetuning. Note that I lowered batch size since after enabling quantization my GPU memory could not handle the batch size of 32...
Result is horrible accuracy:
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---:|:--------------:|
| No log | 1.0 | 230 | 0.6933 | 0.3162 | 0.0 | 0.1581 | |
transformers | 14,175 | closed | [Gradient checkpointing] Enable for Deberta + DebertaV2 + SEW-D | # What does this PR do?
This PR adds `gradient_checkpointing` for DebertaV2 and thus also enables it for SEW-D.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 10-27-2021 12:28:41 | 10-27-2021 12:28:41 | |
transformers | 14,174 | closed | Add DistilHuBERT | # What does this PR do?
This adds the DistilHuBERT conversion script and integration tests.
Paper: [DistilHuBERT: Speech Representation Learning by Layer-wise Distillation of Hidden-unit BERT](https://arxiv.org/abs/2110.01900)
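Once converted, the checkpoint loads like any other HuBERT-style encoder; a rough usage sketch (the hub id `ntu-spml/distilhubert` is an assumption):

```python
import torch
from transformers import Wav2Vec2FeatureExtractor, AutoModel

feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained("ntu-spml/distilhubert")
model = AutoModel.from_pretrained("ntu-spml/distilhubert")

dummy_audio = torch.randn(16000).numpy()  # one second of fake 16 kHz audio
inputs = feature_extractor(dummy_audio, sampling_rate=16000, return_tensors="pt")
hidden_states = model(**inputs).last_hidden_state
```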
The original `s3prl` implementation and weights: https://github.com/s3prl/s3prl/tree/master/s3prl/upstream/distiller | 10-27-2021 12:12:24 | 10-27-2021 12:12:24 | Feel free to merge after fixing the last tests |
transformers | 14,173 | closed | How to train bilingual models? | In the [language_modeling](https://github.com/huggingface/transformers/tree/master/examples/flax/language-modeling) examples, there is a script to train a T5 model for a single language. I would like to know how I can use or change this script to train a bilingual model for languages X and Y. Is it possible to do this with this code?
Also I would like to know if it's possible to initialize training from a multilingual model like `mt5`, and then use training data for one or two languages for further training? | 10-27-2021 11:20:51 | 10-27-2021 11:20:51 | Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.
Could you ask your question on the [forum](https://discuss.huggingface.co) instead?
Thanks!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 14,172 | closed | Clarify QA examples | # What does this PR do?
It clarifies a bit the difference between the various example scripts for question answering. It also moves some of the things related to the legacy scripts (which were still included in the README) to the legacy folder.
Related to #14170 | 10-27-2021 10:09:45 | 10-27-2021 10:09:45 | |
transformers | 14,171 | closed | SEW - Masked Spec errors out in training | To reproduce:
```python
#!/usr/bin/env python3
from transformers import AutoModelForCTC
import torch
model = AutoModelForCTC.from_pretrained("asapp/sew-small-100k", mask_time_prob=0.2).train()
model(torch.tensor([10000 * [1.0]]))
``` | 10-27-2021 09:20:01 | 10-27-2021 09:20:01 | Issue is related to the missing projection layer, resolved by https://github.com/huggingface/transformers/pull/14158 |
transformers | 14,170 | closed | The pytorch example question-answering/run_qa_beam_search.py do not work | ## Environment info
- `transformers` version: git+https://github.com/huggingface/transformers
- Platform:
- Python version: 3.8
- PyTorch version (GPU?): 1.10.0
- Using GPU in script?: yes
### Who can help
@pvl @vanpelt @NielsRogge @sgugger
Models:
- T5: gsarti/it5-base
- encoder-decoder models (For example, BlenderBot, BART, Marian, Pegasus, T5, ByT5): gsarti/it5-base
- Pytorch: 1.10.0
If the model isn't in the list, ping @LysandreJik who will redirect you to the correct contributor.
HF projects:
- datasets: [squad-it](https://huggingface.co/datasets/z-uo/squad-it), adapted from [github squad-it](https://github.com/crux82/squad-it)
Examples:
- maintained examples (not research project or legacy): [question-answering/run_qa_beam_search.py](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_qa_beam_search.py)
## Information
Model I am using (Bert, XLNet ...): T5
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. clone the code: `git clone https://gitlab.com/nicolalandro/qandatrain.git` (in this repo I copied the official training files and pinned the libraries in requirements)
2. go into the code folder: `cd qandatrain`
3. install requirements: `pip install -r requirements.txt`
4. clone dataset: `git clone https://huggingface.co/datasets/z-uo/squad-it`
5. run the code:
```
python src/run_qa_beam_search.py \
--model_name_or_path gsarti/it5-base \
--tokenizer_name gsarti/it5-base \
--dataset_name squad \
--train_file "squad-it/SQuAD_it-train_processed.json" \
--validation_file "squad-it/SQuAD_it-test_processed.json" \
--do_train \
--do_eval \
--per_device_train_batch_size 3 \
--learning_rate 3e-5 \
--num_train_epochs 2 \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir it5-squad
```
6. you obtain the following error:
```
...
Traceback (most recent call last):
File "src/run_qa_beam_search.py", line 696, in <module>
main()
File "src/run_qa_beam_search.py", line 454, in main
train_dataset = train_dataset.map(
File "/media/mint/Barracuda/Project/qandatrain/venv/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 2036, in map
return self._map_single(
File "/media/mint/Barracuda/Project/qandatrain/venv/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 503, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/media/mint/Barracuda/Project/qandatrain/venv/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 470, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/media/mint/Barracuda/Project/qandatrain/venv/lib/python3.8/site-packages/datasets/fingerprint.py", line 406, in wrapper
out = func(self, *args, **kwargs)
File "/media/mint/Barracuda/Project/qandatrain/venv/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 2404, in _map_single
batch = apply_function_on_filtered_inputs(
File "/media/mint/Barracuda/Project/qandatrain/venv/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 2291, in apply_function_on_filtered_inputs
function(*fn_args, effective_indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs)
File "/media/mint/Barracuda/Project/qandatrain/venv/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1991, in decorated
result = f(decorated_item, *args, **kwargs)
File "src/run_qa_beam_search.py", line 386, in prepare_train_features
cls_index = input_ids.index(tokenizer.cls_token_id)
ValueError: 32005 is not in list
```
It seems to be an error in the tokenizer, which does not find some token in the dictionary or in the sentences.
## Expected behavior
Train the T5 model for question answering on squad-it and create the trained model files at output_dir
| 10-27-2021 07:45:16 | 10-27-2021 07:45:16 | > @arfon @pvl @vanpelt @karthikrangasai
Unsure why I received a ping here. Any clues people of Hugging Face?<|||||>> Unsure why I received a ping here. Any clues people of Hugging Face?
GitHub suggested you to me. If you can suggest someone else, I can change it.
<|||||>@sgugger and @NielsRogge should be able to help.<|||||>Hi,
The `run_qa_beam_search.py` and `run_qa_beam_search_no_trainer.py` scripts are only meant for XLNet (which is a special, encoder-only model).
T5 is an encoder-decoder (seq2seq) model, which solves question-answering in a different way, as it's a generative model, rather than a discriminative one. It learns to generate the correct answer from a question + context, instead of predicting the position of the start and end token of the answer.
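For illustration, the generative formulation roughly looks like this (a rough sketch; the `question: ... context: ...` prompt format and the not-yet-fine-tuned checkpoint are only for illustration):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("gsarti/it5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("gsarti/it5-base")

text = "question: Chi ha scritto la Divina Commedia? context: La Divina Commedia è un poema di Dante Alighieri."
inputs = tokenizer(text, return_tensors="pt")
# the model generates the answer text instead of predicting start/end positions
answer_ids = model.generate(**inputs, max_length=32, num_beams=4)
print(tokenizer.decode(answer_ids[0], skip_special_tokens=True))
```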
You can take a look at the new `run_seq2seq_qa.py` [script](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_seq2seq_qa.py) which was added 2 days ago.<|||||>Thank you for the explanation! The training has now started with the correct script; I will put the results on Hugging Face if they turn out well.<|||||>During the eval I get this error:
```
Traceback (most recent call last):
File "src/run_seq2seq_qa.py", line 638, in <module>
main()
File "src/run_seq2seq_qa.py", line 597, in main
metrics = trainer.evaluate(max_length=max_length, num_beams=num_beams, metric_key_prefix="eval")
File "/media/mint/Barracuda/Project/qandatrain/venv/lib/python3.8/site-packages/transformers/trainer_seq2seq.py", line 75, in evaluate
return super().evaluate(eval_dataset, ignore_keys=ignore_keys, metric_key_prefix=metric_key_prefix)
File "/media/mint/Barracuda/Project/qandatrain/venv/lib/python3.8/site-packages/transformers/trainer.py", line 2113, in evaluate
output = eval_loop(
File "/media/mint/Barracuda/Project/qandatrain/venv/lib/python3.8/site-packages/transformers/trainer.py", line 2354, in evaluation_loop
metrics = self.compute_metrics(EvalPrediction(predictions=all_preds, label_ids=all_labels))
File "src/run_seq2seq_qa.py", line 539, in compute_metrics
decoded_preds = [tokenizer.batch_decode(pred, skip_special_tokens=True) for pred in preds]
File "src/run_seq2seq_qa.py", line 539, in <listcomp>
decoded_preds = [tokenizer.batch_decode(pred, skip_special_tokens=True) for pred in preds]
File "/media/mint/Barracuda/Project/qandatrain/venv/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 3182, in batch_decode
return [
File "/media/mint/Barracuda/Project/qandatrain/venv/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 3183, in <listcomp>
self.decode(
File "/media/mint/Barracuda/Project/qandatrain/venv/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 3221, in decode
return self._decode(
File "/media/mint/Barracuda/Project/qandatrain/venv/lib/python3.8/site-packages/transformers/tokenization_utils_fast.py", line 528, in _decode
text = self._tokenizer.decode(token_ids, skip_special_tokens=skip_special_tokens)
TypeError: 'float' object cannot be interpreted as an integer
```
Also, if I do not limit max_eval_samples to 220, with 8 GB of RAM it goes OOM after a few evaluation batches; maybe there is a memory leak.<|||||>@nicolalandro I had the same error when writing tests for the script. You should use the `--predict_with_generate` flag.<|||||>Perfect, with that param the training completed correctly, thank you! |
transformers | 14,169 | closed | Torch 1.10 | Authorize torch 1.10 | 10-27-2021 02:22:14 | 10-27-2021 02:22:14 | As a side note, I'm excited that label smoothing is now integrated into cross entropy! https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html
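For example (torch >= 1.10):

```python
import torch.nn as nn

# label smoothing is now a built-in argument of CrossEntropyLoss
loss_fct = nn.CrossEntropyLoss(label_smoothing=0.1)
```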
Excited to replace my current implementation with this.<|||||>@anton-l @patrickvonplaten @Narsil there's the following error when passing to torch 1.10:
```
_____________ AudioClassificationPipelineTests.test_small_model_pt _____________
[gw0] linux -- Python 3.7.12 /usr/local/bin/python
self = <tests.test_pipelines_audio_classification.AudioClassificationPipelineTests testMethod=test_small_model_pt>
@require_torch
def test_small_model_pt(self):
model = "anton-l/wav2vec2-random-tiny-classifier"
audio_classifier = pipeline("audio-classification", model=model)
audio = np.ones((8000,))
output = audio_classifier(audio, top_k=4)
self.assertEqual(
nested_simplify(output, decimals=4),
[
{"score": 0.0843, "label": "on"},
{"score": 0.0840, "label": "left"},
{"score": 0.0837, "label": "off"},
> {"score": 0.0835, "label": "yes"},
],
)
E AssertionError: Lists differ: [{'score': 0.0842, 'label': 'no'}, {'score': 0.0838, 'lab[77 chars]ht'}] != [{'score': 0.0843, 'label': 'on'}, {'score': 0.084, 'labe[77 chars]es'}]
E
E First differing element 0:
E {'score': 0.0842, 'label': 'no'}
E {'score': 0.0843, 'label': 'on'}
E
E - [{'label': 'no', 'score': 0.0842},
E ? - ^
E
E + [{'label': 'on', 'score': 0.0843},
E ? + ^
E
E - {'label': 'up', 'score': 0.0838},
E ? ^^ ^^
E
E + {'label': 'left', 'score': 0.084},
E ? ^^^^ ^
E
E - {'label': 'go', 'score': 0.0837},
E ? -
E
E + {'label': 'off', 'score': 0.0837},
E ? ++
E
E - {'label': 'right', 'score': 0.0834}]
E ? ^^^^^ ^
E
E + {'label': 'yes', 'score': 0.0835}]
```
Do you know where it might come from?<|||||>@LysandreJik I can't track down which commit made the outputs different yet. The checkpoint certainly didn't change. Is it only torch 1.10 that gets affected?<|||||>It seems so! |
transformers | 14,168 | closed | Add documentation for multi-label classification | # What does this PR do?
Works toward fixing #9772
The `problem_type="multi_label_classification"` option in [PretrainedConfig](https://huggingface.co/transformers/main_classes/configuration.html#transformers.PretrainedConfig) exists for many models; however, I added documentation for DistilBert only for now. If what I've done so far looks good, I'll add the same to the remaining models; otherwise I need some guidance.
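For context, the documented usage looks roughly like this (a minimal sketch, assuming `DistilBertForSequenceClassification` honors `problem_type` as described; the labels and sentence are illustrative):

```python
import torch
from transformers import DistilBertTokenizerFast, DistilBertForSequenceClassification

tokenizer = DistilBertTokenizerFast.from_pretrained("distilbert-base-uncased")
model = DistilBertForSequenceClassification.from_pretrained(
    "distilbert-base-uncased",
    num_labels=3,
    problem_type="multi_label_classification",
)

inputs = tokenizer("A film about both sports and politics", return_tensors="pt")
labels = torch.tensor([[1.0, 0.0, 1.0]])  # multi-hot float targets, one column per label
outputs = model(**inputs, labels=labels)  # BCEWithLogitsLoss is used internally
```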
Also, I have added a notebook with a full example in huggingface/notebooks#102
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [X] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
## Who can review?
Anyone in the community is free to review the PR once the tests have passed.
@LysandreJik and @sgugger may be interested in this review.
| 10-27-2021 00:26:26 | 10-27-2021 00:26:26 | I don't think this is the right way to fix it: the docstring you have written is great, but it should be added in the base sequence classification docstring, as all models with a sequence classification head should accept `problem_type`. I will fix the models that do not deal with it and then you can amend your PR, does that sound right?<|||||>Thanks @sgugger !
Sure - by the "base sequence classification docstring" you mean this?
https://github.com/huggingface/transformers/blob/master/src/transformers/file_utils.py#L828
https://github.com/huggingface/transformers/blob/master/src/transformers/file_utils.py#L1273
Happy to help you fixing the models that do not currently support `problem_type` if you like. I took the list of models that should support it from https://github.com/huggingface/transformers/blob/master/src/transformers/configuration_utils.py#L181.
<|||||>Yes, I meant that docstring.
The PR to enable `problem_type` on all sequence classification models is #14180 , which will hopefully be merged soon :-)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi @sgugger,
Modified as suggested - hope this helps.
Best,
Giacomo |
transformers | 14,167 | closed | Fix gelu test for torch 1.10 | Torch 1.10 introduced a change in the way gelu is computed, resulting in a 1e-9 to 1e-10 difference in results.
This checks that the results are close enough. | 10-26-2021 22:26:44 | 10-26-2021 22:26:44 | Sounds good! |
transformers | 14,166 | closed | fix typos in error messages in speech recognition example and modelcard.py | # What does this PR do?
This PR fixes two minor typographical issues
1 - a typo in an example speech recognition script that misstates which columns is missing from the input data
2 - a typo in the modelcard.py function that explains when some field is missing from a dataset
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
## Who can review?
@patrickvonplaten per git blame, but realistically anyone
@sgugger | 10-26-2021 20:21:15 | 10-26-2021 20:21:15 | |
transformers | 14,165 | closed | Remove n_ctx from configs | # What does this PR do?
Remove `n_ctx` from configs, as it's just a duplicate of `n_positions`.
GPTJ was left unchanged because it has a linear layer that depends on `n_ctx`. I'm unclear why, is it a bug and the author meant `n_embed`?
- GPTJ: https://github.com/huggingface/transformers/blob/master/src/transformers/models/gptj/modeling_gptj.py#L857
- GPT2: https://github.com/huggingface/transformers/blob/master/src/transformers/models/gpt2/modeling_gpt2.py#L1331
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@patrickvonplaten @stas00
| 10-26-2021 16:22:51 | 10-26-2021 16:22:51 | yo, you probably meant to tag @patrickvonplaten :) <|||||>Oops! Thanks!<|||||>Thank you, Thomas!
I haven't had a chance to confirm that it's truly identical. Perhaps someone could do it since I won't be able to do that now.
But a question: how are we going to deal with backward compatibility if we are removing a config key? Do we assume that `n_positions` must be there and that it should therefore just work?
@sgugger, what do you think? <|||||>In terms of backward compatibility, it won't change anything to have this attribute or not: the config will still set the `n_ctx` attribute to the value inside the `config.json` on the model repo (if it is defined there). If it's not used anywhere in a model's code, it can be removed IMO.
cc @LysandreJik <|||||>what about the function arguments? this is part of this PR:
```
- def __init__(self, nx, n_ctx, config, scale=False)
+ def __init__(self, nx, n_positions, config, scale=False)
```<|||||>Changing the inside blocks is fine I think since those are internal, although I'm not sure the slight breaking change is worth it. However changing the config arguments used here and there(e.g. using `config.n_positions` instead of `config.n_ctx`) is not fine (I had not noticed it on my first pass) unless we can guarantee that every model on the hub uses the same values for `n_ctx` and `n_positions`.
All in all not sure this is worth changing anything compared to what we are potentially breaking.<|||||>So that means that we will never be able to clean it up, not even in the proverbial v5.
I suppose in v5 we could add an assert that if `config.json` has `n_ctx` and it's not the same as `n_positions` then we alert the user that `n_ctx` is deprecated and use `n_positions` instead?
or alternatively perhaps asking a different question - is it even possible for a config to have different values for `n_ctx` and `n_positions` - if that code won't work by definition - i.e. there will be a certain mismatch error, then it should be safe to assume that `n_ctx` and `n_positions` are identical on the hub. I haven't validated that it is so, but if it is then...<|||||>A major release won't change anything, as breaking changes to existing models on the Hub will never be accepted. I have no idea why the original design model has those two attributes, so I can't answer your second question.
I guess the best path forward is to study all configs on the Hub with a script, and check if there is one with two different values for those attributes. If there are none, we can proceed with the PR as it is.<|||||>Agreed!
Is there an easy way to download all config files? does hub have a sort of index with all files and their locations? (assuming not in LFS as it's a tiny file)<|||||>I think @patrickvonplaten might have a script to help on this :-) <|||||>Okey as far as I see the only model where anything could potentially break is `modeling_openai`. I'm happy to scan the Hub on whether there are any models out there where this could break. Will run the script tomorrow and let you know.<|||||>GPT2 can be cleaned up for sure IMO `n_ctx` is never actually used there<|||||>Okay so the only "breaking change" config wise are: GPTJForSequenceClassificationModel, OpenAIGPT* models.
I scanned the hub for configs where `n_ctx != n_positions` and there were none (that doesn't mean there aren't any in the wild, I guess...)
Subblocks are not considered as breaking change. cc @patrickvonplaten <|||||>Script used to scan the hub.
```python
import argparse
from typing import Tuple
from transformers import AutoConfig
import json
from multiprocessing import Pool


def get_args():
    parser = argparse.ArgumentParser()
    # Required parameters
    parser.add_argument(
        "--model-ids-file", default=None, type=str, required=True, help="Path to the json file containing all models ids."
    )
    parser.add_argument(
        "--procs", default=1, type=int, help="Number of processes."
    )
    return parser.parse_args()


def check_config(model_id) -> Tuple[str, bool, str]:
    model_id = model_id.strip()
    try:
        config = AutoConfig.from_pretrained(model_id)
    except:
        return model_id, False, f"{model_id} cannot load config"
    if isinstance(config.architectures, list) and len(config.architectures) > 0:
        # here need to check for all `OpenAIGPT...` class names and filter
        # for architecture in config.architectures:
        #     if architecture == "GPTJForSequenceClassification":
        #         if config.n_ctx != config.n_positions:
        #             return model_id, True, f"{config.model_type}, n_ctx != n_positions with n_ctx={config.n_ctx} n_positions={config.n_positions}"
        # return model_id, False, f"No architecture matched GPTJForSequenceClassification or all have matching n_ctx n_positions"
        for architecture in config.architectures:
            if architecture.startswith("OpenAIGPT"):
                if config.n_ctx != config.n_positions:
                    return model_id, True, f"{config.model_type}, n_ctx != n_positions with n_ctx={config.n_ctx} n_positions={config.n_positions}"
        return model_id, False, f"No architecture matched GPTJForSequenceClassification or all have matching n_ctx n_positions"
        # model_type_filter = ["gpt", "gpt2", "ctrl"]
        # if config.model_type in model_type_filter:
        #     if config.n_ctx != config.n_positions:
        #         return model_id, True, f"{config.model_type}, n_ctx != n_positions with n_ctx={config.n_ctx} n_positions={config.n_positions}"
        #     return model_id, False, f"{config.model_type}, n_ctx == n_positions with n_ctx={config.n_ctx} n_positions={config.n_positions}"
        # return model_id, False, f"{config.model_type} not in {model_type_filter}"
    else:
        return model_id, False, f"{model_id} is BAD!"


def main():
    args = get_args()

    with open(args.model_ids_file, "r") as f:
        lines = json.load(f)
    model_ids = [line["modelId"] for line in lines]
    print(model_ids)

    if args.procs > 1:
        pool = Pool(args.procs)
        model_ids_and_reasons = pool.imap(check_config, model_ids)
    else:
        model_ids_and_reasons = [check_config(model_id) for model_id in model_ids]

    all_matches = []
    for i, model_ids_and_reason in enumerate(model_ids_and_reasons):
        if i % 1000 == 0:
            print(i)
        if model_ids_and_reason[1] is False:
            continue
        else:
            all_matches.append(model_ids_and_reason)

    for match in all_matches:
        print(f"{match[0]} is not safe {match[2]}")


if __name__ == "__main__":
    main()
```<|||||>Thanks a lot for checking the existing configs @patrickvonplaten !<|||||>Could we please save the scanning scripts
under https://github.com/huggingface/transformers/tree/master/scripts
and how do we get the list of models that the script takes? Thank you!<|||||>Tests fails currently on master as well. Think this needs to be fixed and then the PR should be rebased :-)<|||||>> Tests fails currently on master as well. Think this needs to be fixed and then the PR should be rebased :-)
FWIW, run_tests_hub CI is currently broken - the failure is not related to this PR.
<|||||>Reran tests, seem to pass now. Previous failures seemed unrelated to this PR. Merging then. |
transformers | 14,164 | closed | Extracting Neutral sentiment from Huggingface model | Hi, I am using the Hugging Face pipeline for the sentiment analysis task, which gives me a Positive/Negative sentiment along with a confidence score. In my case, I need three outputs (Positive/Neutral/Negative). The problem is that the pipeline gives me a high confidence score even for neutral sentences (such as 'He have she has'). Any suggestions?
```python
from transformers import pipeline

model = pipeline(task = 'sentiment-analysis')
sentence = 'some text to evaluate'
predicted = model(sentence)
print(predicted)
```
Here are some output samples:
```
sentence = 'I love you'
predicted = model(sentence)
predicted
[{'label': 'POSITIVE', 'score': 0.9998656511306763}]
----------------------------------------------
sentence = 'I hate you'
predicted = model(sentence)
predicted
[{'label': 'NEGATIVE', 'score': 0.9991129040718079}]
----------------------------------------------
sentence = 'I have she had'
predicted = model(sentence)
predicted
[{'label': 'POSITIVE', 'score': 0.9821817874908447}]
----------------------------------------------
sentence = 'I go to work'
predicted = model(sentence)
predicted
[{'label': 'POSITIVE', 'score': 0.9457777738571167}]
----------------------------------------------
sentence = 'This movie was actually neither that funny, nor super witty.'
predicted = model(sentence)
predicted
[{'label': 'NEGATIVE', 'score': 0.9997298121452332}]
```
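(For completeness, the pipeline can also be pointed at a checkpoint trained with three classes; a sketch, where the model id is just one possible choice:)

```python
from transformers import pipeline

model = pipeline(task = 'sentiment-analysis', model = 'cardiffnlp/twitter-roberta-base-sentiment')
print(model('I go to work'))  # this checkpoint's labels map to negative / neutral / positive
```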
| 10-26-2021 16:17:50 | 10-26-2021 16:17:50 | Hello @banyous! The pipeline can be configured to use any model on the hub that works with this task. If the current model isn't working well for you, I invite you to take a look at the different models on the model hub and select one that fits your use-case: https://huggingface.co/models<|||||>Thanks @LysandreJik <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 14,163 | closed | LayoutLMv2 functionality. | Support for new LayoutLMv2 as discussed in this issue: https://github.com/huggingface/transformers/issues/14160.
Namely, the addition of `LayoutLMv2ForMaskedLM`, which adds a language modeling head on top of the base model to replicate the first pre-training objective in the original paper. | 10-26-2021 15:30:19 | 10-26-2021 15:30:19 | Hi,
This is actually not the implementation, I see you just added text to a README file. If you want to add `LayoutLMv2ForMaskedLM`, you should implement it in `modeling_layoutlmv2.py` which can be found [here](https://github.com/huggingface/transformers/blob/master/src/transformers/models/layoutlmv2/modeling_layoutlmv2.py). |
transformers | 14,162 | closed | Decoding Large Audio Files Using Wav2Vec2ForCTC Model | https://discuss.huggingface.co/t/decding-large-audio-files-using-wav2vec2forctc-model/11097
I've been working with the Wav2Vec2ForCTC model for a while. I used to have small audio files, i.e., audio files with relatively short durations (~1 min). When I tested the model on a large file (~14 mins), the model could not handle it on GPU, so I shifted to using the CPU. I noticed that it used more than 200 GB of RAM to decode! I've tried to split the audio file into smaller audio files and use hidden states to link them together while decoding each segment, but I could not find a way to feed the hidden states of the current audio file to the next audio file for the model to use while decoding.
Any ideas or suggestions?
@patrickvonplaten, @anton-l | 10-26-2021 15:28:07 | 10-26-2021 15:28:07 | Hi @farisalasmary!
Unfortunately, there's no good way to use Wav2Vec-type models on large audio clips. Since it uses a bidirectional transformer as a context encoder, you can't feed hidden states between sequences, like in the autoregressive models.
But if you split the input into chunks with some additional padding (for better context), you can then concatenate the output character ids and decode them in one go, like so:
```python
import torch
import librosa
from transformers import AutoModelForCTC, Wav2Vec2Processor
device = "cuda"
model_path = "facebook/wav2vec2-base-960h"
sample_rate = 16000
model = AutoModelForCTC.from_pretrained(model_path).to(device)
processor = Wav2Vec2Processor.from_pretrained(model_path)
audio, _ = librosa.load("long_audio_file.wav", sr=sample_rate)
chunk_duration = 5 # sec
padding_duration = 1 # sec
chunk_len = chunk_duration*sample_rate
input_padding_len = int(padding_duration*sample_rate)
output_padding_len = model._get_feat_extract_output_lengths(input_padding_len)
all_preds = []
for start in range(input_padding_len, len(audio)-input_padding_len, chunk_len):
    chunk = audio[start-input_padding_len:start+chunk_len+input_padding_len]
    input_values = processor(chunk, sampling_rate=sample_rate, return_tensors="pt").input_values
    with torch.no_grad():
        logits = model(input_values.to(device)).logits[0]
    logits = logits[output_padding_len:len(logits)-output_padding_len]
    predicted_ids = torch.argmax(logits, dim=-1)
    all_preds.append(predicted_ids.cpu())
transcription= processor.decode(torch.cat(all_preds))
```
Note that this snippet isn't well-tested and could contain some off-by-one bugs, but it should give you the general idea :slightly_smiling_face:
However, we are working on integrating a native streaming inference solution into `transformers`, stay tuned for future updates! <|||||>Hi, @anton-l
Thank you for your reply!
I'll try to use your approach and see what will happen. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>hey @anton-l, thank you for your code, it doesn't handle the first and last chunks properly though, do you think its better to add empty audio chunks in the beginning and the end of an audio or just not use start padding in the 1st chunk and end padding in the last chunk? |
transformers | 14,161 | closed | [Speech Recognition] - Distributed training: Make sure vocab file removal and creation don't interfere | # What does this PR do?
Correct a vocab dict issue with multi-processing.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 10-26-2021 13:31:56 | 10-26-2021 13:31:56 | |
transformers | 14,160 | closed | Support for new LayoutLMv2 in Auto Classes (specifically AutoModelForMaskedLM) | # π Feature request
I am trying to finetune a LayoutLMv2 language model on my dataset using the Auto Class functionality, namely AutoModelForMaskedLM. However, it does not support this model. (https://huggingface.co/transformers/model_doc/auto.html#automodelformaskedlm)
## Motivation
It is hard to develop a streamlined and consistent pipeline if I have to jump back and forth between ways to finetune depending on the model I use. | 10-26-2021 12:43:32 | 10-26-2021 12:43:32 | As the authors did not release any pretraining code, we did not implement a `LayoutLMv2ForMaskedLM`, as LayoutLMv2 was actually pre-trained on 3 tasks:
* Masked Visual-Language Modeling
* Text-Image Alignment
* Text-Image Matching
Of course, we could add a `LayoutLMv2ForMaskedLM`, which adds a language modeling head on top of the base model. This could be used to replicate the first pre-training objective. Feel free to open a PR if you would like this to be added.
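A very rough skeleton of what that could look like (untested; the class wiring and the slicing of the visual tokens are assumptions, not an actual implementation):

```python
import torch.nn as nn
from transformers.models.layoutlmv2.modeling_layoutlmv2 import LayoutLMv2Model, LayoutLMv2PreTrainedModel


class LayoutLMv2ForMaskedLM(LayoutLMv2PreTrainedModel):
    def __init__(self, config):
        super().__init__(config)
        self.layoutlmv2 = LayoutLMv2Model(config)
        self.lm_head = nn.Linear(config.hidden_size, config.vocab_size)
        self.init_weights()

    def forward(self, input_ids=None, bbox=None, image=None, attention_mask=None, labels=None, **kwargs):
        outputs = self.layoutlmv2(
            input_ids=input_ids, bbox=bbox, image=image, attention_mask=attention_mask, **kwargs
        )
        # the base model appends visual tokens after the text tokens; only score the text part
        text_len = input_ids.shape[1]
        prediction_scores = self.lm_head(outputs[0][:, :text_len])
        loss = None
        if labels is not None:
            loss_fct = nn.CrossEntropyLoss()  # labels set to -100 are ignored
            loss = loss_fct(prediction_scores.view(-1, self.config.vocab_size), labels.view(-1))
        return {"loss": loss, "logits": prediction_scores}
```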
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 14,159 | closed | run_speech_recognition_ctc.py throwing error when using own dataset | ## Environment info
- `transformers` version: 4.12.0.dev0
- Platform: Linux-5.4.0-1055-aws-x86_64-with-glibc2.10
- Python version: 3.8.12
- PyTorch version (GPU?): 1.10.0+cu102 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help
@patrickvonplaten
Models:
- Wav2Vec2, HuBERT, SpeechEncoderDecoder: @patrickvonplaten, @anton-l
## Information
I am trying to use the recently added script run_speech_recognition_ctc.py to finetune facebook/wav2vec2-large-xlsr-53 as per the documentation.
The problem arises when using:
If I use the official example script the training works as expected. However if I change the dataset_name argument so that it uses my own dataset I get a pyarrow.lib.ArrowNotImplementedError error.
The tasks I am working on is:
* ASR using my own dataset: (give details below)
I am trying to fine tune using my own dataset "pete/autonlp-data-tas_pa_model" which is on the model hub. This dataset has been used by AutoNLP in the past successfully.
## To reproduce
Steps to reproduce the behavior:
Run run_speech_recognition_ctc.py as shown below. The only parameter that has been changed from the demo (https://github.com/huggingface/transformers/tree/master/examples/pytorch/speech-recognition) is the dataset name. If I change the dataset_name to --dataset_name="common_voice" the training works.
```bash
python run_speech_recognition_ctc.py \
--dataset_name="pete/autonlp-data-tas_pa_model" \
--model_name_or_path="facebook/wav2vec2-large-xlsr-53" \
--dataset_config_name="processed" \
--output_dir="./models" \
--overwrite_output_dir \
--num_train_epochs="15" \
--per_device_train_batch_size="16" \
--gradient_accumulation_steps="2" \
--learning_rate="3e-4" \
--warmup_steps="500" \
--evaluation_strategy="steps" \
--text_column_name="text" \
--save_steps="400" \
--eval_steps="100" \
--layerdrop="0.0" \
--save_total_limit="3" \
--freeze_feature_extractor \
--gradient_checkpointing \
--chars_to_ignore , ? . ! - \; \: \" β % β β οΏ½ \
--fp16 \
--group_by_length \
--push_to_hub \
--do_train --do_eval
```
I get the following error
```python
Traceback (most recent call last):
File "run_speech_recognition_ctc.py", line 615, in <module>
main()
File "run_speech_recognition_ctc.py", line 336, in main
raw_datasets["train"] = load_dataset(
File "/home/ubuntu/anaconda3/envs/transformers/lib/python3.8/site-packages/datasets/load.py", line 1627, in load_dataset
builder_instance.download_and_prepare(
File "/home/ubuntu/anaconda3/envs/transformers/lib/python3.8/site-packages/datasets/builder.py", line 607, in download_and_prepare
self._download_and_prepare(
File "/home/ubuntu/anaconda3/envs/transformers/lib/python3.8/site-packages/datasets/builder.py", line 697, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/ubuntu/anaconda3/envs/transformers/lib/python3.8/site-packages/datasets/builder.py", line 1159, in _prepare_split
writer.write_table(table)
File "/home/ubuntu/anaconda3/envs/transformers/lib/python3.8/site-packages/datasets/arrow_writer.py", line 428, in write_table
pa_table = pa.Table.from_arrays([pa_table[name] for name in self._schema.names], schema=self._schema)
File "pyarrow/table.pxi", line 1596, in pyarrow.lib.Table.from_arrays
File "pyarrow/table.pxi", line 592, in pyarrow.lib._sanitize_arrays
File "pyarrow/array.pxi", line 329, in pyarrow.lib.asarray
File "pyarrow/table.pxi", line 277, in pyarrow.lib.ChunkedArray.cast
File "/home/ubuntu/anaconda3/envs/transformers/lib/python3.8/site-packages/pyarrow/compute.py", line 297, in cast
return call_function("cast", [arr], options)
File "pyarrow/_compute.pyx", line 527, in pyarrow._compute.call_function
File "pyarrow/_compute.pyx", line 337, in pyarrow._compute.Function.call
File "pyarrow/error.pxi", line 143, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 120, in pyarrow.lib.check_status
pyarrow.lib.ArrowNotImplementedError: Unsupported cast from struct<train: struct<name: string, num_bytes: int64, num_examples: int64, dataset_name: null>> to list using function cast_list
```
## Expected behavior
The model should be fine tuned using the test and training data in the pete/autonlp-data-tas_pa_model dataset. Note the dataset is private, I had to download the dataset in order for the script to run.
| 10-26-2021 12:31:20 | 10-26-2021 12:31:20 | Hey @peterhanlon,
Thanks for the issue. In your case it seems like you have the dataset already more or less processed, so that the script won't work out of the box.
You will have to adapt the beginning of the script so that it removes all data preprocessing code and simply loads your processed data.
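i.e. replace the preprocessing block with something like this minimal sketch (the column check is just a guess at your dataset's layout; `use_auth_token=True` since the dataset is private):

```python
from datasets import load_dataset

raw_datasets = load_dataset("pete/autonlp-data-tas_pa_model", use_auth_token=True)
print(raw_datasets["train"].column_names)  # check which columns hold the audio and the transcription
```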
Could you try taking a look at this script: https://huggingface.co/ami-wav2vec2/wav2vec2-base-ami_multi-nithin3/blob/main/run_speech_recognition_ctc.py which does more or less the same thing as you intend to do (I believe) and see whether this would work for you? :-)<|||||>Thanks so much for the response @patrickvonplaten, I will give that a go :)
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 14,158 | closed | Add SEW CTC models | # What does this PR do?
This adds the conversion steps and bugfixes to support finetuned SEW and SEW-D checkpoints (https://github.com/asappresearch/sew#asr-model-fine-tuned-on-librispeech-train-clean-100h)
## TODO
- [ ] Add model cards with code examples and WER results
- [x] Update unsupervised models' weights
| 10-26-2021 11:57:49 | 10-26-2021 11:57:49 | |
transformers | 14,157 | closed | [Trainer] Push to hub takes too much space for local `.git` folder | ## Environment info
- `transformers` version: 4.12.0.dev0
- Platform: Linux-5.3.0-64-generic-x86_64-with-glibc2.10
- Python version: 3.8.10
- PyTorch version (GPU?): 1.8.1 (True)
- Tensorflow version (GPU?): 2.6.0 (False)
- Flax version (CPU?/GPU?/TPU?): 0.3.4 (cpu)
- Jax version: 0.2.19
- JaxLib version: 0.1.70
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
## Information
When using the `--push_to_hub` functionality, each commit writes data to the `.git` folder, thus increasing the used disk space significantly. For a model of the size of `bert-base-cased`, the `.git` folder quickly goes up to 10 GB for a short & simple training. This can quickly lead to hard disk errors.
## Expected behavior
Every time `push_to_hub(...)` is called, it should be ensured that the `.git` folder is "cleaned" so that it doesn't contain all the checkpoints of previous commits.
## To reproduce
Do the following:
1. $ mkdir test
2. $ ln -s $(path/to/transformers/examples/pytorch/speech-recognition/run_speech_recognition_ctc.py) ./
3. create a `run_dummy.sh` file with the following code:
4.
```bash
CUDA_VISIBLE_DEVICES="0" python run_speech_recognition_ctc.py \
--dataset_name="timit_asr" \
--model_name_or_path="patrickvonplaten/wav2vec2-base-repro-960h-libri-85k-steps" \
--overwrite_output_dir \
--output_dir="./dummy_run" \
--train_split_name="train" \
--num_train_epochs="1" \
--per_device_train_batch_size="8" \
--per_device_eval_batch_size="1" \
--weight_decay="0.005" \
--learning_rate="1e-4" \
--text_column_name="text" \
--save_steps="10" \
--logging_steps="1" \
--layerdrop="0.0" \
--save_total_limit="3" \
--freeze_feature_extractor \
--fp16 \
--push_to_hub \
--do_train
```
5. Run `bash run_dummy.sh`
Now, depending on your upload speed, multiple model weights will be uploaded to the repository on the hub. However, for each commit the model checkpoints are also saved locally in the `.git` folder. E.g. running the above steps gives me this automatically created repo on the hub: https://huggingface.co/patrickvonplaten/dummy_run/commits/main . As you can see there are 7 commits with 6 commits uploading model checkpoints. Now my local `.git` folder has a size of 2.1 GB containing exactly 6 `.git/lfs/objects` each having the size of one model checkpoint (360MB). => So this means that every commit is written to the hard disk.
This can be very problematic for people with limited disk as the `.git` folder just accumulates saved checkpoints. It essentially makes it impossible to do large model pretraining. I think we should make sure that after each commit the `.git` folder is somewhat cleaned so that it's essentially empty. | 10-26-2021 11:53:22 | 10-26-2021 11:53:22 | The above script runs in like 5 minutes on a single GPU for reproducibility. <|||||>Sorry, I left out an important part: I'm using `huggingface_hub` version `'0.0.19'`.<|||||>Seems like it's something that should be fixed on the `huggingface_hub` side at a first glance. cc @LysandreJik <|||||>Running `git lfs prune` will probably help
https://github.com/git-lfs/git-lfs/blob/main/docs/man/git-lfs-prune.1.ronn
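As a manual workaround in the meantime (run inside the local model repo, e.g. the `dummy_run` folder from the reproduction above):

```bash
cd dummy_run
git lfs prune   # drops local copies of LFS objects that are already pushed
```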
<|||||>Thanks @SBrandeis !
@LysandreJik could we have this as a method on `Repository`? This way I can call it in the `Trainer`. <|||||>We can add it as a method and I'll see if it makes sense to add it as a keyword argument for certain methods, too, like `git_push` or the `commit` context manager. Will work on a PR today or tomorrow.<|||||>Still having this problem with `master` and `huggingface_hub == 0.1.0` - is this expected?<|||||>Yes the option is opt-in and has not been activated yet on the `Trainer` side.<|||||>Could you give a try to #14294 ? |
transformers | 14,156 | closed | User-defined callback can't use logging | I define my own callback class and want to log some info every `print_step`. However, the `logger.info` statement doesn't work during training. If I change `logger.info` to `print`, it prints the content correctly, but if I set `gradient_accumulation_steps` greater than `1`, it prints `gradient_accumulation_steps` times, which I don't want. I guess it may be related to asynchronization, but I have no idea how to solve this problem. It would be my pleasure if anyone could help me. Thanks in advance.
The following is my code
```python
import collections
import logging
from transformers.training_args import TrainingArguments
from transformers.trainer_callback import TrainerCallback, TrainerState, TrainerControl

logger = logging.getLogger(__name__)


class KubeLogCallback(TrainerCallback):
    def __init__(self, print_step: int = 100):
        self.print_step = print_step
        self.eval_step = 0

    def on_step_end(self, args: TrainingArguments, state: TrainerState, control: TrainerControl, **kwargs):
        if state.is_local_process_zero:
            if state.global_step != 0 and state.global_step % self.print_step == 0:
                step_log = f"Epoch: {state.epoch}/{state.num_train_epochs} |" \
                    + f" Steps: {state.global_step}/{state.max_steps} |"
                # print(step_log)
                logger.info(step_log)

    def on_substep_end(self, args: TrainingArguments, state: TrainerState, control: TrainerControl, **kwargs):
        self.on_step_end(args, state, control, **kwargs)

    def on_prediction_step(self, args: TrainingArguments, state: TrainerState, control: TrainerControl,
                           eval_dataloader=None, **kwargs):
        if eval_dataloader is not None:
            if state.is_local_process_zero and isinstance(eval_dataloader.dataset, collections.abc.Sized):
                self.eval_step += 1
                current_eval_step = self.eval_step % len(eval_dataloader)
                if current_eval_step != 0 and current_eval_step % 100 == 0:
                    step_log = f"Eval Steps: {current_eval_step}/{len(eval_dataloader)}"
                    logger.info(step_log)
``` | 10-26-2021 11:20:36 | 10-26-2021 11:20:36 | cc @sgugger <|||||>I think this is because you are not using the logger from the transformers library but a new logger. If you replace `__name__` by `"transformers"` it should work.<|||||>> I think this is because you are not using the logger from the transformers library but a new logger. If you replace `__name__` by `"transformers"` it should work.
Thanks! I changed `__name__` to `transformers` and it logs the info correctly. But if I set `gradient_accumulation_steps` greater than `1`, it still logs `gradient_accumulation_steps` times, which looks redundant. Is there any good way to solve it? Thanks for your reply again!<|||||>That is because of:
```py
def on_substep_end(self, args: TrainingArguments, state: TrainerState, control: TrainerControl, **kwargs):
    self.on_step_end(args, state, control, **kwargs)
```
The `on_step_end` method is supposed to only be called every gradient accumulation steps of the training data, but your callback calls it at every subset.<|||||>> That is because of:
>
> ```python
> def on_substep_end(self, args: TrainingArguments, state: TrainerState, control: TrainerControl, **kwargs):
>     self.on_step_end(args, state, control, **kwargs)
> ```
>
> The `on_step_end` method is supposed to only be called every gradient accumulation steps of the training data, but your callback calls it at every subset.
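Putting both suggestions together, something like this seems to work (untested sketch):

```python
import logging
from transformers.trainer_callback import TrainerCallback

logger = logging.getLogger("transformers")


class KubeLogCallback(TrainerCallback):
    def __init__(self, print_step: int = 100):
        self.print_step = print_step

    # only on_step_end: it already fires once per optimizer step,
    # i.e. once every `gradient_accumulation_steps` sub-steps
    def on_step_end(self, args, state, control, **kwargs):
        if state.is_local_process_zero and state.global_step != 0 and state.global_step % self.print_step == 0:
            logger.info(f"Epoch: {state.epoch}/{state.num_train_epochs} | Steps: {state.global_step}/{state.max_steps}")
```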
OK, thank you! :satisfied: |
transformers | 14,155 | closed | Include Keras tensor in the allowed types | Includes Keras tensor in the allowed types. This allows propagating symbolic `keras.Input` tensors through the models' `call` method. This way we can convert any HF (subclass) model into a functional model.
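For example, something along these lines becomes possible (a sketch; the checkpoint and shapes are illustrative):

```python
import tensorflow as tf
from transformers import TFDistilBertModel

encoder = TFDistilBertModel.from_pretrained("distilbert-base-uncased")

input_ids = tf.keras.Input(shape=(None,), dtype=tf.int32, name="input_ids")
attention_mask = tf.keras.Input(shape=(None,), dtype=tf.int32, name="attention_mask")

# symbolic KerasTensors now propagate through the subclassed model's call()
sequence_output = encoder(input_ids, attention_mask=attention_mask)[0]
functional_model = tf.keras.Model(inputs=[input_ids, attention_mask], outputs=sequence_output)
```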
@Rocketknight1
| 10-26-2021 10:13:05 | 10-26-2021 10:13:05 | @sergiovalmac looks like all the CI tests are passing except the code quality, let me run that one for you!<|||||>@sergiovalmac tests passing now, merge whenever you're happy with the PR!<|||||>Brilliant! Thank you very much! Please could you merge it? It says that only those with write access can merge PRs.<|||||>Ah, I'm sorry, I thought you should be able to once I approved it!<|||||>Thanks! :-) |
transformers | 14,154 | closed | [Speech Recognition CTC] Add auth token to fine-tune private models | # What does this PR do?
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 10-26-2021 09:27:07 | 10-26-2021 09:27:07 | |
transformers | 14,153 | closed | how to reproduce distilbert pretrain on TF2.x? | hi ALL,
I noticed that there is a script to train DistilBERT on PyTorch ([Link](https://github.com/huggingface/transformers/blob/master/examples/research_projects/distillation/train.py)); is there similar code for TF 2.x for reference?
Thanks a lot. | 10-26-2021 07:03:31 | 10-26-2021 07:03:31 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
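For anyone looking for a starting point: as far as I know there is no official TF2 port of that script, but a minimal sketch of the core distillation step in TF2 could look like the following. The checkpoint names, loss weights, and temperature are illustrative assumptions, not values taken from the PyTorch script.
```python
import tensorflow as tf
from transformers import TFAutoModelForMaskedLM

# Hypothetical teacher/student pair (both share the same WordPiece vocabulary).
teacher = TFAutoModelForMaskedLM.from_pretrained("bert-base-uncased")
student = TFAutoModelForMaskedLM.from_pretrained("distilbert-base-uncased")
teacher.trainable = False  # the teacher only provides soft targets

optimizer = tf.keras.optimizers.Adam(learning_rate=5e-5)
kl_div = tf.keras.losses.KLDivergence()
temperature = 2.0  # illustrative value


@tf.function
def distill_step(input_ids, attention_mask, labels):
    # Frozen teacher forward pass; no gradients flow into it.
    teacher_logits = teacher(input_ids, attention_mask=attention_mask, training=False).logits
    with tf.GradientTape() as tape:
        out = student(input_ids, attention_mask=attention_mask, labels=labels, training=True)
        # Soft-target loss: KL between temperature-scaled teacher and student distributions.
        soft_teacher = tf.nn.softmax(teacher_logits / temperature, axis=-1)
        soft_student = tf.nn.softmax(out.logits / temperature, axis=-1)
        distill_loss = kl_div(soft_teacher, soft_student) * temperature**2
        mlm_loss = tf.reduce_mean(out.loss)  # masked-LM loss on the hard labels
        loss = 0.5 * distill_loss + 0.5 * mlm_loss  # illustrative weighting
    grads = tape.gradient(loss, student.trainable_variables)
    optimizer.apply_gradients(zip(grads, student.trainable_variables))
    return loss
```
The real recipe also adds a cosine loss between teacher and student hidden states and the usual MLM masking pipeline; the sketch only shows where the distillation loss plugs in.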
transformers | 14,152 | closed | How to save wrapped DistilBERT without using `save_pretrained`? | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.9.2
- Platform: Ubuntu 20
- Python version: 3.8
- PyTorch version (GPU?):
- Tensorflow version (GPU?): 2.6
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- ALBERT, BERT, XLM, DeBERTa, DeBERTa-v2, ELECTRA, MobileBert, SqueezeBert: @LysandreJik
- encoder-decoder models (For example, BlenderBot, BART, Marian, Pegasus, T5, ByT5): @patrickvonplaten, @patil-suraj
- Longformer, Reformer, TransfoXL, XLNet, FNet: @patrickvonplaten
- FSMT: @stas00
- Funnel: @sgugger
- GPT-2, GPT: @patrickvonplaten, @LysandreJik
- RAG, DPR: @patrickvonplaten, @lhoestq
- TensorFlow: @Rocketknight1
- JAX/Flax: @patil-suraj @patrickvonplaten
- TAPAS, LayoutLM, LayoutLMv2, LUKE, ViT, BEiT, DEiT, DETR, CANINE: @NielsRogge
- GPT-Neo, GPT-J, CLIP: @patil-suraj
- Wav2Vec2, HuBERT, SpeechEncoderDecoder: @patrickvonplaten, @anton-l
If the model isn't in the list, ping @LysandreJik who will redirect you to the correct contributor.
Library:
- Benchmarks: @patrickvonplaten
- Deepspeed: @stas00
- Ray/raytune: @richardliaw, @amogkam
- Text generation: @patrickvonplaten
- Tokenizers: @LysandreJik
- Trainer: @sgugger
- Pipelines: @Narsil
- Speech: @patrickvonplaten, @anton-l
- Vision: @NielsRogge, @sgugger
Documentation: @sgugger
Model hub:
- for issues with a model, report at https://discuss.huggingface.co/ and tag the model's creator.
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
For research projects, please ping the contributor directly. For example, on the following projects:
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
@Rocketknight1
## Information
Model I am using (Bert, XLNet ...):
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
Simply run the codes below
```python
import tensorflow as tf
from transformers import (
TFDistilBertModel,
DistilBertTokenizerFast,
DistilBertConfig,
)
def build_classifier_model():
input_ids = tf.keras.layers.Input(shape=(None,), name="input_ids", dtype=tf.int32)
attention_mask = tf.keras.layers.Input(
shape=(None,), name="attention_mask", dtype=tf.int32
)
config = DistilBertConfig(
dropout=0.2,
attention_dropout=0.2,
output_attentions=True,
output_hidden_states=False,
return_dict=False,
)
transformer = TFDistilBertModel.from_pretrained(
"distilbert-base-uncased", config=config
)
transformer.trainable = False
last_hidden_state = transformer(
[input_ids, attention_mask],
)[0]
x = last_hidden_state[:, 0, :]
x = tf.keras.layers.Dense(768, activation="relu")(x)
x = tf.keras.layers.Dropout(0.2)(x)
outputs = {
label_name: tf.keras.layers.Dense(1, activation="sigmoid", name=label_name)(x)
for label_name in ['A', 'B', 'C']
}
return tf.keras.Model([input_ids, attention_mask], outputs)
model = build_classifier_model()
model.save('./dump/savedmodel')
```
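A workaround sketch that sidesteps full-model export: checkpoint only the weights and rebuild the architecture before loading. Weights-only saving does not trace the wrapped HF layer's `call`, so it should not hit the failure shown below. This reuses `build_classifier_model` from above; the checkpoint path is an arbitrary choice.
```python
# Save only the variables (TF checkpoint format); no SavedModel tracing is involved.
model = build_classifier_model()
model.save_weights("./dump/classifier_ckpt")

# Later: recreate the exact same architecture, then restore the weights.
restored = build_classifier_model()
restored.load_weights("./dump/classifier_ckpt")
```
If an exportable artifact is still needed, the inner `transformer` has its own `save_pretrained`, so it can be saved separately from the classification head.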
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
I expect this to generate artifacts containing the model in the SavedModel format, but instead I got:
```
~/miniforge3/envs/folder/lib/python3.8/site-packages/transformers/models/distilbert/modeling_tf_distilbert.py in call(self, input_ids, attention_mask, head_mask, inputs_embeds, output_attentions, output_hidden_states, return_dict, training, **kwargs)
561 **kwargs,
562 ):
--> 563 inputs = input_processing(
564 func=self.call,
565 config=self.config,
~/miniforge3/envs/folder/lib/python3.8/site-packages/transformers/modeling_tf_utils.py in input_processing(func, config, input_ids, **kwargs)
376 output[tensor_name] = input
377 else:
--> 378 output[parameter_names[i]] = input
379 elif isinstance(input, allowed_types) or input is None:
380 output[parameter_names[i]] = input
IndexError: list index out of range
``` | 10-26-2021 06:20:48 | 10-26-2021 06:20:48 | I've seen others post similar issues: https://github.com/huggingface/transformers/issues/13610 and https://github.com/huggingface/transformers/issues/13742. However, since I am wrapping the model in `tf.keras.Model`, `save_pretrained` isn't a viable solution. Are there any workarounds?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>This issue should be resolved by recent PRs - if you're still encountering difficulties after installing the most recent release, please reopen it and let us know!<|||||>@hardianlawi Were you able to solve this issue? I am still facing this issue, can you help?
Hi @Rocketknight1, I am still facing this issue. I am not able to save a fine-tuned `TFDistilBertModel` in Keras with `model.save()`. Since the model is wrapped in `tf.keras.Model`, I can't use `save_pretrained`.
transformers version: 4.15
Platform: Ubuntu 20
Python version: 3.8
PyTorch version (GPU?):
Tensorflow version (GPU?): 2.6.2
Using GPU in script?: Yes
Using distributed or parallel set-up in script?: Yes
Error:
```
---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
/tmp/ipykernel_47167/2242234445.py in <module>
----> 1 model.save("save_path")
~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/keras/engine/training.py in save(self, filepath, overwrite, include_optimizer, save_format, signatures, options, save_traces)
2143 """
2144 # pylint: enable=line-too-long
-> 2145 save.save_model(self, filepath, overwrite, include_optimizer, save_format,
2146 signatures, options, save_traces)
2147
~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/keras/saving/save.py in save_model(model, filepath, overwrite, include_optimizer, save_format, signatures, options, save_traces)
147 else:
148 with generic_utils.SharedObjectSavingScope():
--> 149 saved_model_save.save(model, filepath, overwrite, include_optimizer,
150 signatures, options, save_traces)
151
~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/keras/saving/saved_model/save.py in save(model, filepath, overwrite, include_optimizer, signatures, options, save_traces)
88 with K.deprecated_internal_learning_phase_scope(0):
89 with utils.keras_option_scope(save_traces):
---> 90 saved_nodes, node_paths = save_lib.save_and_return_nodes(
91 model, filepath, signatures, options)
92
~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/tensorflow/python/saved_model/save.py in save_and_return_nodes(obj, export_dir, signatures, options, experimental_skip_checkpoint)
1226
1227 _, exported_graph, object_saver, asset_info, saved_nodes, node_paths = (
-> 1228 _build_meta_graph(obj, signatures, options, meta_graph_def))
1229 saved_model.saved_model_schema_version = (
1230 pywrap_libexport.SAVED_MODEL_SCHEMA_VERSION)
~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/tensorflow/python/saved_model/save.py in _build_meta_graph(obj, signatures, options, meta_graph_def)
1397
1398 with save_context.save_context(options):
-> 1399 return _build_meta_graph_impl(obj, signatures, options, meta_graph_def)
~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/tensorflow/python/saved_model/save.py in _build_meta_graph_impl(obj, signatures, options, meta_graph_def)
1333 checkpoint_graph_view = _AugmentedGraphView(obj)
1334 if signatures is None:
-> 1335 signatures = signature_serialization.find_function_to_export(
1336 checkpoint_graph_view)
1337
~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/tensorflow/python/saved_model/signature_serialization.py in find_function_to_export(saveable_view)
97 # If the user did not specify signatures, check the root object for a function
98 # that can be made into a signature.
---> 99 functions = saveable_view.list_functions(saveable_view.root)
100 signature = functions.get(DEFAULT_SIGNATURE_ATTR, None)
101 if signature is not None:
~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/tensorflow/python/saved_model/save.py in list_functions(self, obj)
161 obj_functions = self._functions.get(obj, None)
162 if obj_functions is None:
--> 163 obj_functions = obj._list_functions_for_serialization( # pylint: disable=protected-access
164 self._serialization_cache)
165 self._functions[obj] = obj_functions
~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/keras/engine/training.py in _list_functions_for_serialization(self, serialization_cache)
2810 self.predict_function = None
2811 self.train_tf_function = None
-> 2812 functions = super(
2813 Model, self)._list_functions_for_serialization(serialization_cache)
2814 self.train_function = train_function
~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/keras/engine/base_layer.py in _list_functions_for_serialization(self, serialization_cache)
3083
3084 def _list_functions_for_serialization(self, serialization_cache):
-> 3085 return (self._trackable_saved_model_saver
3086 .list_functions_for_serialization(serialization_cache))
3087
~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/keras/saving/saved_model/base_serialization.py in list_functions_for_serialization(self, serialization_cache)
91 return {}
92
---> 93 fns = self.functions_to_serialize(serialization_cache)
94
95 # The parent AutoTrackable class saves all user-defined tf.functions, and
~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/keras/saving/saved_model/layer_serialization.py in functions_to_serialize(self, serialization_cache)
71
72 def functions_to_serialize(self, serialization_cache):
---> 73 return (self._get_serialized_attributes(
74 serialization_cache).functions_to_serialize)
75
~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/keras/saving/saved_model/layer_serialization.py in _get_serialized_attributes(self, serialization_cache)
87 return serialized_attr
88
---> 89 object_dict, function_dict = self._get_serialized_attributes_internal(
90 serialization_cache)
91
~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/keras/saving/saved_model/model_serialization.py in _get_serialized_attributes_internal(self, serialization_cache)
54 # the ones serialized by Layer.
55 objects, functions = (
---> 56 super(ModelSavedModelSaver, self)._get_serialized_attributes_internal(
57 serialization_cache))
58 functions['_default_save_signature'] = default_signature
~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/keras/saving/saved_model/layer_serialization.py in _get_serialized_attributes_internal(self, serialization_cache)
97 """Returns dictionary of serialized attributes."""
98 objects = save_impl.wrap_layer_objects(self.obj, serialization_cache)
---> 99 functions = save_impl.wrap_layer_functions(self.obj, serialization_cache)
100 # Attribute validator requires that the default save signature is added to
101 # function dict, even if the value is None.
~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/keras/saving/saved_model/save_impl.py in wrap_layer_functions(layer, serialization_cache)
195 for fn in fns.values():
196 if fn is not None and not isinstance(fn, LayerCall):
--> 197 fn.get_concrete_function()
198
199 # Restore overwritten functions and losses
~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/contextlib.py in __exit__(self, type, value, traceback)
118 if type is None:
119 try:
--> 120 next(self.gen)
121 except StopIteration:
122 return False
~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/keras/saving/saved_model/save_impl.py in tracing_scope()
357 if training is not None:
358 with K.deprecated_internal_learning_phase_scope(training):
--> 359 fn.get_concrete_function(*args, **kwargs)
360 else:
361 fn.get_concrete_function(*args, **kwargs)
~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py in get_concrete_function(self, *args, **kwargs)
1231 def get_concrete_function(self, *args, **kwargs):
1232 # Implements GenericFunction.get_concrete_function.
-> 1233 concrete = self._get_concrete_function_garbage_collected(*args, **kwargs)
1234 concrete._garbage_collector.release() # pylint: disable=protected-access
1235 return concrete
~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py in _get_concrete_function_garbage_collected(self, *args, **kwargs)
1211 if self._stateful_fn is None:
1212 initializers = []
-> 1213 self._initialize(args, kwargs, add_initializers_to=initializers)
1214 self._initialize_uninitialized_variables(initializers)
1215
~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py in _initialize(self, args, kwds, add_initializers_to)
757 self._graph_deleter = FunctionDeleter(self._lifted_initializer_graph)
758 self._concrete_stateful_fn = (
--> 759 self._stateful_fn._get_concrete_function_internal_garbage_collected( # pylint: disable=protected-access
760 *args, **kwds))
761
~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/tensorflow/python/eager/function.py in _get_concrete_function_internal_garbage_collected(self, *args, **kwargs)
3064 args, kwargs = None, None
3065 with self._lock:
-> 3066 graph_function, _ = self._maybe_define_function(args, kwargs)
3067 return graph_function
3068
~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/tensorflow/python/eager/function.py in _maybe_define_function(self, args, kwargs)
3461
3462 self._function_cache.missed.add(call_context_key)
-> 3463 graph_function = self._create_graph_function(args, kwargs)
3464 self._function_cache.primary[cache_key] = graph_function
3465
~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/tensorflow/python/eager/function.py in _create_graph_function(self, args, kwargs, override_flat_arg_shapes)
3296 arg_names = base_arg_names + missing_arg_names
3297 graph_function = ConcreteFunction(
-> 3298 func_graph_module.func_graph_from_py_func(
3299 self._name,
3300 self._python_function,
~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/tensorflow/python/framework/func_graph.py in func_graph_from_py_func(name, python_func, args, kwargs, signature, func_graph, autograph, autograph_options, add_control_dependencies, arg_names, op_return_value, collections, capture_by_value, override_flat_arg_shapes, acd_record_initial_resource_uses)
1005 _, original_func = tf_decorator.unwrap(python_func)
1006
-> 1007 func_outputs = python_func(*func_args, **func_kwargs)
1008
1009 # invariant: `func_outputs` contains only Tensors, CompositeTensors,
~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py in wrapped_fn(*args, **kwds)
666 # the function a weak reference to itself to avoid a reference cycle.
667 with OptionalXlaContext(compile_with_xla):
--> 668 out = weak_wrapped_fn().__wrapped__(*args, **kwds)
669 return out
670
~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/keras/saving/saved_model/save_impl.py in wrapper(*args, **kwargs)
570 with autocast_variable.enable_auto_cast_variables(
571 layer._compute_dtype_object): # pylint: disable=protected-access
--> 572 ret = method(*args, **kwargs)
573 _restore_layer_losses(original_losses)
574 return ret
~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/keras/saving/saved_model/utils.py in wrap_with_training_arg(*args, **kwargs)
162 return wrapped_call(*args, **kwargs)
163
--> 164 return control_flow_util.smart_cond(
165 training, lambda: replace_training_and_call(True),
166 lambda: replace_training_and_call(False))
~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/keras/utils/control_flow_util.py in smart_cond(pred, true_fn, false_fn, name)
103 return tf.cond(
104 pred, true_fn=true_fn, false_fn=false_fn, name=name)
--> 105 return tf.__internal__.smart_cond.smart_cond(
106 pred, true_fn=true_fn, false_fn=false_fn, name=name)
107
~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/tensorflow/python/framework/smart_cond.py in smart_cond(pred, true_fn, false_fn, name)
56 return true_fn()
57 else:
---> 58 return false_fn()
59 else:
60 return control_flow_ops.cond(pred, true_fn=true_fn, false_fn=false_fn,
~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/keras/saving/saved_model/utils.py in <lambda>()
164 return control_flow_util.smart_cond(
165 training, lambda: replace_training_and_call(True),
--> 166 lambda: replace_training_and_call(False))
167
168 # Create arg spec for decorated function. If 'training' is not defined in the
~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/keras/saving/saved_model/utils.py in replace_training_and_call(training)
160 def replace_training_and_call(training):
161 set_training_arg(training, training_arg_index, args, kwargs)
--> 162 return wrapped_call(*args, **kwargs)
163
164 return control_flow_util.smart_cond(
~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/keras/saving/saved_model/save_impl.py in call(inputs, *args, **kwargs)
649 return layer.keras_api.__call__ # pylint: disable=protected-access
650 def call(inputs, *args, **kwargs):
--> 651 return call_and_return_conditional_losses(inputs, *args, **kwargs)[0]
652 return _create_call_fn_decorator(layer, call)
653
~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/keras/saving/saved_model/save_impl.py in __call__(self, *args, **kwargs)
607 def __call__(self, *args, **kwargs):
608 self._maybe_trace(args, kwargs)
--> 609 return self.wrapped_call(*args, **kwargs)
610
611 def get_concrete_function(self, *args, **kwargs):
~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py in __call__(self, *args, **kwds)
883
884 with OptionalXlaContext(self._jit_compile):
--> 885 result = self._call(*args, **kwds)
886
887 new_tracing_count = self.experimental_get_tracing_count()
~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py in _call(self, *args, **kwds)
931 # This is the first call of __call__, so we have to initialize.
932 initializers = []
--> 933 self._initialize(args, kwds, add_initializers_to=initializers)
934 finally:
935 # At this point we know that the initialization is complete (or less
~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py in _initialize(self, args, kwds, add_initializers_to)
757 self._graph_deleter = FunctionDeleter(self._lifted_initializer_graph)
758 self._concrete_stateful_fn = (
--> 759 self._stateful_fn._get_concrete_function_internal_garbage_collected( # pylint: disable=protected-access
760 *args, **kwds))
761
~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/tensorflow/python/eager/function.py in _get_concrete_function_internal_garbage_collected(self, *args, **kwargs)
3064 args, kwargs = None, None
3065 with self._lock:
-> 3066 graph_function, _ = self._maybe_define_function(args, kwargs)
3067 return graph_function
3068
~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/tensorflow/python/eager/function.py in _maybe_define_function(self, args, kwargs)
3461
3462 self._function_cache.missed.add(call_context_key)
-> 3463 graph_function = self._create_graph_function(args, kwargs)
3464 self._function_cache.primary[cache_key] = graph_function
3465
~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/tensorflow/python/eager/function.py in _create_graph_function(self, args, kwargs, override_flat_arg_shapes)
3296 arg_names = base_arg_names + missing_arg_names
3297 graph_function = ConcreteFunction(
-> 3298 func_graph_module.func_graph_from_py_func(
3299 self._name,
3300 self._python_function,
~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/tensorflow/python/framework/func_graph.py in func_graph_from_py_func(name, python_func, args, kwargs, signature, func_graph, autograph, autograph_options, add_control_dependencies, arg_names, op_return_value, collections, capture_by_value, override_flat_arg_shapes, acd_record_initial_resource_uses)
1005 _, original_func = tf_decorator.unwrap(python_func)
1006
-> 1007 func_outputs = python_func(*func_args, **func_kwargs)
1008
1009 # invariant: `func_outputs` contains only Tensors, CompositeTensors,
~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py in wrapped_fn(*args, **kwds)
666 # the function a weak reference to itself to avoid a reference cycle.
667 with OptionalXlaContext(compile_with_xla):
--> 668 out = weak_wrapped_fn().__wrapped__(*args, **kwds)
669 return out
670
~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/keras/saving/saved_model/save_impl.py in wrapper(*args, **kwargs)
570 with autocast_variable.enable_auto_cast_variables(
571 layer._compute_dtype_object): # pylint: disable=protected-access
--> 572 ret = method(*args, **kwargs)
573 _restore_layer_losses(original_losses)
574 return ret
~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/keras/saving/saved_model/utils.py in wrap_with_training_arg(*args, **kwargs)
162 return wrapped_call(*args, **kwargs)
163
--> 164 return control_flow_util.smart_cond(
165 training, lambda: replace_training_and_call(True),
166 lambda: replace_training_and_call(False))
~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/keras/utils/control_flow_util.py in smart_cond(pred, true_fn, false_fn, name)
103 return tf.cond(
104 pred, true_fn=true_fn, false_fn=false_fn, name=name)
--> 105 return tf.__internal__.smart_cond.smart_cond(
106 pred, true_fn=true_fn, false_fn=false_fn, name=name)
107
~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/tensorflow/python/framework/smart_cond.py in smart_cond(pred, true_fn, false_fn, name)
56 return true_fn()
57 else:
---> 58 return false_fn()
59 else:
60 return control_flow_ops.cond(pred, true_fn=true_fn, false_fn=false_fn,
~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/keras/saving/saved_model/utils.py in <lambda>()
164 return control_flow_util.smart_cond(
165 training, lambda: replace_training_and_call(True),
--> 166 lambda: replace_training_and_call(False))
167
168 # Create arg spec for decorated function. If 'training' is not defined in the
~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/keras/saving/saved_model/utils.py in replace_training_and_call(training)
160 def replace_training_and_call(training):
161 set_training_arg(training, training_arg_index, args, kwargs)
--> 162 return wrapped_call(*args, **kwargs)
163
164 return control_flow_util.smart_cond(
~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/keras/saving/saved_model/save_impl.py in call_and_return_conditional_losses(*args, **kwargs)
631 def call_and_return_conditional_losses(*args, **kwargs):
632 """Returns layer (call_output, conditional losses) tuple."""
--> 633 call_output = layer_call(*args, **kwargs)
634 if version_utils.is_v1_layer_or_model(layer):
635 conditional_losses = layer.get_losses_for(
~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/keras/engine/functional.py in call(self, inputs, training, mask)
412 a list of tensors if there are more than one outputs.
413 """
--> 414 return self._run_internal_graph(
415 inputs, training=training, mask=mask)
416
~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/keras/engine/functional.py in _run_internal_graph(self, inputs, training, mask)
548
549 args, kwargs = node.map_arguments(tensor_dict)
--> 550 outputs = node.layer(*args, **kwargs)
551
552 # Update tensor_dict.
~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/keras/engine/base_layer.py in __call__(self, *args, **kwargs)
1035 with autocast_variable.enable_auto_cast_variables(
1036 self._compute_dtype_object):
-> 1037 outputs = call_fn(inputs, *args, **kwargs)
1038
1039 if self._activity_regularizer:
~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/keras/saving/saved_model/utils.py in return_outputs_and_add_losses(*args, **kwargs)
66 args = args[1:]
67
---> 68 outputs, losses = fn(*args, **kwargs)
69 layer.add_loss(losses, inputs=True)
70
~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/keras/saving/saved_model/utils.py in wrap_with_training_arg(*args, **kwargs)
162 return wrapped_call(*args, **kwargs)
163
--> 164 return control_flow_util.smart_cond(
165 training, lambda: replace_training_and_call(True),
166 lambda: replace_training_and_call(False))
~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/keras/utils/control_flow_util.py in smart_cond(pred, true_fn, false_fn, name)
103 return tf.cond(
104 pred, true_fn=true_fn, false_fn=false_fn, name=name)
--> 105 return tf.__internal__.smart_cond.smart_cond(
106 pred, true_fn=true_fn, false_fn=false_fn, name=name)
107
~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/tensorflow/python/framework/smart_cond.py in smart_cond(pred, true_fn, false_fn, name)
56 return true_fn()
57 else:
---> 58 return false_fn()
59 else:
60 return control_flow_ops.cond(pred, true_fn=true_fn, false_fn=false_fn,
~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/keras/saving/saved_model/utils.py in <lambda>()
164 return control_flow_util.smart_cond(
165 training, lambda: replace_training_and_call(True),
--> 166 lambda: replace_training_and_call(False))
167
168 # Create arg spec for decorated function. If 'training' is not defined in the
~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/keras/saving/saved_model/utils.py in replace_training_and_call(training)
160 def replace_training_and_call(training):
161 set_training_arg(training, training_arg_index, args, kwargs)
--> 162 return wrapped_call(*args, **kwargs)
163
164 return control_flow_util.smart_cond(
~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/keras/saving/saved_model/save_impl.py in __call__(self, *args, **kwargs)
607 def __call__(self, *args, **kwargs):
608 self._maybe_trace(args, kwargs)
--> 609 return self.wrapped_call(*args, **kwargs)
610
611 def get_concrete_function(self, *args, **kwargs):
~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py in __call__(self, *args, **kwds)
883
884 with OptionalXlaContext(self._jit_compile):
--> 885 result = self._call(*args, **kwds)
886
887 new_tracing_count = self.experimental_get_tracing_count()
~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py in _call(self, *args, **kwds)
922 # In this case we have not created variables on the first call. So we can
923 # run the first trace but we should fail if variables are created.
--> 924 results = self._stateful_fn(*args, **kwds)
925 if self._created_variables and not ALLOW_DYNAMIC_VARIABLE_CREATION:
926 raise ValueError("Creating variables on a non-first call to a function"
~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/tensorflow/python/eager/function.py in __call__(self, *args, **kwargs)
3036 with self._lock:
3037 (graph_function,
-> 3038 filtered_flat_args) = self._maybe_define_function(args, kwargs)
3039 return graph_function._call_flat(
3040 filtered_flat_args, captured_inputs=graph_function.captured_inputs) # pylint: disable=protected-access
~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/tensorflow/python/eager/function.py in _maybe_define_function(self, args, kwargs)
3461
3462 self._function_cache.missed.add(call_context_key)
-> 3463 graph_function = self._create_graph_function(args, kwargs)
3464 self._function_cache.primary[cache_key] = graph_function
3465
~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/tensorflow/python/eager/function.py in _create_graph_function(self, args, kwargs, override_flat_arg_shapes)
3296 arg_names = base_arg_names + missing_arg_names
3297 graph_function = ConcreteFunction(
-> 3298 func_graph_module.func_graph_from_py_func(
3299 self._name,
3300 self._python_function,
~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/tensorflow/python/framework/func_graph.py in func_graph_from_py_func(name, python_func, args, kwargs, signature, func_graph, autograph, autograph_options, add_control_dependencies, arg_names, op_return_value, collections, capture_by_value, override_flat_arg_shapes, acd_record_initial_resource_uses)
1005 _, original_func = tf_decorator.unwrap(python_func)
1006
-> 1007 func_outputs = python_func(*func_args, **func_kwargs)
1008
1009 # invariant: `func_outputs` contains only Tensors, CompositeTensors,
~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py in wrapped_fn(*args, **kwds)
666 # the function a weak reference to itself to avoid a reference cycle.
667 with OptionalXlaContext(compile_with_xla):
--> 668 out = weak_wrapped_fn().__wrapped__(*args, **kwds)
669 return out
670
~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/keras/saving/saved_model/save_impl.py in wrapper(*args, **kwargs)
570 with autocast_variable.enable_auto_cast_variables(
571 layer._compute_dtype_object): # pylint: disable=protected-access
--> 572 ret = method(*args, **kwargs)
573 _restore_layer_losses(original_losses)
574 return ret
~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/keras/saving/saved_model/utils.py in wrap_with_training_arg(*args, **kwargs)
162 return wrapped_call(*args, **kwargs)
163
--> 164 return control_flow_util.smart_cond(
165 training, lambda: replace_training_and_call(True),
166 lambda: replace_training_and_call(False))
~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/keras/utils/control_flow_util.py in smart_cond(pred, true_fn, false_fn, name)
103 return tf.cond(
104 pred, true_fn=true_fn, false_fn=false_fn, name=name)
--> 105 return tf.__internal__.smart_cond.smart_cond(
106 pred, true_fn=true_fn, false_fn=false_fn, name=name)
107
~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/tensorflow/python/framework/smart_cond.py in smart_cond(pred, true_fn, false_fn, name)
56 return true_fn()
57 else:
---> 58 return false_fn()
59 else:
60 return control_flow_ops.cond(pred, true_fn=true_fn, false_fn=false_fn,
~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/keras/saving/saved_model/utils.py in <lambda>()
164 return control_flow_util.smart_cond(
165 training, lambda: replace_training_and_call(True),
--> 166 lambda: replace_training_and_call(False))
167
168 # Create arg spec for decorated function. If 'training' is not defined in the
~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/keras/saving/saved_model/utils.py in replace_training_and_call(training)
160 def replace_training_and_call(training):
161 set_training_arg(training, training_arg_index, args, kwargs)
--> 162 return wrapped_call(*args, **kwargs)
163
164 return control_flow_util.smart_cond(
~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/keras/saving/saved_model/save_impl.py in call_and_return_conditional_losses(*args, **kwargs)
631 def call_and_return_conditional_losses(*args, **kwargs):
632 """Returns layer (call_output, conditional losses) tuple."""
--> 633 call_output = layer_call(*args, **kwargs)
634 if version_utils.is_v1_layer_or_model(layer):
635 conditional_losses = layer.get_losses_for(
~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/transformers/models/distilbert/modeling_tf_distilbert.py in call(self, input_ids, attention_mask, head_mask, inputs_embeds, output_attentions, output_hidden_states, return_dict, training, **kwargs)
560 **kwargs,
561 ):
--> 562 inputs = input_processing(
563 func=self.call,
564 config=self.config,
~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/transformers/modeling_tf_utils.py in input_processing(func, config, input_ids, **kwargs)
418 output[tensor_name] = input
419 else:
--> 420 output[parameter_names[i]] = input
421 elif isinstance(input, allowed_types) or input is None:
422 output[parameter_names[i]] = input
IndexError: list index out of range
```
Attaching code to replicate
```
import os
import tensorflow as tf
from tensorflow import keras
from keras import backend as K
from transformers import TFDistilBertModel, DistilBertConfig
from focal_loss import SparseCategoricalFocalLoss
MAX_LENGTH = 256
LAYER_DROPOUT = 0.2
LEARNING_RATE = 5e-5
RANDOM_STATE = 42
NUM_CLASSES=3
# Compatible with tensorflow backend
def focal_loss(gamma=2., alpha=.25):
def focal_loss_fixed(y_true, y_pred):
pt_1 = tf.where(tf.equal(y_true, 1), y_pred, tf.ones_like(y_pred))
pt_0 = tf.where(tf.equal(y_true, 0), y_pred, tf.zeros_like(y_pred))
return -K.mean(alpha * K.pow(1. - pt_1, gamma) * K.log(pt_1+K.epsilon())) - K.mean((1 - alpha) * K.pow(pt_0, gamma) * K.log(1. - pt_0 + K.epsilon()))
return focal_loss_fixed
def build_model(transformer, max_length=MAX_LENGTH):
# Define weight initializer with a random seed to ensure reproducibility
weight_initializer = tf.keras.initializers.GlorotNormal(seed=RANDOM_STATE)
# Define input layers
input_ids_layer = tf.keras.layers.Input(shape=(max_length,),
name='input_ids',
dtype='int32')
input_attention_layer = tf.keras.layers.Input(shape=(max_length,),
name='attention_mask',
dtype='int32')
# input_attention_layer = tf.keras.layers.Input(shape=(max_length,),
# name='attention_mask',
# dtype='int32')
# Extract [CLS] embedding
# It is a tf.Tensor of shape (batch_size, sequence_length, hidden_size=768).
last_hidden_state = transformer([input_ids_layer, input_attention_layer])[0]
cls_token = last_hidden_state[:, 0, :]
## ##
## Define additional dropout and dense layers here ##
## ##
# Define a FCN layer
output = tf.keras.layers.Dense(NUM_CLASSES,
activation='softmax',
kernel_initializer=weight_initializer,
kernel_constraint=None,
bias_initializer='zeros'
)(cls_token)
# Define the model
# {"input_ids": input_ids}
model = tf.keras.Model([input_ids_layer, input_attention_layer], output)
# Compile the model
model.compile(tf.keras.optimizers.Adam(learning_rate=LEARNING_RATE),
loss=SparseCategoricalFocalLoss(gamma=2),
metrics=['accuracy'])
return model
def get_distil_bert_model(trainable=False, config=None):
if not config:
DISTILBERT_DROPOUT = 0.2
DISTILBERT_ATT_DROPOUT = 0.2
# Configure DistilBERT's initialization
config = DistilBertConfig(dropout=DISTILBERT_DROPOUT,
attention_dropout=DISTILBERT_ATT_DROPOUT,
output_hidden_states=False)
distilBert = TFDistilBertModel.from_pretrained('distilbert-base-uncased', config=config)
if trainable is False:
for layer in distilBert.layers:
layer.trainable = False
return distilBert
def get_compiled_model():
distilBert=get_distil_bert_model()
classification_model=build_model(distilBert)
return classification_model
model=get_compiled_model()
model.save("model_save_path")
```<|||||>> @hardianlawi Were you able to solve this issue? I am still facing this issue, can you help?
>
> Hi @Rocketknight1 i am still facing this issue. I am not able to save finetuned `TFDistilBertModel` model in keras with `model.save()` . Since the model is wrapped in `tf.keras.Model` I can't use `save_pretrained`. transformers version: 4.15 Platform: Ubuntu 20 Python version: 3.8 PyTorch version (GPU?): Tensorflow version (GPU?): 2.6.2 Using GPU in script?: Yes Using distributed or parallel set-up in script?: Yes
>
> Error:
>
> ```
> ---------------------------------------------------------------------------
> IndexError Traceback (most recent call last)
> /tmp/ipykernel_47167/2242234445.py in <module>
> ----> 1 model.save("save_path")
>
> ~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/keras/engine/training.py in save(self, filepath, overwrite, include_optimizer, save_format, signatures, options, save_traces)
> 2143 """
> 2144 # pylint: enable=line-too-long
> -> 2145 save.save_model(self, filepath, overwrite, include_optimizer, save_format,
> 2146 signatures, options, save_traces)
> 2147
>
> ~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/keras/saving/save.py in save_model(model, filepath, overwrite, include_optimizer, save_format, signatures, options, save_traces)
> 147 else:
> 148 with generic_utils.SharedObjectSavingScope():
> --> 149 saved_model_save.save(model, filepath, overwrite, include_optimizer,
> 150 signatures, options, save_traces)
> 151
>
> ~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/keras/saving/saved_model/save.py in save(model, filepath, overwrite, include_optimizer, signatures, options, save_traces)
> 88 with K.deprecated_internal_learning_phase_scope(0):
> 89 with utils.keras_option_scope(save_traces):
> ---> 90 saved_nodes, node_paths = save_lib.save_and_return_nodes(
> 91 model, filepath, signatures, options)
> 92
>
> ~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/tensorflow/python/saved_model/save.py in save_and_return_nodes(obj, export_dir, signatures, options, experimental_skip_checkpoint)
> 1226
> 1227 _, exported_graph, object_saver, asset_info, saved_nodes, node_paths = (
> -> 1228 _build_meta_graph(obj, signatures, options, meta_graph_def))
> 1229 saved_model.saved_model_schema_version = (
> 1230 pywrap_libexport.SAVED_MODEL_SCHEMA_VERSION)
>
> ~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/tensorflow/python/saved_model/save.py in _build_meta_graph(obj, signatures, options, meta_graph_def)
> 1397
> 1398 with save_context.save_context(options):
> -> 1399 return _build_meta_graph_impl(obj, signatures, options, meta_graph_def)
>
> ~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/tensorflow/python/saved_model/save.py in _build_meta_graph_impl(obj, signatures, options, meta_graph_def)
> 1333 checkpoint_graph_view = _AugmentedGraphView(obj)
> 1334 if signatures is None:
> -> 1335 signatures = signature_serialization.find_function_to_export(
> 1336 checkpoint_graph_view)
> 1337
>
> ~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/tensorflow/python/saved_model/signature_serialization.py in find_function_to_export(saveable_view)
> 97 # If the user did not specify signatures, check the root object for a function
> 98 # that can be made into a signature.
> ---> 99 functions = saveable_view.list_functions(saveable_view.root)
> 100 signature = functions.get(DEFAULT_SIGNATURE_ATTR, None)
> 101 if signature is not None:
>
> ~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/tensorflow/python/saved_model/save.py in list_functions(self, obj)
> 161 obj_functions = self._functions.get(obj, None)
> 162 if obj_functions is None:
> --> 163 obj_functions = obj._list_functions_for_serialization( # pylint: disable=protected-access
> 164 self._serialization_cache)
> 165 self._functions[obj] = obj_functions
>
> ~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/keras/engine/training.py in _list_functions_for_serialization(self, serialization_cache)
> 2810 self.predict_function = None
> 2811 self.train_tf_function = None
> -> 2812 functions = super(
> 2813 Model, self)._list_functions_for_serialization(serialization_cache)
> 2814 self.train_function = train_function
>
> ~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/keras/engine/base_layer.py in _list_functions_for_serialization(self, serialization_cache)
> 3083
> 3084 def _list_functions_for_serialization(self, serialization_cache):
> -> 3085 return (self._trackable_saved_model_saver
> 3086 .list_functions_for_serialization(serialization_cache))
> 3087
>
> ~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/keras/saving/saved_model/base_serialization.py in list_functions_for_serialization(self, serialization_cache)
> 91 return {}
> 92
> ---> 93 fns = self.functions_to_serialize(serialization_cache)
> 94
> 95 # The parent AutoTrackable class saves all user-defined tf.functions, and
>
> ~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/keras/saving/saved_model/layer_serialization.py in functions_to_serialize(self, serialization_cache)
> 71
> 72 def functions_to_serialize(self, serialization_cache):
> ---> 73 return (self._get_serialized_attributes(
> 74 serialization_cache).functions_to_serialize)
> 75
>
> ~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/keras/saving/saved_model/layer_serialization.py in _get_serialized_attributes(self, serialization_cache)
> 87 return serialized_attr
> 88
> ---> 89 object_dict, function_dict = self._get_serialized_attributes_internal(
> 90 serialization_cache)
> 91
>
> ~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/keras/saving/saved_model/model_serialization.py in _get_serialized_attributes_internal(self, serialization_cache)
> 54 # the ones serialized by Layer.
> 55 objects, functions = (
> ---> 56 super(ModelSavedModelSaver, self)._get_serialized_attributes_internal(
> 57 serialization_cache))
> 58 functions['_default_save_signature'] = default_signature
>
> ~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/keras/saving/saved_model/layer_serialization.py in _get_serialized_attributes_internal(self, serialization_cache)
> 97 """Returns dictionary of serialized attributes."""
> 98 objects = save_impl.wrap_layer_objects(self.obj, serialization_cache)
> ---> 99 functions = save_impl.wrap_layer_functions(self.obj, serialization_cache)
> 100 # Attribute validator requires that the default save signature is added to
> 101 # function dict, even if the value is None.
>
> ~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/keras/saving/saved_model/save_impl.py in wrap_layer_functions(layer, serialization_cache)
> 195 for fn in fns.values():
> 196 if fn is not None and not isinstance(fn, LayerCall):
> --> 197 fn.get_concrete_function()
> 198
> 199 # Restore overwritten functions and losses
>
> ~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/contextlib.py in __exit__(self, type, value, traceback)
> 118 if type is None:
> 119 try:
> --> 120 next(self.gen)
> 121 except StopIteration:
> 122 return False
>
> ~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/keras/saving/saved_model/save_impl.py in tracing_scope()
> 357 if training is not None:
> 358 with K.deprecated_internal_learning_phase_scope(training):
> --> 359 fn.get_concrete_function(*args, **kwargs)
> 360 else:
> 361 fn.get_concrete_function(*args, **kwargs)
>
> ~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py in get_concrete_function(self, *args, **kwargs)
> 1231 def get_concrete_function(self, *args, **kwargs):
> 1232 # Implements GenericFunction.get_concrete_function.
> -> 1233 concrete = self._get_concrete_function_garbage_collected(*args, **kwargs)
> 1234 concrete._garbage_collector.release() # pylint: disable=protected-access
> 1235 return concrete
>
> ~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py in _get_concrete_function_garbage_collected(self, *args, **kwargs)
> 1211 if self._stateful_fn is None:
> 1212 initializers = []
> -> 1213 self._initialize(args, kwargs, add_initializers_to=initializers)
> 1214 self._initialize_uninitialized_variables(initializers)
> 1215
>
> ~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py in _initialize(self, args, kwds, add_initializers_to)
> 757 self._graph_deleter = FunctionDeleter(self._lifted_initializer_graph)
> 758 self._concrete_stateful_fn = (
> --> 759 self._stateful_fn._get_concrete_function_internal_garbage_collected( # pylint: disable=protected-access
> 760 *args, **kwds))
> 761
>
> ~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/tensorflow/python/eager/function.py in _get_concrete_function_internal_garbage_collected(self, *args, **kwargs)
> 3064 args, kwargs = None, None
> 3065 with self._lock:
> -> 3066 graph_function, _ = self._maybe_define_function(args, kwargs)
> 3067 return graph_function
> 3068
>
> ~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/tensorflow/python/eager/function.py in _maybe_define_function(self, args, kwargs)
> 3461
> 3462 self._function_cache.missed.add(call_context_key)
> -> 3463 graph_function = self._create_graph_function(args, kwargs)
> 3464 self._function_cache.primary[cache_key] = graph_function
> 3465
>
> ~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/tensorflow/python/eager/function.py in _create_graph_function(self, args, kwargs, override_flat_arg_shapes)
> 3296 arg_names = base_arg_names + missing_arg_names
> 3297 graph_function = ConcreteFunction(
> -> 3298 func_graph_module.func_graph_from_py_func(
> 3299 self._name,
> 3300 self._python_function,
>
> ~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/tensorflow/python/framework/func_graph.py in func_graph_from_py_func(name, python_func, args, kwargs, signature, func_graph, autograph, autograph_options, add_control_dependencies, arg_names, op_return_value, collections, capture_by_value, override_flat_arg_shapes, acd_record_initial_resource_uses)
> 1005 _, original_func = tf_decorator.unwrap(python_func)
> 1006
> -> 1007 func_outputs = python_func(*func_args, **func_kwargs)
> 1008
> 1009 # invariant: `func_outputs` contains only Tensors, CompositeTensors,
>
> ~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py in wrapped_fn(*args, **kwds)
> 666 # the function a weak reference to itself to avoid a reference cycle.
> 667 with OptionalXlaContext(compile_with_xla):
> --> 668 out = weak_wrapped_fn().__wrapped__(*args, **kwds)
> 669 return out
> 670
>
> ~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/keras/saving/saved_model/save_impl.py in wrapper(*args, **kwargs)
> 570 with autocast_variable.enable_auto_cast_variables(
> 571 layer._compute_dtype_object): # pylint: disable=protected-access
> --> 572 ret = method(*args, **kwargs)
> 573 _restore_layer_losses(original_losses)
> 574 return ret
>
> ~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/keras/saving/saved_model/utils.py in wrap_with_training_arg(*args, **kwargs)
> 162 return wrapped_call(*args, **kwargs)
> 163
> --> 164 return control_flow_util.smart_cond(
> 165 training, lambda: replace_training_and_call(True),
> 166 lambda: replace_training_and_call(False))
>
> ~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/keras/utils/control_flow_util.py in smart_cond(pred, true_fn, false_fn, name)
> 103 return tf.cond(
> 104 pred, true_fn=true_fn, false_fn=false_fn, name=name)
> --> 105 return tf.__internal__.smart_cond.smart_cond(
> 106 pred, true_fn=true_fn, false_fn=false_fn, name=name)
> 107
>
> ~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/tensorflow/python/framework/smart_cond.py in smart_cond(pred, true_fn, false_fn, name)
> 56 return true_fn()
> 57 else:
> ---> 58 return false_fn()
> 59 else:
> 60 return control_flow_ops.cond(pred, true_fn=true_fn, false_fn=false_fn,
>
> ~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/keras/saving/saved_model/utils.py in <lambda>()
> 164 return control_flow_util.smart_cond(
> 165 training, lambda: replace_training_and_call(True),
> --> 166 lambda: replace_training_and_call(False))
> 167
> 168 # Create arg spec for decorated function. If 'training' is not defined in the
>
> ~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/keras/saving/saved_model/utils.py in replace_training_and_call(training)
> 160 def replace_training_and_call(training):
> 161 set_training_arg(training, training_arg_index, args, kwargs)
> --> 162 return wrapped_call(*args, **kwargs)
> 163
> 164 return control_flow_util.smart_cond(
>
> ~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/keras/saving/saved_model/save_impl.py in call(inputs, *args, **kwargs)
> 649 return layer.keras_api.__call__ # pylint: disable=protected-access
> 650 def call(inputs, *args, **kwargs):
> --> 651 return call_and_return_conditional_losses(inputs, *args, **kwargs)[0]
> 652 return _create_call_fn_decorator(layer, call)
> 653
>
> ~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/keras/saving/saved_model/save_impl.py in __call__(self, *args, **kwargs)
> 607 def __call__(self, *args, **kwargs):
> 608 self._maybe_trace(args, kwargs)
> --> 609 return self.wrapped_call(*args, **kwargs)
> 610
> 611 def get_concrete_function(self, *args, **kwargs):
>
> ~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py in __call__(self, *args, **kwds)
> 883
> 884 with OptionalXlaContext(self._jit_compile):
> --> 885 result = self._call(*args, **kwds)
> 886
> 887 new_tracing_count = self.experimental_get_tracing_count()
>
> ~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py in _call(self, *args, **kwds)
> 931 # This is the first call of __call__, so we have to initialize.
> 932 initializers = []
> --> 933 self._initialize(args, kwds, add_initializers_to=initializers)
> 934 finally:
> 935 # At this point we know that the initialization is complete (or less
>
> ~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py in _initialize(self, args, kwds, add_initializers_to)
> 757 self._graph_deleter = FunctionDeleter(self._lifted_initializer_graph)
> 758 self._concrete_stateful_fn = (
> --> 759 self._stateful_fn._get_concrete_function_internal_garbage_collected( # pylint: disable=protected-access
> 760 *args, **kwds))
> 761
>
> ~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/tensorflow/python/eager/function.py in _get_concrete_function_internal_garbage_collected(self, *args, **kwargs)
> 3064 args, kwargs = None, None
> 3065 with self._lock:
> -> 3066 graph_function, _ = self._maybe_define_function(args, kwargs)
> 3067 return graph_function
> 3068
>
> ~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/tensorflow/python/eager/function.py in _maybe_define_function(self, args, kwargs)
> 3461
> 3462 self._function_cache.missed.add(call_context_key)
> -> 3463 graph_function = self._create_graph_function(args, kwargs)
> 3464 self._function_cache.primary[cache_key] = graph_function
> 3465
>
> ~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/tensorflow/python/eager/function.py in _create_graph_function(self, args, kwargs, override_flat_arg_shapes)
> 3296 arg_names = base_arg_names + missing_arg_names
> 3297 graph_function = ConcreteFunction(
> -> 3298 func_graph_module.func_graph_from_py_func(
> 3299 self._name,
> 3300 self._python_function,
>
> ~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/tensorflow/python/framework/func_graph.py in func_graph_from_py_func(name, python_func, args, kwargs, signature, func_graph, autograph, autograph_options, add_control_dependencies, arg_names, op_return_value, collections, capture_by_value, override_flat_arg_shapes, acd_record_initial_resource_uses)
> 1005 _, original_func = tf_decorator.unwrap(python_func)
> 1006
> -> 1007 func_outputs = python_func(*func_args, **func_kwargs)
> 1008
> 1009 # invariant: `func_outputs` contains only Tensors, CompositeTensors,
>
> ~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py in wrapped_fn(*args, **kwds)
> 666 # the function a weak reference to itself to avoid a reference cycle.
> 667 with OptionalXlaContext(compile_with_xla):
> --> 668 out = weak_wrapped_fn().__wrapped__(*args, **kwds)
> 669 return out
> 670
>
> ~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/keras/saving/saved_model/save_impl.py in wrapper(*args, **kwargs)
> 570 with autocast_variable.enable_auto_cast_variables(
> 571 layer._compute_dtype_object): # pylint: disable=protected-access
> --> 572 ret = method(*args, **kwargs)
> 573 _restore_layer_losses(original_losses)
> 574 return ret
>
> ~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/keras/saving/saved_model/utils.py in wrap_with_training_arg(*args, **kwargs)
> 162 return wrapped_call(*args, **kwargs)
> 163
> --> 164 return control_flow_util.smart_cond(
> 165 training, lambda: replace_training_and_call(True),
> 166 lambda: replace_training_and_call(False))
>
> ~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/keras/utils/control_flow_util.py in smart_cond(pred, true_fn, false_fn, name)
> 103 return tf.cond(
> 104 pred, true_fn=true_fn, false_fn=false_fn, name=name)
> --> 105 return tf.__internal__.smart_cond.smart_cond(
> 106 pred, true_fn=true_fn, false_fn=false_fn, name=name)
> 107
>
> ~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/tensorflow/python/framework/smart_cond.py in smart_cond(pred, true_fn, false_fn, name)
> 56 return true_fn()
> 57 else:
> ---> 58 return false_fn()
> 59 else:
> 60 return control_flow_ops.cond(pred, true_fn=true_fn, false_fn=false_fn,
>
> ~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/keras/saving/saved_model/utils.py in <lambda>()
> 164 return control_flow_util.smart_cond(
> 165 training, lambda: replace_training_and_call(True),
> --> 166 lambda: replace_training_and_call(False))
> 167
> 168 # Create arg spec for decorated function. If 'training' is not defined in the
>
> ~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/keras/saving/saved_model/utils.py in replace_training_and_call(training)
> 160 def replace_training_and_call(training):
> 161 set_training_arg(training, training_arg_index, args, kwargs)
> --> 162 return wrapped_call(*args, **kwargs)
> 163
> 164 return control_flow_util.smart_cond(
>
> ~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/keras/saving/saved_model/save_impl.py in call_and_return_conditional_losses(*args, **kwargs)
> 631 def call_and_return_conditional_losses(*args, **kwargs):
> 632 """Returns layer (call_output, conditional losses) tuple."""
> --> 633 call_output = layer_call(*args, **kwargs)
> 634 if version_utils.is_v1_layer_or_model(layer):
> 635 conditional_losses = layer.get_losses_for(
>
> ~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/keras/engine/functional.py in call(self, inputs, training, mask)
> 412 a list of tensors if there are more than one outputs.
> 413 """
> --> 414 return self._run_internal_graph(
> 415 inputs, training=training, mask=mask)
> 416
>
> ~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/keras/engine/functional.py in _run_internal_graph(self, inputs, training, mask)
> 548
> 549 args, kwargs = node.map_arguments(tensor_dict)
> --> 550 outputs = node.layer(*args, **kwargs)
> 551
> 552 # Update tensor_dict.
>
> ~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/keras/engine/base_layer.py in __call__(self, *args, **kwargs)
> 1035 with autocast_variable.enable_auto_cast_variables(
> 1036 self._compute_dtype_object):
> -> 1037 outputs = call_fn(inputs, *args, **kwargs)
> 1038
> 1039 if self._activity_regularizer:
>
> ~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/keras/saving/saved_model/utils.py in return_outputs_and_add_losses(*args, **kwargs)
> 66 args = args[1:]
> 67
> ---> 68 outputs, losses = fn(*args, **kwargs)
> 69 layer.add_loss(losses, inputs=True)
> 70
>
> ~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/keras/saving/saved_model/utils.py in wrap_with_training_arg(*args, **kwargs)
> 162 return wrapped_call(*args, **kwargs)
> 163
> --> 164 return control_flow_util.smart_cond(
> 165 training, lambda: replace_training_and_call(True),
> 166 lambda: replace_training_and_call(False))
>
> ~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/keras/utils/control_flow_util.py in smart_cond(pred, true_fn, false_fn, name)
> 103 return tf.cond(
> 104 pred, true_fn=true_fn, false_fn=false_fn, name=name)
> --> 105 return tf.__internal__.smart_cond.smart_cond(
> 106 pred, true_fn=true_fn, false_fn=false_fn, name=name)
> 107
>
> ~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/tensorflow/python/framework/smart_cond.py in smart_cond(pred, true_fn, false_fn, name)
> 56 return true_fn()
> 57 else:
> ---> 58 return false_fn()
> 59 else:
> 60 return control_flow_ops.cond(pred, true_fn=true_fn, false_fn=false_fn,
>
> ~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/keras/saving/saved_model/utils.py in <lambda>()
> 164 return control_flow_util.smart_cond(
> 165 training, lambda: replace_training_and_call(True),
> --> 166 lambda: replace_training_and_call(False))
> 167
> 168 # Create arg spec for decorated function. If 'training' is not defined in the
>
> ~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/keras/saving/saved_model/utils.py in replace_training_and_call(training)
> 160 def replace_training_and_call(training):
> 161 set_training_arg(training, training_arg_index, args, kwargs)
> --> 162 return wrapped_call(*args, **kwargs)
> 163
> 164 return control_flow_util.smart_cond(
>
> ~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/keras/saving/saved_model/save_impl.py in __call__(self, *args, **kwargs)
> 607 def __call__(self, *args, **kwargs):
> 608 self._maybe_trace(args, kwargs)
> --> 609 return self.wrapped_call(*args, **kwargs)
> 610
> 611 def get_concrete_function(self, *args, **kwargs):
>
> ~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py in __call__(self, *args, **kwds)
> 883
> 884 with OptionalXlaContext(self._jit_compile):
> --> 885 result = self._call(*args, **kwds)
> 886
> 887 new_tracing_count = self.experimental_get_tracing_count()
>
> ~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py in _call(self, *args, **kwds)
> 922 # In this case we have not created variables on the first call. So we can
> 923 # run the first trace but we should fail if variables are created.
> --> 924 results = self._stateful_fn(*args, **kwds)
> 925 if self._created_variables and not ALLOW_DYNAMIC_VARIABLE_CREATION:
> 926 raise ValueError("Creating variables on a non-first call to a function"
>
> ~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/tensorflow/python/eager/function.py in __call__(self, *args, **kwargs)
> 3036 with self._lock:
> 3037 (graph_function,
> -> 3038 filtered_flat_args) = self._maybe_define_function(args, kwargs)
> 3039 return graph_function._call_flat(
> 3040 filtered_flat_args, captured_inputs=graph_function.captured_inputs) # pylint: disable=protected-access
>
> ~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/tensorflow/python/eager/function.py in _maybe_define_function(self, args, kwargs)
> 3461
> 3462 self._function_cache.missed.add(call_context_key)
> -> 3463 graph_function = self._create_graph_function(args, kwargs)
> 3464 self._function_cache.primary[cache_key] = graph_function
> 3465
>
> ~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/tensorflow/python/eager/function.py in _create_graph_function(self, args, kwargs, override_flat_arg_shapes)
> 3296 arg_names = base_arg_names + missing_arg_names
> 3297 graph_function = ConcreteFunction(
> -> 3298 func_graph_module.func_graph_from_py_func(
> 3299 self._name,
> 3300 self._python_function,
>
> ~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/tensorflow/python/framework/func_graph.py in func_graph_from_py_func(name, python_func, args, kwargs, signature, func_graph, autograph, autograph_options, add_control_dependencies, arg_names, op_return_value, collections, capture_by_value, override_flat_arg_shapes, acd_record_initial_resource_uses)
> 1005 _, original_func = tf_decorator.unwrap(python_func)
> 1006
> -> 1007 func_outputs = python_func(*func_args, **func_kwargs)
> 1008
> 1009 # invariant: `func_outputs` contains only Tensors, CompositeTensors,
>
> ~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py in wrapped_fn(*args, **kwds)
> 666 # the function a weak reference to itself to avoid a reference cycle.
> 667 with OptionalXlaContext(compile_with_xla):
> --> 668 out = weak_wrapped_fn().__wrapped__(*args, **kwds)
> 669 return out
> 670
>
> ~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/keras/saving/saved_model/save_impl.py in wrapper(*args, **kwargs)
> 570 with autocast_variable.enable_auto_cast_variables(
> 571 layer._compute_dtype_object): # pylint: disable=protected-access
> --> 572 ret = method(*args, **kwargs)
> 573 _restore_layer_losses(original_losses)
> 574 return ret
>
> ~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/keras/saving/saved_model/utils.py in wrap_with_training_arg(*args, **kwargs)
> 162 return wrapped_call(*args, **kwargs)
> 163
> --> 164 return control_flow_util.smart_cond(
> 165 training, lambda: replace_training_and_call(True),
> 166 lambda: replace_training_and_call(False))
>
> ~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/keras/utils/control_flow_util.py in smart_cond(pred, true_fn, false_fn, name)
> 103 return tf.cond(
> 104 pred, true_fn=true_fn, false_fn=false_fn, name=name)
> --> 105 return tf.__internal__.smart_cond.smart_cond(
> 106 pred, true_fn=true_fn, false_fn=false_fn, name=name)
> 107
>
> ~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/tensorflow/python/framework/smart_cond.py in smart_cond(pred, true_fn, false_fn, name)
> 56 return true_fn()
> 57 else:
> ---> 58 return false_fn()
> 59 else:
> 60 return control_flow_ops.cond(pred, true_fn=true_fn, false_fn=false_fn,
>
> ~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/keras/saving/saved_model/utils.py in <lambda>()
> 164 return control_flow_util.smart_cond(
> 165 training, lambda: replace_training_and_call(True),
> --> 166 lambda: replace_training_and_call(False))
> 167
> 168 # Create arg spec for decorated function. If 'training' is not defined in the
>
> ~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/keras/saving/saved_model/utils.py in replace_training_and_call(training)
> 160 def replace_training_and_call(training):
> 161 set_training_arg(training, training_arg_index, args, kwargs)
> --> 162 return wrapped_call(*args, **kwargs)
> 163
> 164 return control_flow_util.smart_cond(
>
> ~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/keras/saving/saved_model/save_impl.py in call_and_return_conditional_losses(*args, **kwargs)
> 631 def call_and_return_conditional_losses(*args, **kwargs):
> 632 """Returns layer (call_output, conditional losses) tuple."""
> --> 633 call_output = layer_call(*args, **kwargs)
> 634 if version_utils.is_v1_layer_or_model(layer):
> 635 conditional_losses = layer.get_losses_for(
>
> ~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/transformers/models/distilbert/modeling_tf_distilbert.py in call(self, input_ids, attention_mask, head_mask, inputs_embeds, output_attentions, output_hidden_states, return_dict, training, **kwargs)
> 560 **kwargs,
> 561 ):
> --> 562 inputs = input_processing(
> 563 func=self.call,
> 564 config=self.config,
>
> ~/SageMaker/custom-miniconda/miniconda/envs/custom_python_38/lib/python3.8/site-packages/transformers/modeling_tf_utils.py in input_processing(func, config, input_ids, **kwargs)
> 418 output[tensor_name] = input
> 419 else:
> --> 420 output[parameter_names[i]] = input
> 421 elif isinstance(input, allowed_types) or input is None:
> 422 output[parameter_names[i]] = input
>
> IndexError: list index out of range
> ```
>
> Attaching code to replicate
>
> ```
> import os
>
> import tensorflow as tf
> from tensorflow import keras
> from keras import backend as K
> from transformers import TFDistilBertModel, DistilBertConfig
> from focal_loss import SparseCategoricalFocalLoss
>
> MAX_LENGTH = 256
> LAYER_DROPOUT = 0.2
> LEARNING_RATE = 5e-5
> RANDOM_STATE = 42
> NUM_CLASSES=3
>
>
> # Compatible with tensorflow backend
>
> def focal_loss(gamma=2., alpha=.25):
> def focal_loss_fixed(y_true, y_pred):
> pt_1 = tf.where(tf.equal(y_true, 1), y_pred, tf.ones_like(y_pred))
> pt_0 = tf.where(tf.equal(y_true, 0), y_pred, tf.zeros_like(y_pred))
> return -K.mean(alpha * K.pow(1. - pt_1, gamma) * K.log(pt_1+K.epsilon())) - K.mean((1 - alpha) * K.pow(pt_0, gamma) * K.log(1. - pt_0 + K.epsilon()))
> return focal_loss_fixed
>
> def build_model(transformer, max_length=MAX_LENGTH):
>
> # Define weight initializer with a random seed to ensure reproducibility
> weight_initializer = tf.keras.initializers.GlorotNormal(seed=RANDOM_STATE)
>
> # Define input layers
> input_ids_layer = tf.keras.layers.Input(shape=(max_length,),
> name='input_ids',
> dtype='int32')
> input_attention_layer = tf.keras.layers.Input(shape=(max_length,),
> name='attention_mask',
> dtype='int32')
> # input_attention_layer = tf.keras.layers.Input(shape=(max_length,),
> # name='attention_mask',
> # dtype='int32')
>
> # Extract [CLS] embedding
> # It is a tf.Tensor of shape (batch_size, sequence_length, hidden_size=768).
> last_hidden_state = transformer([input_ids_layer, input_attention_layer])[0]
> cls_token = last_hidden_state[:, 0, :]
>
> ## ##
> ## Define additional dropout and dense layers here ##
> ## ##
>
> # Define a FCN layer
> output = tf.keras.layers.Dense(NUM_CLASSES,
> activation='softmax',
> kernel_initializer=weight_initializer,
> kernel_constraint=None,
> bias_initializer='zeros'
> )(cls_token)
>
> # Define the model
> # {"input_ids": input_ids}
> model = tf.keras.Model([input_ids_layer, input_attention_layer], output)
>
> # Compile the model
> model.compile(tf.keras.optimizers.Adam(learning_rate=LEARNING_RATE),
> loss=SparseCategoricalFocalLoss(gamma=2),
> metrics=['accuracy'])
>
> return model
>
>
>
>
> def get_distil_bert_model(trainable=False, config=None):
> if not config:
> DISTILBERT_DROPOUT = 0.2
> DISTILBERT_ATT_DROPOUT = 0.2
>
> # Configure DistilBERT's initialization
> config = DistilBertConfig(dropout=DISTILBERT_DROPOUT,
> attention_dropout=DISTILBERT_ATT_DROPOUT,
> output_hidden_states=False)
>
> distilBert = TFDistilBertModel.from_pretrained('distilbert-base-uncased', config=config)
>
> if trainable is False:
> for layer in distilBert.layers:
> layer.trainable = False
>
> return distilBert
>
> def get_compiled_model():
> distilBert=get_distil_bert_model()
> classification_model=build_model(distilBert)
> return classification_model
>
> model=get_compiled_model()
> model.save("model_save_path")
> ```
I have the same problem. How can I solve it?
<|||||>@kapilkd13 @Zjq9409 I completely switched to Pytorch and Pytorch Lightning since they made my life easier :') |
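For readers hitting the same `IndexError` on `model.save()`: below is a minimal sketch of a commonly suggested workaround (an assumption based on similar reports, not a verified fix for the exact setup above): pass the inputs to the transformer layer as a dict so the argument names stay explicit, and/or persist weights only instead of the full SavedModel.
```
import tensorflow as tf
from transformers import TFDistilBertModel

distilbert = TFDistilBertModel.from_pretrained("distilbert-base-uncased")

input_ids = tf.keras.layers.Input(shape=(256,), name="input_ids", dtype="int32")
attention_mask = tf.keras.layers.Input(shape=(256,), name="attention_mask", dtype="int32")

# A dict keeps the keyword names attached to the tensors, which avoids the positional
# re-mapping done in `input_processing` when the layer is re-traced for saving.
last_hidden_state = distilbert({"input_ids": input_ids, "attention_mask": attention_mask})[0]
output = tf.keras.layers.Dense(3, activation="softmax")(last_hidden_state[:, 0, :])
model = tf.keras.Model([input_ids, attention_mask], output)

# Weight-only saving sidesteps the SavedModel tracing entirely.
model.save_weights("model_weights.h5")
```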
transformers | 14,151 | closed | Add vision_encoder_decoder to models/__init__.py | # What does this PR do?
The recently added `vision_encoder_decoder` wasn't added to `models/__init__.py`. This had some consequences: for example, `utils.check_repo.get_model_modules()` missed `vision_encoder_decoder`, and therefore models like `VisionEncoderDecoderModel` (and the upcoming Flax/TF versions) would pass `check_all_models_are_tested` even without providing a test file.
This (one-line) PR fixes this issue.
## Who can review?
@sgugger @NielsRogge | 10-25-2021 21:14:20 | 10-25-2021 21:14:20 | Yes, it's easy. Can I fix it in the same PR? Is that OK?<|||||>Yes, we can't merge it otherwise ;-)<|||||>Done.
P.S.: I don't know the exact reason why we put the encoder-decoder model family in the list `_ignore_modules`, but I saw that `modeling_encoder_decoder` was already there since the first version `6ba540b7` of `check_repo.py`.
https://github.com/huggingface/transformers/blob/6ba540b7475f095d591b4766cac897007b1d5db0/utils/check_repo.py#L108-L112
The error reported is
```
test_modeling_vision_encoder_decoder.py should define `all_model_classes` to apply common tests to the models it tests. If this intentional, add the test filename to `TEST_FILES_WITH_NO_COMMON_TESTS` in the file `utils/check_repo.py`.
```
I think `TEST_FILES_WITH_NO_COMMON_TESTS` is the place to address this (?), but maybe I don't see the full picture here.<|||||>No, that's the right way. |
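For context, here is a tiny illustration (plain introspection, not the actual `check_repo.py` code) of why a module missing from `models/__init__.py` escapes the repo checks: the checks walk the attributes of `transformers.models`, so anything not imported there is never seen.
```
import transformers.models as models

discovered = [name for name in dir(models) if not name.startswith("_")]
# Before this PR the submodule was absent from the package namespace,
# so the consistency checks could not flag its missing test file.
print("vision_encoder_decoder" in discovered)
```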
transformers | 14,150 | closed | Typo on ner accelerate example code | # What does this PR do?
A very simple typo fix in the NER accelerate example code :)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
transformers | 14,149 | closed | Examples: Tokenize only max_samples used for train/eval/prediction | # π Feature request
If I understand what's going on, when using the flags for the max number of samples to be used for train/eval/prediction, the tokenizer is first called on the whole dataset. This operation seems quite involved and takes minutes for a decent-sized dataset. Is it possible to tokenize only the max samples defined by the corresponding parameters, to save time?
## Motivation
I use the max sample flags to debug my code. It would be nice to just tokenize the samples that will be used instead of the whole dataset.
| 10-25-2021 18:28:53 | 10-25-2021 18:28:53 | I think this can be controlled from user level code. For example, in this line:
https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py#L394
modify the function to receive an argument on how many examples to tokenize. I'm guesstimating that the max samples param was recently introduced and older examples don't use it. I'll change the title of the issue to better reflect this. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Sorry @ioana-blue, I had missed this issue - let me ping @sgugger for advice, but please note that he is off right now and will answer only when he's back. Thanks!<|||||>No worries, I think this can be controlled from the user space and I did so in my most recent code. I think the examples could be fixed as well as - at least - I use them for inspiration and they are great :) <|||||>You can just move the tokenization line after the datasets are truncated. It will take three lines instead of one since by then there will be three separate datasets instead of one, which is the main reason it's not done in the examples as they are, or they will become less readable.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 14,148 | closed | Add TFVisionEncoderDecoderModel | # What does this PR do?
To make Vision-Encoder-Text-Decoder family complete by adding `TFVisionEncoderDecoderModel`.
To complete this PR, it requires to wait #13778 being merged to master (then rebase)
(And if we want to include a real integration test using the recent image-captioning ViT + GPT2 model, need to wait #14038 too)
| 10-25-2021 16:36:25 | 10-25-2021 16:36:25 | @sgugger, thank you for your review, I have addressed them.
I am impressed by your ability to spot the `fits in one line` places.
I feel that sometimes `make style` can reformat in this case, but doesn't work well in a few places, as happened in this PR.
I dive a bit deeper, and found:
this one won't be reformatted by `make style`
```
def prepare_inputs_for_generation(
self,
decoder_input_ids,
past,
use_cache=None,
**kwargs,
):
```
but the following one will work well.
```
def prepare_inputs_for_generation(
self,
decoder_input_ids,
past,
use_cache=None,
**kwargs
):
```
The difference is the ending comma after the last argument. Is this a bug? I can open an issue if so.
Maybe I should just remove all the commas after the last arguments in my PR.<|||||>> This is a great addition @ydshieh! Thanks a lot for the contribution.
>
> From my side it would be great if we could:
>
> * remove all pytorch specific changes that are not needed to get the TF version working (I'll tackle this in a future PR :-) )
>
> * add one slow test that ensures that the model works correctly
>
>
> Thanks a bunch!
Hi @patrickvonplaten
- OK for removing the changes on PT code.
- For the test, I think I forgot to do the same as I have done for PT/Flax
https://github.com/huggingface/transformers/blob/8395f14de6068012787d83989c3627c3df6a252b/tests/test_modeling_vision_encoder_decoder.py#L666
https://github.com/huggingface/transformers/blob/8395f14de6068012787d83989c3627c3df6a252b/tests/test_modeling_flax_vision_encoder_decoder.py#L467
I will add it to TF test :)<|||||>I removed the changes on PT code. I added
```
class TFViT2GPT2ModelIntegrationTest(unittest.TestCase):
@slow
def test_inference_coco_en(self):
```
here
https://github.com/huggingface/transformers/blob/a179c4a5087e48127914e26e9895a455b42bac0d/tests/test_modeling_tf_vision_encoder_decoder.py#L775<|||||>Hi, @patrickvonplaten @Rocketknight1 @NielsRogge
I removed the changes on PT code. I also added
```
class TFViT2GPT2ModelIntegrationTest(unittest.TestCase):
@slow
def test_inference_coco_en(self):
```
here
https://github.com/huggingface/transformers/blob/b3255863540fac38d49e5067c14fef7ae66dc31f/tests/test_modeling_tf_vision_encoder_decoder.py#L769
This PR is ready for review when you have the time :-)<|||||>> * auto.rst no?
Yes. I added the (empty) file to commit by mistake during git rebase/merge.<|||||>@NielsRogge @Rocketknight1 - could you guys take a look here as well?<|||||>Applied @sgugger review suggestions. Failed tests are unrelated.<|||||>Thanks again for all your work on this! |
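A minimal usage sketch of the new class (checkpoint names are chosen for illustration, and the call mirrors the PyTorch `VisionEncoderDecoderModel` API rather than the exact test added in this PR):
```
import numpy as np
from transformers import TFVisionEncoderDecoderModel, ViTFeatureExtractor, GPT2Tokenizer

# Combine a ViT encoder with a GPT-2 decoder; the cross-attention weights are newly initialized.
model = TFVisionEncoderDecoderModel.from_encoder_decoder_pretrained(
    "google/vit-base-patch16-224-in21k", "gpt2"
)
feature_extractor = ViTFeatureExtractor.from_pretrained("google/vit-base-patch16-224-in21k")
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")

image = np.random.randint(0, 256, (224, 224, 3), dtype=np.uint8)  # dummy image for illustration
pixel_values = feature_extractor(images=image, return_tensors="tf").pixel_values
decoder_input_ids = tokenizer("a photo of", return_tensors="tf").input_ids

outputs = model(pixel_values=pixel_values, decoder_input_ids=decoder_input_ids)
print(outputs.logits.shape)  # (batch, decoder sequence length, decoder vocab size)
```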
transformers | 14,147 | closed | OnnxRuntime error " The input tensor cannot be reshaped to the requested shape." | Hello! I have quantised the BART model for abstractive summarisation using transformers.onnx, as suggested at https://huggingface.co/transformers/serialization.html. When I run the session with my inputs to get the outputs back, I get a shape mismatch error. I have checked that the input shapes of my input text are in accordance with
https://github.com/huggingface/transformers/blob/master/src/transformers/models/bart/configuration_bart.py
Here is the error message I am receiving.
`onnxruntime.capi.onnxruntime_pybind11_state.RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Non-zero status code returned while running Reshape node. Name:'Reshape_109' Status Message: /Users/runner/work/1/s/onnxruntime/core/providers/cpu/tensor/reshape_helper.h:42 onnxruntime::ReshapeHelper::ReshapeHelper(const onnxruntime::TensorShape &, std::vector<int64_t> &, bool) gsl::narrow_cast<int64_t>(input_shape.Size()) == size was false. The input tensor cannot be reshaped to the requested shape. Input shape:{2}, requested shape:{1,1}`
## Environment info
- `transformers` version: 4.11.3
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.9.6
- PyTorch version (GPU?): 1.9.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: False
- Using distributed or parallel set-up in script?: False
Models:
@patil-suraj
@patrickvonplaten
## Information
Model I am using (BART, i.e. facebook/bart-large-cnn ...):
| 10-25-2021 15:44:54 | 10-25-2021 15:44:54 | Not sure about this error, but we have added an experimental bart onnx generation example [here](https://github.com/huggingface/transformers/tree/master/examples/onnx/pytorch/translation), maybe that could help :) <|||||>Thank you, That would be of great help!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
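For anyone debugging a similar reshape error, here is a hedged sketch of feeding an exported graph only the inputs it declares, with the expected int64 dtype and (batch, sequence) rank (the export path is an assumption; the ONNX Runtime calls themselves are standard):
```
import numpy as np
import onnxruntime as ort
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large-cnn")
session = ort.InferenceSession("onnx/model.onnx")  # file produced by `python -m transformers.onnx`

encoded = tokenizer("Some long article to summarise ...", return_tensors="np")

# Only pass tensors the graph actually declares, cast to int64, keeping the 2D shape.
expected = {i.name for i in session.get_inputs()}
ort_inputs = {k: v.astype(np.int64) for k, v in encoded.items() if k in expected}
outputs = session.run(None, ort_inputs)
print([o.shape for o in outputs])
```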
transformers | 14,146 | closed | Questions when training language models from scratch | I'm following the guide here (https://github.com/huggingface/blog/blob/master/how-to-train.md, https://huggingface.co/blog/how-to-train) to train a RoBERTa-like model from scratch (with my own tokenizer and dataset).
However, when I run **run_mlm.py** (https://github.com/huggingface/transformers/blob/master/examples/pytorch/language-modeling/run_mlm.py) to train my model on the masked language modeling task, the following messages appear:
```
All model checkpoint weights were used when initializing RobertaForMaskedLM.
All the weights of RobertaForMaskedLM were initialized from the model checkpoint at roberta-base.
If your task is similar to the task the model of the checkpoint was trained on, you can already use RobertaForMaskedLM for predictions without further training.
```
And here's my content of **config.json**:
```
{
"_name_or_path": "roberta-base",
"architectures": [
"RobertaForMaskedLM"
],
"attention_probs_dropout_prob": 0.1,
"bos_token_id": 0,
"classifier_dropout": null,
"eos_token_id": 2,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"initializer_range": 0.02,
"intermediate_size": 3072,
"layer_norm_eps": 1e-05,
"max_position_embeddings": 514,
"model_type": "roberta",
"num_attention_heads": 12,
"num_hidden_layers": 12,
"pad_token_id": 1,
"position_embedding_type": "absolute",
"transformers_version": "4.12.0.dev0",
"type_vocab_size": 1,
"use_cache": true,
"vocab_size": 50265
}
```
I'm wondering: does this mean that I'm "training from scratch" with **the pretrained weights** of RoBERTa? And if it is training from the pretrained weights, is there a way to use randomly initialized weights rather than the pretrained ones?
Thanks a lot.
| 10-25-2021 15:29:04 | 10-25-2021 15:29:04 | For those considering the same question, you could refer to [this thread](https://stackoverflow.com/questions/69720454/questions-when-training-language-models-from-scratch-with-huggingface/69721327#69721327) for more explanation. |
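For readers with the same question, a minimal sketch of initializing a RoBERTa MLM model with random weights instead of the pretrained checkpoint (the config values mirror the `config.json` above; the tokenizer path is a placeholder, and `run_mlm.py` reaches the same code path when `--model_name_or_path` is omitted in favour of `--config_name`/`--tokenizer_name`, as far as I can tell):
```
from transformers import RobertaConfig, RobertaForMaskedLM, RobertaTokenizerFast

tokenizer = RobertaTokenizerFast.from_pretrained("./my_tokenizer")  # your own tokenizer directory
config = RobertaConfig(
    vocab_size=tokenizer.vocab_size,
    max_position_embeddings=514,
    type_vocab_size=1,
)
model = RobertaForMaskedLM(config)  # randomly initialized, no pretrained weights are loaded
print(sum(p.numel() for p in model.parameters()))
```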
transformers | 14,145 | closed | Add Camembert to exportable models with ONNX | This PR adds Camembert to the family of exportable models with ONNX (essentially the same config as in RoBERTa). | 10-25-2021 15:19:29 | 10-25-2021 15:19:29 | I believe that this feature has already been merged in #14059 |
transformers | 14,144 | closed | Ner naming fixes 0.8 | # What does this PR do?
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 10-25-2021 14:36:48 | 10-25-2021 14:36:48 | |
transformers | 14,143 | closed | Fix AttributeError: 'MMBTConfig' object has no attribute 'use_return_dict' | # What does this PR do?
Fixes ```AttributeError: 'MMBTConfig' object has no attribute 'use_return_dict'```
Attributes are copied from `transformers.PreTrainedConfig.__dict__` which does not contain object properties and `use_return_dict` is an object property that's not in the `__dict__`.
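A small, plain-Python illustration of the behaviour described above (properties are defined on the class, so they never show up in an instance's `__dict__`):
```
class Config:
    def __init__(self):
        self.return_dict = True  # a plain attribute, stored in the instance __dict__

    @property
    def use_return_dict(self):  # a property, stored on the class as a descriptor
        return self.return_dict

cfg = Config()
print("return_dict" in cfg.__dict__)      # True
print("use_return_dict" in cfg.__dict__)  # False, so copying __dict__ silently drops it
print(cfg.use_return_dict)                # True, only reachable through attribute access
```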
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 10-25-2021 14:28:37 | 10-25-2021 14:28:37 | The MMBT config is not a `PretrainedConfig`, so there is no point trying to use it as such (the MMBT model in general is completely broken and the original authors did not help/release another opensource version of their models we could use as a guide).<|||||>Thanks @sgugger
Would you please provide a link to this implementation? I'd be interested in working on this.<|||||>There is none that I know of; that is the problem.<|||||>Ah! Any chance we can describe what the issues are with the current implementation?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 14,142 | closed | Replace assertions with ValueError exception | # What does this PR do?
Updated the masked-language modeling examples in PyTorch to follow the convention defined by #12789.
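In a nutshell, this is the pattern being applied (illustrative lines, not the exact diff from the example scripts):
```
extension = "csv"  # example value; in the scripts this comes from the data file name

# Before: an assertion, which disappears under `python -O` and gives a terser error.
# assert extension in ["csv", "json", "txt"], "`train_file` should be a csv, json or txt file."

# After: an explicit exception, following the convention from #12789.
if extension not in ["csv", "json", "txt"]:
    raise ValueError("`train_file` should be a csv, json or txt file.")
```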
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 10-25-2021 13:09:23 | 10-25-2021 13:09:23 | |
transformers | 14,141 | closed | Enable DefaultDataCollator class | We have a `DefaultDataCollator` class that was supposed to call the `default_data_collator` functions, but with an object-like API that matched the other `DataCollator` classes. This was never actually enabled, oops. | 10-25-2021 13:06:44 | 10-25-2021 13:06:44 | |
transformers | 14,140 | closed | Remove unneeded `to_tensor()` in TF inline example | Our TF code used to need an extra call to convert RaggedTensor to Tensor, but our Datasets TF formatter is smarter now and won't return RaggedTensors for Tensors that are actually not ragged. Thanks to @lewtun for spotting it! | 10-25-2021 13:02:28 | 10-25-2021 13:02:28 | |
transformers | 14,139 | closed | [Design proposal] Fix EncoderDecoderModel classes to be more like BART and T5 | # What does this PR do?
TLDR: recently, we've added some new classes called `SpeechEncoderDecoderModel` (#13186) and `VisionEncoderDecoderModel` (#13874). These are very similar to the existing `EncoderDecoderModel` class, allowing to combine any speech/vision encoder with any text decoder. However, there's currently an issue in the sense that we have 2 flavors of decoder models in the library, namely:
* "shifting afterwards": those that shift the logits inside them, when calculating the loss. Examples are `BertLMHeadModel`, `BertGeneration`, `BigbirdForCausalLM`, `GPT2LMHeadModel`, `GPTNeoForCausalLM`, `RobertaForCausalLM`. For these models, the `input_ids` you provide should be equal to the `labels`, and the shifting occurs as follows:
```
loss = None
if labels is not None:
# Shift so that tokens < n predict n
shift_logits = lm_logits[..., :-1, :].contiguous()
shift_labels = labels[..., 1:].contiguous()
# Flatten the tokens
loss_fct = CrossEntropyLoss()
loss = loss_fct(shift_logits.view(-1, shift_logits.size(-1)), shift_labels.view(-1))
```
* "shifting before": those that don't shift the logits, but expect this shifting to already have happened before providing the `input_ids` to the decoder. Examples are `BartForCausalLM`, `PegasusForCausalLM`, and the recently added `Speech2Text2ForCausalLM` and `TrOCRForCausalLM`. Here, the loss calculation is simply:
```
loss = None
if labels is not None:
loss_fct = CrossEntropyLoss()
loss = loss_fct(logits.view(-1, self.config.vocab_size), labels.view(-1))
```
## Proposal
In any case, we would like the `EncoderDecoderModel` classes to automatically create the `decoder_input_ids` based on the `labels` provided by the user, similar to how this is done for "vanilla" seq2seq models such as BART and T5. However, in order to support both flavors of decoders in the EncoderDecoderModel classes, there are 2 options:
* option 1: work with an if-else statement that creates the `decoder_input_ids` based on the "type" of decoder (type 1 would simply require setting `decoder_input_ids = labels`, whereas type 2 would require making the `decoder_input_ids` a shifted version of the `labels`). This way, the loss can be computed inside each decoder individually. The disadvantage is that one would need to keep some sort of exhaustive list that enumerates the model classes of the 2 flavors of decoders.
* option 2: don't compute the loss inside the decoders themselves, but rather in the `EncoderDecoderModel` classes themselves. In that way, one can implement the `EncoderDecoderModel` classes similar to the "vanilla" models T5 and BART, which means: (1) first create the `decoder_input_ids` based on the labels provided by the user (i.e. shifting), (2) next forward these through the decoder, and then (3) use the `logits` of the decoder to compute the loss.
Both would allow us to use the `DataCollatorForSeq2Seq`, and in turn the `Seq2SeqTrainer`, for the EncoderDecoderModel classes. This enables us to make demo notebooks for fine-tuning these model classes using the `Seq2SeqTrainer`.
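A rough sketch of the mechanics behind option 2 (the helper mirrors BART's `shift_tokens_right`; the exact implementation in this PR may differ):
```
import torch
from torch.nn import CrossEntropyLoss

def shift_tokens_right(labels, pad_token_id, decoder_start_token_id):
    # Prepend the decoder start token and drop the last label position.
    decoder_input_ids = labels.new_zeros(labels.shape)
    decoder_input_ids[:, 1:] = labels[:, :-1].clone()
    decoder_input_ids[:, 0] = decoder_start_token_id
    decoder_input_ids.masked_fill_(decoder_input_ids == -100, pad_token_id)
    return decoder_input_ids

labels = torch.tensor([[15, 27, 33, 2, -100]])  # -100 marks positions ignored by the loss
decoder_input_ids = shift_tokens_right(labels, pad_token_id=1, decoder_start_token_id=0)

vocab_size = 100
logits = torch.randn(1, labels.shape[1], vocab_size)  # stand-in for decoder_outputs.logits
loss = CrossEntropyLoss()(logits.reshape(-1, vocab_size), labels.reshape(-1))
print(decoder_input_ids, loss)
```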
This draft PR implements option 2. | 10-25-2021 10:36:07 | 10-25-2021 10:36:07 | As discussed offline with @NielsRogge, I'm also in favor of option 2) here because:
- It allows for a nicer, unified API (we can wrap away the complexity of `"shifting before"` and `"shifting afterwards"`) for the user
- Also, it allows the loss computation for `Bert2Bert` to be exactly the same as for `BartForConditionalGeneration`. Before this PR it was not exactly the same, since Bert2Bert "cut" the first and last input_ids respectively for `decoder_input_ids` and `labels`.
- It is also arguably more readable, as the user doesn't have to look into `modeling_bert.py` or `modeling_bart.py` to see how the loss is computed exactly
The cons of this approach are:
- It's slightly backwards breaking in the sense that with the new design the labels and input_ids are not cut anymore but shifted (I think that's fine though since it's training that is changed here and not inference). Also, it's a good opportunity to update this blog post: https://huggingface.co/blog/warm-starting-encoder-decoder#warm-starting-the-encoder-decoder-model a bit to show the new feature
- We assume here that all `EncoderDecoderModel` combinations always use the "cross entropy / next token language model loss". So if a new model is published that uses a new special loss we probably would have to add some if else statements to the `EncoderDecoderModel`. In a sense we do lose flexibility since we can't define individual loss computations in the specific models anymore...However IMO a unified framework is more important here and so far all models that we know do use the "cross entropy / next token language model loss"
=> So overall I'm in favor of option 2) here.
Once we have merged this PR, @NielsRogge and I could work on making some nice notebooks and/or blog posts on how to fine-tune:
- `EncoderDecoderModel`
- `VisionEncoderDecoderModel`
- `SpeechEncoderDecoderModel`
and give an overview of all the possible tasks that can be covered with it (Image Captioning, Speech Translation, ...).
Also cc @ydshieh - If you're interested we could work on a nice blog post (to be added here: https://huggingface.co/blog) on how to leverage `VisionEncoderDecoder` for image captioning if you're interested :-)<|||||>
> Also cc @ydshieh - If you're interested we could work on a nice blog post (to be added here: https://huggingface.co/blog) on how to leverage `VisionEncoderDecoder` for image captioning if you're interested :-)
Yes, I am interested :) Thank you for the opportunity! <|||||>A minor question, do we still need to pass `labels` to the decoder here (`labels=labels, `)?
https://github.com/huggingface/transformers/blob/6880957810143e08c2e773639cc5a74841033115/src/transformers/models/encoder_decoder/modeling_encoder_decoder.py#L461-L475
(I also like option 2, but would like to check if it works fine for TF models too :). I didn't check the recent change @Rocketknight1 made about loss computation for TF models.)<|||||>Great point, there's no need to pass `labels` to the decoder anymore if we go for option 2, as the loss is computed outside the decoder.
And yeah we need to make sure this also works for the other frameworks. Will ask @Rocketknight1 for his opinion.<|||||>@ydshieh The change to TF loss computation is mostly cosmetic/UX. It just changes `compile()` so that if the user doesn't specify a loss, we use a `dummy_loss` loss on the model's `loss` output, and the change in `train_step` just patches things so the loss values in the progress bar look okay. So hopefully that shouldn't cause any problems for any of this! |
transformers | 14,138 | closed | wrong cache_dir is used when tokenizer is trying to infer config_tokenizer_class | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.10.3
- Platform: Linux-5.4.0-88-generic-x86_64-with-glibc2.10
- Python version: 3.8.8
- PyTorch version (GPU?): 1.9.0a0+df837d0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@LysandreJik
## Information
Model I am using (Bert, XLNet ...): FlaubertTokenizer
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [X] my own modified scripts: (give details below)
I am only loading the tokenizer, and not even using it afterwards.
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
Two different cache directories are used when calling
```
FlaubertTokenizer.from_pretrained("flaubert/flaubert_base_uncased", cache_dir="my_cache_dir")
```
Most of the tokenizer files are loaded to `my_cache_dir`, as expected. However, there is one more model config file, which is downloaded to `/.cache/`, even though an explicit `cache_dir` is passed to `from_pretrained`. In my docker setup, I don't have permissions to write to `/.cache`, so this rightfully results in the following warning:
```
The tokenizer class you load from this checkpoint is not the same type as the class this function is called from. It may result in unexpected tokenization.
The tokenizer class you load from this checkpoint is 'BertTokenizer'.
The class this function is called from is 'FlaubertTokenizer'.
```
I believe this happens due to the following piece of code in [tokenization_utils](https://huggingface.co/transformers/_modules/transformers/tokenization_utils_base.html). In particular, the `from_pretrained()` call doesn't receive the `cache_dir`.
```
if config_tokenizer_class is None:
from .models.auto.configuration_auto import AutoConfig # tests_ignore
# Second attempt. If we have not yet found tokenizer_class, let's try to use the config.
try:
config = AutoConfig.from_pretrained(pretrained_model_name_or_path, use_auth_token=use_auth_token)
config_tokenizer_class = config.tokenizer_class
```
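A sketch of the kind of fix this points to, namely forwarding the caller's `cache_dir` (whether `cache_dir` is directly in scope at that exact spot in `tokenization_utils_base.py` is an assumption, and the merged fix may look different):
```
config = AutoConfig.from_pretrained(
    pretrained_model_name_or_path,
    cache_dir=cache_dir,  # forward the user-provided cache directory
    use_auth_token=use_auth_token,
)
config_tokenizer_class = config.tokenizer_class
```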
## To reproduce
```
from transformers import FlaubertTokenizer
FlaubertTokenizer.from_pretrained("flaubert/flaubert_base_uncased", cache_dir="./my_cache_dir")
```
Then check that some files are downloaded to `./my_cache_dir`, while some other cache files are downloaded to another directory, which I believe can be found with the `pip cache dir` shell command (not sure here).
## Expected behavior
I expect all downloaded files to be placed in the one provided `cache_dir`.
There are a couple of workarounds for my case. The warning doesn't seem to be crucial, so I can just ignore it (which I don't want). Also, I can set the `TRANSFORMERS_CACHE` environment variable.
But I still doubt that the current behaviour is the expected one.
| 10-25-2021 09:53:31 | 10-25-2021 09:53:31 | Thanks for opening an issue, this seems correct indeed! Would you like to open a PR to fix this?<|||||>Thanks for opening an issue and solving the problem, @vmaryasin! |
transformers | 14,137 | closed | ImportError: cannot import name 'TrOCRProcessor' | ## Environment info
- `transformers` version: 4.11.3
- Platform: Linux-5.3.0-51-generic-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.7.0+cu101 (True)
- Tensorflow version (GPU?): 2.3.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@LysandreJik please help me reach the correct contributor. Tagging @NielsRogge since it is related to vision.
## Information
Model I am using: TrOCR
The problem arises when using:
* [x] the official example scripts: (give details below)
I am trying to run the code sample given on the HuggingFace website [here](https://huggingface.co/microsoft/trocr-base-printed#how-to-use).
The tasks I am working on is:
* [x] my own task or dataset: (give details below)
OCR on black and white images with blurry text.
## To reproduce
Steps to reproduce the behavior:
1. Install the latest version of `transformers`
2. Run the code snippet provided [here](https://huggingface.co/microsoft/trocr-base-printed#how-to-use).
You should get the following error:
```
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
<ipython-input-1-13a8ac45c2a8> in <module>
----> 1 from transformers import TrOCRProcessor, VisionEncoderDecoderModel
2 from PIL import Image
3 import requests
ImportError: cannot import name 'TrOCRProcessor'
```
## Expected behavior
I should be able to run the code example [here](https://huggingface.co/microsoft/trocr-base-printed#how-to-use) without getting any import errors.
| 10-25-2021 08:41:03 | 10-25-2021 08:41:03 | Thanks for your interest in TrOCR! For now, you should install Transformers from master:
`pip install git+https://github.com/huggingface/transformers.git`<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
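After installing from master as suggested, a quick sanity check (the snippet simply mirrors the imports and checkpoint from the model card referenced above):
```
import transformers
print(transformers.__version__)  # should be a dev version newer than 4.11.3

from transformers import TrOCRProcessor, VisionEncoderDecoderModel

processor = TrOCRProcessor.from_pretrained("microsoft/trocr-base-printed")
model = VisionEncoderDecoderModel.from_pretrained("microsoft/trocr-base-printed")
```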
transformers | 14,136 | closed | Fix some writing issues in the docs | # What does this PR do?
Fixes some typos and writing issues in the documentation.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger
| 10-25-2021 01:34:42 | 10-25-2021 01:34:42 |