repo | number | state | title | body | created_at | closed_at | comments
---|---|---|---|---|---|---|---|
transformers | 14,135 | closed | [ASR example] Can't reproduce the same WER with the same seed | ## Environment info
- `transformers` version: 4.12.0.dev0
- Platform: Linux
- Python version: 3.8.10
- PyTorch version (GPU?): 1.7.1+cu110
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
@patrickvonplaten, @anton-l
## Information
I was trying the ASR example: https://github.com/huggingface/transformers/tree/master/examples/pytorch/speech-recognition
However, I found that even after setting the seed, I got two different WERs in the end. I'm not sure how I can make it reproducible.
Some more information: I used common voice 7.0 ug data, here are two testing results I got with the same seed=42:
{
"test_loss": 0.4873116910457611,
"test_runtime": 165.4426,
"test_samples": 2620,
"test_samples_per_second": 15.836,
"test_steps_per_second": 1.983,
"test_wer": 0.46173210049862035
}
{
"test_loss": 0.47727492451667786,
"test_runtime": 166.5077,
"test_samples": 2620,
"test_samples_per_second": 15.735,
"test_steps_per_second": 1.97,
"test_wer": 0.46052185699762793
}
Though they are very close, they are not exactly the same. When I run the code on smaller datasets, the variance is larger.
Thanks! | 10-25-2021 01:19:25 | 10-25-2021 01:19:25 | Hey @ZhangShiyue,
Thanks for the report! Hmm, we do call the function `set_seed(...)` [here](https://github.com/huggingface/transformers/blob/95bab53868a91b4809bd5281a72b5f326853e31f/examples/pytorch/speech-recognition/run_speech_recognition_ctc.py#L332) which should make everything reproducible...do you train on single GPU? <|||||>yes, I noticed it. So it also surprised me when I got different results with the same seed.
Yes I train on a single GPU
thanks!<|||||>I will take a look to see whether I can make it fully reproducible on a tiny training run :-) I'll keep you posted!<|||||>Thank you!<|||||>Hey @ZhangShiyue,
sorry to answer only now. It's quite difficult to debug such reproducibility problems. I've created a dummy script and ran it a couple of times and it always gave me the same result. The script is:
```bash
CUDA_VISIBLE_DEVICES="0" python run_speech_recognition_ctc.py \
--dataset_name="common_voice" \
--model_name_or_path="ntu-spml/distilhubert" \
--dataset_config_name="ab" \
--output_dir="./dummy" \
--overwrite_output_dir \
--num_train_epochs="3" \
--per_device_train_batch_size="4" \
--gradient_accumulation_steps="1" \
--learning_rate="5e-5" \
--warmup_steps="500" \
--evaluation_strategy="steps" \
--text_column_name="sentence" \
--save_steps="500" \
--eval_steps="500" \
--logging_steps="1" \
--layerdrop="0.0" \
--save_total_limit="1" \
--mask_time_prob="0.3" \
--mask_time_length="10" \
--mask_feature_prob="0.1" \
--mask_feature_length="64" \
--freeze_feature_extractor \
--chars_to_ignore , ? . ! - \; \: \" " % ' " � \
--fp16 \
--group_by_length \
--do_train --do_eval \
--gradient_checkpointing \
```
and I've gotten the following result three times in a row:
```
{'loss': 33.9851, 'learning_rate': 3.0000000000000004e-07, 'epoch': 1.17}
{'loss': 28.627, 'learning_rate': 4.0000000000000003e-07, 'epoch': 1.33}
{'loss': 46.3949, 'learning_rate': 5.000000000000001e-07, 'epoch': 1.5}
{'loss': 26.7014, 'learning_rate': 6.000000000000001e-07, 'epoch': 1.67}
{'loss': 21.1327, 'learning_rate': 7.000000000000001e-07, 'epoch': 1.83}
{'loss': 26.1967, 'learning_rate': 8.000000000000001e-07, 'epoch': 2.0}
{'loss': 37.1694, 'learning_rate': 9e-07, 'epoch': 2.17}
{'loss': 30.2389, 'learning_rate': 1.0000000000000002e-06, 'epoch': 2.33}
{'loss': 30.4334, 'learning_rate': 1.1e-06, 'epoch': 2.5}
{'loss': 31.5205, 'learning_rate': 1.2000000000000002e-06, 'epoch': 2.67}
{'loss': 28.4714, 'learning_rate': 1.3e-06, 'epoch': 2.83}
{'loss': 21.3841, 'learning_rate': 1.4000000000000001e-06, 'epoch': 3.0}
100%|██████████| 18/18 [00:03<00:00, 6.20it/s]
Training completed. Do not forget to share your model on huggingface.co/models =)
{'train_runtime': 3.6622, 'train_samples_per_second': 18.022, 'train_steps_per_second': 4.915, 'train_loss': 29.691171010335285, 'epoch': 3.0}
100%|██████████| 18/18 [00:03<00:00, 4.92it/s]
Saving model checkpoint to ./dummy
Configuration saved in ./dummy/config.json
Model weights saved in ./dummy/pytorch_model.bin
Configuration saved in ./dummy/preprocessor_config.json
***** train metrics *****
epoch = 3.0
train_loss = 29.6912
train_runtime = 0:00:03.66
train_samples = 22
train_samples_per_second = 18.022
train_steps_per_second = 4.915
12/10/2021 13:45:39 - INFO - __main__ - *** Evaluate ***
***** Running Evaluation *****
Num examples = 9
Batch size = 8
100%|██████████| 2/2 [00:00<00:00, 61.92it/s]
***** eval metrics *****
epoch = 3.0
eval_loss = 23.3384
eval_runtime = 0:00:00.26
eval_samples = 9
eval_samples_per_second = 33.613
eval_steps_per_second = 7.47
eval_wer = 1.0
Dropping the following result as it does not have all the necessary fields:
{'dataset': {'name': 'COMMON_VOICE - AB', 'type': 'common_voice', 'args': 'Config: ab, Training split: train+validation, Eval split: test'}}
```
The loss values are also exactly the same.
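If you still see run-to-run drift on your side, here is a minimal sketch of forcing extra determinism on top of the script's `set_seed` call (the extra `torch` flags are an assumption about your setup and can slow training down):
```python
import torch
from transformers import set_seed

set_seed(42)  # seeds Python's random, numpy and torch (the example script already calls this)

# ask for deterministic kernels; cuDNN autotuning otherwise picks non-deterministic algorithms
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
torch.use_deterministic_algorithms(True)  # torch >= 1.8; raises if an op has no deterministic implementation
```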
My environment is one GPU (`GeForce RTX 3070 Mobile / Max-Q`) and the following library settings:
```
- `transformers` version: 4.14.0.dev0
- Platform: Linux-5.15.5-76051505-generic-x86_64-with-glibc2.33
- Python version: 3.9.7
- PyTorch version (GPU?): 1.10.0 (True)
- Tensorflow version (GPU?): 2.7.0 (False)
- Flax version (CPU?/GPU?/TPU?): 0.3.6 (cpu)
- Jax version: 0.2.25
- JaxLib version: 0.1.73
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
```<|||||>Could you maybe try to run the same script a couple of times from your side (it's very fast - just 20 seconds) to see if you can at least reproduce the results of this script? <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 14,134 | closed | Fix rendering of examples version links | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
Fixes the Markdown rendering of version links in the examples README by adding a newline. Also, removes the unnecessary "find" in the next sentence.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 10-25-2021 00:11:21 | 10-25-2021 00:11:21 | |
transformers | 14,133 | closed | Added Beit model output class | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes https://discuss.huggingface.co/t/issue-in-the-documentation-of-transformers-for-biet/10977/2
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@NielsRogge
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 10-24-2021 21:03:34 | 10-24-2021 21:03:34 | Thanks for your PR! To fix the code quality, you can run make fixup locally (which will run `make style` and `make quality` in sequence).<|||||>Hello,
I ran `make html SPHINXOPTS="-W -j 4"` in the `docs` directory. This is the error I am getting:
```
.virtualenvs/transformers_dev/lib/python3.7/site-packages/recommonmark/parser.py:75: UserWarning: Container node skipped: type=document
warn("Container node skipped: type={0}".format(mdnode.t))
Warning, treated as an error:
missing attribute forward in object transformers.TapasModel
Makefile:19: recipe for target 'html' failed
make: *** [html] Error 2
```
Note: I ran the above command because in `Circle CI` I saw this

<|||||>I've opened a PR on your branch [here](https://github.com/lumliolum/transformers/pull/1).<|||||>Thanks for adding this! |
transformers | 14,132 | closed | [Seq2Seq] (Byt5) zero loss | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: master
- Platform: win + linux
- Python version: 3.8
- PyTorch version (GPU?): yes
- Tensorflow version (GPU?):
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- ALBERT, BERT, XLM, DeBERTa, DeBERTa-v2, ELECTRA, MobileBert, SqueezeBert: @LysandreJik
- encoder-decoder models (For example, BlenderBot, BART, Marian, Pegasus, T5, ByT5): @patrickvonplaten, @patil-suraj
- Longformer, Reformer, TransfoXL, XLNet, FNet: @patrickvonplaten
- FSMT: @stas00
- Funnel: @sgugger
- GPT-2, GPT: @patrickvonplaten, @LysandreJik
- RAG, DPR: @patrickvonplaten, @lhoestq
- TensorFlow: @Rocketknight1
- JAX/Flax: @patil-suraj @patrickvonplaten
- TAPAS, LayoutLM, LayoutLMv2, LUKE, ViT, BEiT, DEiT, DETR, CANINE: @NielsRogge
- GPT-Neo, GPT-J, CLIP: @patil-suraj
- Wav2Vec2, HuBERT, SpeechEncoderDecoder: @patrickvonplaten, @anton-l
If the model isn't in the list, ping @LysandreJik who will redirect you to the correct contributor.
Library:
- Benchmarks: @patrickvonplaten
- Deepspeed: @stas00
- Ray/raytune: @richardliaw, @amogkam
- Text generation: @patrickvonplaten
- Tokenizers: @LysandreJik
- Trainer: @sgugger
- Pipelines: @Narsil
- Speech: @patrickvonplaten, @anton-l
- Vision: @NielsRogge, @sgugger
Documentation: @sgugger
Model hub:
- for issues with a model, report at https://discuss.huggingface.co/ and tag the model's creator.
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
For research projetcs, please ping the contributor directly. For example, on the following projects:
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
@patrickvonplaten @patil-suraj
## Information
Model I am using (Bert, XLNet ...):
byt5
For comparison (e.g. to rule out a coding mistake on my side), I used other seq2seq models like T5 too; these models are working as expected.
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce

Steps to reproduce the behavior:
1. running official seq2seq example script with trainer
2. train using byt5 models, any size with fp16 pytorch backend
3. see loss is going to zero after some steps (500 maximum to zero loss in my experiments)
4. do inference and see that every output is ''
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
training the model with reasonable loss and generating good text
<!-- A clear and concise description of what you would expect to happen. -->
| 10-24-2021 12:46:02 | 10-24-2021 12:46:02 | Not sure if ByT5 supports fp16 training, cc @patrickvonplaten <|||||>Hi!
I am not sure if we have tested ByT5 with seq2seq scripts yet. Which script are you using, `run_translation.py` or `run_summarization.py`? It would be nice if you could post a snippet to reproduce this.
Also, note that T5 (and ByT5 as well) models are trained with `bf16`, which may or may not work with `fp16`. See this [discussion](https://discuss.huggingface.co/t/mixed-precision-for-bfloat16-pretrained-models/5315) on the forum.
However, this usually results in `nan` losses which isn't the case here. So I won't be sure without looking at the command that you are using.<|||||>removing the --fp16 argument fixes it, when using run translation script |
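For readers landing here with the same zero-loss behaviour, a minimal sketch of turning fp16 off in the training arguments (the commented `bf16` flag is an assumption: it needs Ampere-class hardware and a recent `transformers` version):
```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./byt5-finetuned",  # placeholder path
    fp16=False,        # ByT5/T5 checkpoints were pretrained in bfloat16; fp16 often under/overflows
    # bf16=True,       # safer mixed-precision alternative where supported
    predict_with_generate=True,
)
```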
transformers | 14,131 | closed | Replace assert of data/data_collator.py by ValueError | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Related to issue #12789.
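For context, the change follows the usual assert-to-exception rewrite; a simplified, hypothetical sketch (not the exact diff):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # gpt2 has no pad token by default

# before: a bare AssertionError, silently skipped under `python -O`
# assert tokenizer.pad_token_id is not None, "This tokenizer has no pad token"

# after: always raised, with an actionable message
if tokenizer.pad_token_id is None:
    raise ValueError(
        "You are attempting to pad samples but the tokenizer you are using has no pad token."
    )
```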
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
I'm tagging @LysandreJik and @patrickvonplaten for the review of this PR.
Thanks for any advice/modification needed on the error message!
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 10-24-2021 12:12:53 | 10-24-2021 12:12:53 | |
transformers | 14,130 | closed | DebertaForMultipleChoice | # 🚀 Feature request
<!-- A clear and concise description of the feature proposal.
Please provide a link to the paper and code in case they exist. -->
`Deberta` is a sequence classification model. Currently this library has `DebertaV2ForSequenceClassification`, `DebertaV2ForTokenClassification` and ` DebertaV2ForQuestionAnswering`. It makes sense to add `DebertaV2ForMultipleChoice` as well given that all other sequence classification models have this method.
## Contribution
I can send a PR. | 10-23-2021 19:21:24 | 10-23-2021 19:21:24 | Can we do something like this ?
```
class DebertaPooler(nn.Module):
def __init__(self, config):
super().__init__()
self.dense = nn.Linear(config.hidden_size, config.hidden_size)
self.activation = nn.Tanh()
def forward(self, hidden_states):
# We "pool" the model by simply taking the hidden state corresponding
# to the first token.
first_token_tensor = hidden_states[:, 0]
pooled_output = self.dense(first_token_tensor)
pooled_output = self.activation(pooled_output)
return pooled_output
class DebertaV2ForMultipleChoice(DebertaV2PreTrainedModel):
_keys_ignore_on_load_unexpected = [r"pooler"]
def __init__(self, config):
super().__init__(config)
self.deberta = DebertaV2Model(config)
classifier_dropout = (
config.classifier_dropout if config.classifier_dropout is not None else config.hidden_dropout_prob
)
self.pooler = DebertaPooler(config)
self.dropout = nn.Dropout(classifier_dropout)
self.classifier = nn.Linear(config.hidden_size, 1)
self.init_weights()
@add_start_docstrings_to_model_forward(DEBERTA_INPUTS_DOCSTRING.format("batch_size, sequence_length"))
@add_code_sample_docstrings(
processor_class=_TOKENIZER_FOR_DOC,
checkpoint=_CHECKPOINT_FOR_DOC,
output_type=MultipleChoiceModelOutput,
config_class=_CONFIG_FOR_DOC,
)
def forward(
self,
input_ids=None,
attention_mask=None,
token_type_ids=None,
position_ids=None,
inputs_embeds=None,
labels=None,
output_attentions=None,
output_hidden_states=None,
return_dict=None,
):
return_dict = return_dict if return_dict is not None else self.config.use_return_dict
num_choices = input_ids.shape[1] if input_ids is not None else inputs_embeds.shape[1]
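# flatten (batch_size, num_choices, seq_len) -> (batch_size * num_choices, seq_len) so each choice is scored independently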
input_ids = input_ids.view(-1, input_ids.size(-1)) if input_ids is not None else None
attention_mask = attention_mask.view(-1, attention_mask.size(-1)) if attention_mask is not None else None
token_type_ids = token_type_ids.view(-1, token_type_ids.size(-1)) if token_type_ids is not None else None
position_ids = position_ids.view(-1, position_ids.size(-1)) if position_ids is not None else None
inputs_embeds = (
inputs_embeds.view(-1, inputs_embeds.size(-2), inputs_embeds.size(-1))
if inputs_embeds is not None
else None
)
outputs = self.deberta(
input_ids,
attention_mask=attention_mask,
token_type_ids=token_type_ids,
position_ids=position_ids,
inputs_embeds=inputs_embeds,
output_attentions=output_attentions,
output_hidden_states=output_hidden_states,
return_dict=return_dict,
)
pooled_output = self.pooler(outputs[0])
pooled_output = self.dropout(pooled_output)
logits = self.classifier(pooled_output)
reshaped_logits = logits.view(-1, num_choices)
loss = None
if labels is not None:
loss_fct = CrossEntropyLoss()
loss = loss_fct(reshaped_logits, labels)
if not return_dict:
output = (reshaped_logits,) + outputs[2:]
return ((loss,) + output) if loss is not None else output
return MultipleChoiceModelOutput(
loss=loss,
logits=reshaped_logits,
hidden_states=outputs.hidden_states,
attentions=outputs.attentions,
)
```<|||||>I am not sure how to pool the output from Deberta. Different models use different mechanisms for pooling, so if someone can confirm, this should be straightforward.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
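For reference on the open pooling question: `DebertaV2ForSequenceClassification` pools by passing the first token's hidden state through a `ContextPooler` (dense + activation + dropout), so the multiple-choice head could reuse the same module. A hedged sketch, assuming `ContextPooler` is importable from the v2 modeling file:
```python
import torch
from transformers import DebertaV2Config
from transformers.models.deberta_v2.modeling_deberta_v2 import ContextPooler

config = DebertaV2Config()
pooler = ContextPooler(config)

# stand-in for the encoder output of shape (batch_size * num_choices, seq_len, hidden_size)
sequence_output = torch.randn(8, 32, config.hidden_size)
pooled_output = pooler(sequence_output)        # uses the first token, like BERT's pooler
print(pooled_output.shape, pooler.output_dim)  # (8, hidden_size), hidden_size
```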
transformers | 14,129 | open | [Flax] Fix eval and data_args usage in streaming example | # What does this PR do?
This PR fixes the evaluation loop in `run_mlm_flax_stream.py`. The current behavior doesn't update the correct variable, which leads to data leakage during evaluation.
It also takes the opportunity to improve some `DataTrainingArguments` usages.
-------
It's a draft PR because there is an open improvement that could be made: the script splits train-eval based solely on `data_args.{dataset_name,num_eval_samples}`, but also accepts unused args `train_file, validation_file, train_ref_file, validation_ref_file, validation_split_percentage`. Other data args that are unused: `pad_to_max_length`, `line_by_line`.
My suggestion would be to remove all these unused args. May I proceed with that?
## Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Request section?
- [x] Did you make sure to update the documentation with your changes?
The script is mentioned in [`jax-projects/dataset-streaming/README`](https://github.com/huggingface/transformers/blob/95bab53868a91b4809bd5281a72b5f326853e31f/examples/research_projects/jax-projects/dataset-streaming/README.md#train-model), but no changes are required.
## Who can review?
@patrickvonplaten
| 10-23-2021 15:25:29 | 10-23-2021 15:25:29 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>@patil-suraj could you take a look here? :-) |
transformers | 14,128 | closed | Was asked to gather along dimension 0, but all input tensors were scalars; will instead unsqueeze and return a vector. | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.11.3
- Platform: Linux-5.4.0-84-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyTorch version (GPU?): 1.8.1+cu111 (True)
- Tensorflow version (GPU?): 2.5.0 (False)
- Flax version (CPU?/GPU?/TPU?): 0.3.4 (cpu)
- Jax version: 0.2.20
- JaxLib version: 0.1.71
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes (Multi GPU setting)
### Who can help
@patrickvonplaten , @patil-suraj
## Information
Model I am using LED:
The problem arises when using:
the official example scripts
[Finetuneing Longformer Encoder-Decoder (LED)](https://colab.research.google.com/drive/12INTTR6n64TzS4RrXZxMSXfrOd9Xzamo?usp=sharing
)
## To reproduce
Steps to reproduce the behavior:
1. Execute the above script on 2 gpu setting.
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
```
../lib/python3.8/site-packages/torch/nn/parallel/_functions.py:65: UserWarning: Was asked to gather along dimension 0, but all input tensors were scalars; will instead unsqueeze and return a vector.
warnings.warn('Was asked to gather along dimension 0, but all
```
It seems to be a warning, but the program stops after that & doesn't execute further.
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
The program should complete the execution of epochs.
| 10-23-2021 10:00:07 | 10-23-2021 10:00:07 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Hey @rajgar114,
Thanks for the issue! The above script is quite old and we don't actively maintain such scripts. However, I'm happy trying to debug the problem! As a start could you write a minimal executable code example that reproduces the error warning using the most up to date version of `transformers`? E.g. it's probably just the part that includes the `generate()` method that is responsible for the warning / error<|||||>Hi @patrickvonplaten,
Here is the minimum executable code that produces the issue:
```python
from transformers import Seq2SeqTrainer, Seq2SeqTrainingArguments
from transformers import AutoModelForSeq2SeqLM
from transformers import AutoTokenizer
from datasets import load_dataset
from datasets import load_metric
from datasets import Dataset
import pandas as pd
import datasets
dataList = [['How are you ?', 'Tum kaise ho ?'],
['I am fine.', 'Main theek hoon']]
trainDataFrame = pd.DataFrame(dataList, columns = ['English', 'Hinglish'])
print(trainDataFrame.head(2))
trainDataset = Dataset.from_pandas(trainDataFrame[["English", "Hinglish"]])
modelCheckpoint = 'allenai/led-base-16384'
tokenizer = AutoTokenizer.from_pretrained(modelCheckpoint)
maxInputLength = 1024
maxOutputLength = 1024
batchSize = 2
def processDataToModelInputs(batch):
inputs = tokenizer(
batch['English'],
padding = 'max_length',
truncation = True,
max_length = maxInputLength,
)
outputs = tokenizer(
batch['Hinglish'],
padding = 'max_length',
truncation = True,
max_length = maxOutputLength,
)
batch["input_ids"] = inputs.input_ids
batch["attention_mask"] = inputs.attention_mask
#Create 0 global_attention_mask lists
batch["global_attention_mask"] = len(batch["input_ids"]) * [
[0 for _ in range(len(batch["input_ids"][0]))]
]
#Since above lists are references, the following line changes the 0 index for all samples
batch["global_attention_mask"][0][0] = 1
batch["labels"] = outputs.input_ids
# We have to make sure that the PAD token is ignored
batch["labels"] = [
[-100 if token == tokenizer.pad_token_id else token for token in labels]
for labels in batch["labels"]
]
return batch
trainDataset = trainDataset.map(
processDataToModelInputs,
batched = True,
batch_size = batchSize,
remove_columns = ['English', 'Hinglish']
)
print(trainDataset)
trainDataset.set_format(
type = "torch",
columns = ["input_ids", "attention_mask", "global_attention_mask", "labels"]
)
led = AutoModelForSeq2SeqLM.from_pretrained(modelCheckpoint, gradient_checkpointing = False, use_cache = False)
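# generation defaults stored on the config; model.generate() and Seq2SeqTrainer (predict_with_generate = True) pick these up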
led.config.num_beams = 3
led.config.max_length = 1024
led.config.min_length = 2
led.config.length_penalty = 2.0
led.config.early_stopping = True
led.config.no_repeat_ngram_size = 3
rouge = load_metric("rouge")
def computeMetrics(pred):
#Exact Count Matches
labels_ids = pred.label_ids
pred_ids = pred.predictions
pred_str = tokenizer.batch_decode(pred_ids, skip_special_tokens = True)
labels_ids[labels_ids == -100] = tokenizer.pad_token_id
label_str = tokenizer.batch_decode(labels_ids, skip_special_tokens = True)
rouge_output = rouge.compute(
predictions = pred_str, references = label_str, rouge_types = ["rouge2"]
)["rouge2"].mid
return {
"rouge2_precision": round(rouge_output.precision, 4),
"rouge2_recall": round(rouge_output.recall, 4),
"rouge2_fmeasure": round(rouge_output.fmeasure, 4)
}
checkPointingPath = "/home/....."
trainingArgs = Seq2SeqTrainingArguments(
output_dir = checkPointingPath + "Outputs/",
overwrite_output_dir = True,
do_train = True,
per_device_train_batch_size = batchSize,
gradient_accumulation_steps = 2,
num_train_epochs = 2,
logging_strategy = "epoch",
save_strategy = "epoch",
fp16 = True,
disable_tqdm = False,
predict_with_generate = True,
report_to = "none",
)
trainer = Seq2SeqTrainer(
model = led,
tokenizer = tokenizer,
args = trainingArgs,
compute_metrics = computeMetrics,
train_dataset = trainDataset,
)
finalOutput = trainer.train()
file = open(checkPointingPath + 'finalOutput.txt', 'w')
file.write(str(finalOutput))
file.close()
```
I feel may be it's linked to:
[Returning of scalars in Multi-GPU Data Parallel Setting.](https://discuss.pytorch.org/t/how-to-fix-gathering-dim-0-warning-in-multi-gpu-dataparallel-setting/41733)<|||||>Hey @rajgar114,
I just checked and your example works fine for me on a single GPU. For multi-GPU I think one has to make use of DDP instead of DP which is also what is recommend by PyTorch now (see first warning here: https://pytorch.org/docs/stable/generated/torch.nn.DataParallel.html#torch.nn.DataParallel)
<|||||>To continue, could you maybe do the following:
a) Verify that the code works with single-GPU (set `CUDA_VISIBLE_DEVICES="0"`).
b) Try to run the script with DDP. The Trainer supports DDP out-of-the-box. See: https://github.com/huggingface/transformers/tree/master/examples/pytorch#distributed-training-and-mixed-precision
Let me know if you still run into problems :-)<|||||>Hi @patrickvonplaten,
a) I have used the following option to use only the single-GPU as suggested by you:
`os.environ["CUDA_VISIBLE_DEVICES"]="0"`
Using this, the code was working fine and the model was fine-tuned successfully.
b) And for the DDP setting, don't we have an easier way, like passing an argument to the Trainer API, so that I can directly execute the above code in a Jupyter notebook without the command-line stuff?
<|||||>for b) DDP sadly relies on Python's multiprocessing so that the script has to be launched by `n` processes. So those are the best docs we have: https://github.com/huggingface/transformers/tree/master/examples/pytorch#distributed-training-and-mixed-precision<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Sorry, could anyone clarify what should be done when using the trainer?
I have the same problem as [rajgar114](https://github.com/rajgar114), but I am using the Huggingface Trainer and the program stops and does not execute further. |
transformers | 14,127 | closed | Can deepspeed and gradient checkpointing be used at the same time? | I'm finetuning roberta-large for some downstream tasks, and I'm wondering if deepspeed and gradient checkpointing can be used at the same time for much more GPU memory reduction? Is there any conflict between these two? | 10-23-2021 09:31:05 | 10-23-2021 09:31:05 | Yes.
See the guide: https://huggingface.co/transformers/master/performance.html |
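The two techniques are complementary: gradient checkpointing trades compute for activation memory, while DeepSpeed ZeRO shards optimizer/gradient/parameter state, so the savings stack. A minimal sketch with the `Trainer` (the `gradient_checkpointing` training argument and the `ds_config.json` path are assumptions; on older versions call `model.gradient_checkpointing_enable()` instead):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./roberta-large-finetuned",  # placeholder
    per_device_train_batch_size=8,
    fp16=True,
    gradient_checkpointing=True,  # recompute activations during backward to save memory
    deepspeed="ds_config.json",   # hypothetical ZeRO config file
)
```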
transformers | 14,126 | closed | Fix some typos in the docs | # What does this PR do?
Fixes some typos in the documentation.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 10-23-2021 00:06:16 | 10-23-2021 00:06:16 | |
transformers | 14,125 | closed | Pipeline seems slower in 4.11+ | Hello! When I upgraded Transformers, I got a massive slowdown. Might be related to the new DataLoader used in Pipeline.
Happy to help!
Cheers,
## Environment info
Environment
- `transformers` version: 4.12.0.dev0
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.8.12
- PyTorch version (GPU?): 1.9.1 (False)
- Tensorflow version (GPU?): 2.6.0 (False)
- Flax version (CPU?/GPU?/TPU?): 0.3.5 (cpu)
- Jax version: 0.2.24
- JaxLib version: 0.1.73
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: <fill in>
### Who can help
Models:
- ALBERT, BERT, XLM, DeBERTa, DeBERTa-v2, ELECTRA, MobileBert, SqueezeBert: @LysandreJik
Library:
- Pipelines: @Narsil
Model I am using (Bert, XLNet ...): DistilBert, but I suspect this is for all Pipeline.
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. I use the following script to predict on some random sentences:
```python
from transformers import (
AutoModelForSequenceClassification,
AutoTokenizer,
TextClassificationPipeline,
)
def get_pipeline():
name = "distilbert-base-uncased-finetuned-sst-2-english"
model = AutoModelForSequenceClassification.from_pretrained(name)
tokenizer = AutoTokenizer.from_pretrained(name)
return TextClassificationPipeline(tokenizer=tokenizer, model=model)
sentence = ["hello", "goodbye"] * 100
model = get_pipeline()
```
2. The results that I get are widely different from Transformers 4.10 vs 4.11+
Version | Command | Time
-- | -- | --
HF 4.12.0.dev0 | %timeit -n 3 model(sentence) | Does not complete after 10 minutes.
HF 4.12.0.dev0 | %timeit -n 3 model(sentence, num_workers=0) | 4.67 s ± 153 ms per loop
HF 4.10.3 | %timeit -n 3 model(sentence) | 575 ms ± 10.8 ms per loop
HF 4.10.3 | %timeit -n 3 model(sentence, num_workers=0) | 500 ms ± 3.01 ms per loop
## Expected behavior
I would expect the same performance if possible, or a way to bypass Pytorch DataLoader.
| 10-22-2021 20:00:39 | 10-22-2021 20:00:39 | Hi @Dref360 ,
First of all, thanks for the script and benchmarks, very helpful.
Your results are entirely correct and are reproducible.
**Short answer**: https://github.com/huggingface/transformers/pull/13724 this PR should solve your specific use case with `pipeline(sentence, batch_size=100)`.
**Long answer**:
You example is slightly odd, by being a single token repeated 200 times, so batching yields better results than non batching. If you use longer sentences, you can get better or worse performance:
`sentence = ["hello there this is a test" * 20, "goodbye"] * 100` for instance takes
12s on 4.10.3
8s on 4.11+
`sentence = ["hello there this is a test" * 20, "goodbye " * 10] * 100` for instance takes
11s on 4.10.3
11s on 4.11+
That somehow seems to average out on random strings, leading to closer performance (in our internal testing) which is why we're NOT batching by default anymore. The biggest batching place would be on GPU (not your case) but the 4.10 performance on GPU was pretty bad anyway because the pipeline API didn't allow for streaming properly.
This example is the perfect example where batching yields vastly faster results, but it might not be representative of other workloads (just a caveat for readers, always measure performance on your own models/data to make sure what works best)
On longer sequences, the matrix multiplications get larger, and batching does not allow the CPU to get better throughput than without (GPU get the benefit for larger payloads).
For core maintainers: @LysandreJik , @sgugger with the propose PR for batch support on pipelines we can also take the opportunity to use `batch_size = len(input_list)` when the input is a list to get back previous behavior (on pipelines that did do batching, ones that didn't might start to fail because some models might not have padding (like `text-generation` with `gpt2`).
We could also do like for overall inputs/outputs and have some way for pipelines to express if they batch by default or not and be as backward compatible as possible in terms of performance. I am unsure it's worth it (as mentionned in this comment, performance might have increased on other models/data) , but definitely an option.
Definitely something that was overlooked when I observed the similar performance, I didn't look at these kind of inputs where it does make a difference.
<|||||>Ah! Thank you for the quick and very detailled response.
My usecase is mostly small sentences so that must be why we saw such a massive slowdown.
Thank you for your help! We will wait for this PR to be merged :)<|||||>@Narsil in your comment (the long answer) you mention GPU performance is poor for transformer pipelines for previous versions (4.10 or earlier). I'm currently using 4.12.0 and observe that the GPU isn't fully utilized . I'm using a sentiment analysis hugging face model: https://huggingface.co/yiyanghkust/finbert-tone with the following setup:
Environment:
Hardware:
Azure Node Type: Standard_NC8as_T4_v3
- 8 cores
- 56 GB memory
- 1 Tesla T4 GPU
Software:
- pytorch: 1.9.0+cu111
- python: 3.8.10
- pytorch_pretrained_bert: 0.6.2
- transformers: 4.12.0
- gpustat: 0.6.0
I'm running the following code:
```
from transformers import BertTokenizer, BertForSequenceClassification
from transformers import pipeline
finbert = BertForSequenceClassification.from_pretrained('yiyanghkust/finbert-tone',num_labels=3)
tokenizer = BertTokenizer.from_pretrained('yiyanghkust/finbert-tone')
nlp = pipeline("sentiment-analysis", model=finbert, tokenizer=tokenizer, device=0)
sentences = ["there is a shortage of capital, and we need extra financing",
"growth is strong and we have plenty of liquidity",
"there are doubts about our finances",
"profits are flat"]*1000
results = nlp(sentences)
print(results[:4])
```
Running gpustat (https://pypi.org/project/gpustat/) on the node while the above code is running (which does so for about 35 secs) shows the following:
1026-105747-9wr7sbr7-10-139-64-13 Fri Oct 29 09:32:34 2021 450.80.02
[0] Tesla T4 | 49'C, 39 % | 1916 / 16127 MB |
Here you see that only 39% of the GPU is used. Is there a reason why it isn't near 100%?
For comparison, the same model based only on pytorch_pretrained_bert, which can be found here: https://github.com/yya518/FinBERT/blob/master/FinBert%20Model%20Example.ipynb
using the same input of sentences, it performs significantly faster (approx 5 seconds). Here is the gpu usage:
1026-105747-9wr7sbr7-10-139-64-17 Fri Oct 29 16:42:58 2021 450.80.02
[0] Tesla T4 | 45'C, 96 % | 2258 / 16127 MB |
With this approach the GPU usage is close to full capacity. Although I only tested this model, I suspect inference on a GPU using other transformer models will also underutilize the GPU.
Will the ability to set the batch size greater than 1 via the PR help with this? I see that the PR #13724 has been merged. When can we expect the next release?
Thanks!
<|||||>Hi @alwayscurious ,
1. The notebook linked does not have `* 1000`, effectively killing the measuring, is that just an omission or does it change the results ? The following assumes it actually modifies the results.
2. In my modified script of your test (I used the same model as the pipelines example, with `* 1000` added back), I get 100% GPU usage, but it takes 3mn to run the full thing while it takes 35s on the pipeline example. GPU usage is not everything here :).
3. You are perfectly correct that the GPU is underused with the pipeline example, and we can push it on master transformers with `pipeline(sentences, batch_size=64)`. Increasing the size of the batch does yield improved speed pretty fast and at some point it's not worth putting bigger batches (when you saturate the GPU basically). Then the full thing runs under 5s on my home GTX 970.
You are reading 100% GPU usage but much lower speed on your colab example because all your examples are padded to `512` max length, so effectively the examples are super large for the GPU (keeping it busy) but it's mostly doing useless work (hence 3mn instead of 35s)
The ~50% GPU utilization of the first example is because the example+model is a bit small, so not all the GPU is required to run, meaning part of the GPU is idle. However, it's still running faster than the "old" example, because it's not wasting cycles on the padded tokens. If I remove the padding I fall back to roughly the `~35s` mentioned above. On larger models there would still probably be a difference linked to how the data is actually fed to the GPU, but that is out of scope for this discussion.
By adding `pipeline(sentences, batch_size=64)` I am getting `5s` runtime of the inference.
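Putting that together, a small sketch of the batched call on current master (the batch size and worker count are assumptions to tune against your own data):
```python
from transformers import pipeline

nlp = pipeline("sentiment-analysis", model="yiyanghkust/finbert-tone", device=0)

sentences = ["growth is strong and we have plenty of liquidity"] * 4000
# batch_size groups examples per forward pass; num_workers controls the DataLoader feeding the GPU
results = nlp(sentences, batch_size=64, num_workers=4)
print(results[0])
```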
On a T4, you might be able to push the size of the batch even more, however I always tell users to be careful: running on mock data and real data is likely to be different, and by adding more to the batch you risk getting into OOM errors on live data that might be `max_seq_len` long, since then the whole batch gets bigger. Even before OOM, if the data is highly irregular in terms of size, the batching can hinder performance instead of helping it. Just like in the notebook, it's filling your batch with pad_tokens.
See this for the discussion: https://github.com/huggingface/transformers/blob/master/docs/source/main_classes/pipelines.rst.
Another knob you can turn is `pipelines(..., num_workers=10)` which is the number of threads used to feed the data to the GPU (it's DataLoader's argument) and might also help depending on your model/data configuration (rule of thumb is num_workers=number of CPU cores).
Did I omit anything in this analysis ?<|||||>I dont see the issue on master. Thank you!<|||||>@Narsil thanks for your insightful reply! Indeed I also observed the 35 seconds with transformers vs approx 3 minutes with "old code" performance you mention after fixing a bug in the original "old" code (I was using a previous version of the notebook and realized it had a bug compared to the latest one in the link). I used the latest release of transformers (4.12.2).
Regarding the input setup you are correct to add 1000 and I forgot to add that to the notebook (only included it in the code snippet). From the release notes of 4.12.2, I see the batch_size is included: https://github.com/huggingface/transformers/compare/v4.12.2...master. Can I use this version or do I need to build the transformers package manually from master? I set the batch size to 64 but continue to see aprox. 35 seconds for inference compared to approx. 5 seconds that you observe on your GTX 970. I'll setup a colab notebook with GPU runtime to verify.
Thanks again!<|||||>@Narsil, triggered by @Dref360 comment, I realized using the following:
`!pip install git+https://github.com/huggingface/transformers.git`
creates a package directly from the latest commits to master. I verified the performance you observed with batch size of 64 (approx 7s on a K80 GPU). I included a link to the notebook for reference.
[Huggingface FinBertTone Model performance on GPU](https://colab.research.google.com/drive/1jpzuRAUyBNl3Gp3pOroK9gxaPUC8zjs-?usp=sharing)
Thanks again for your help and the reference! :) <|||||>@alwayscurious glad to be of help.
Again, batch_size will depend on data + model + hardware, so try to keep track of some measure if possible (GPU utilization is the easy one, but the amount of padding is another one, measuring everything will slow you down so .... :)).
Enabling automated batch_size is something we would like to enable, but it's quite tricky, and maybe not worth it. At least now you are in control.<|||||>I'll close the issue now that it is merged on master.
Cheer |
transformers | 14,124 | closed | Fix lazy init to stop hiding errors in import | # What does this PR do?
## The problem
As was pointed out in #13007 and reported more recently in the internal slack, the lazy init used in Transformers hides the error messages one gets at import time. A reproducer is:
```bash
pyenv virtualenv 3.8.9 test-bug
pyenv activate 3.8.9
pip install datasets huggingface_hub
pip install torch transformers==4.9.2
python -c "from transformers import pipeline"
```
This will only return `ImportError: cannot import name 'pipeline' from 'transformers'` with no other information. The underlying error comes from a mismatch between Datasets and huggingface-hub in this env (for all details, pipeline tries to import AutoModel which tries to import rag which tries to import Dataset), but Transformers hides it. If one does
```bash
python -c "from datasets import Dataset"
```
one will see the full error message
```
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/home/sgugger/.pyenv/versions/test-bug/lib/python3.8/site-packages/datasets/__init__.py", line 37, in <module>
from .builder import ArrowBasedBuilder, BeamBasedBuilder, BuilderConfig, DatasetBuilder, GeneratorBasedBuilder
File "/home/sgugger/.pyenv/versions/test-bug/lib/python3.8/site-packages/datasets/builder.py", line 44, in <module>
from .data_files import DataFilesDict, _sanitize_patterns
File "/home/sgugger/.pyenv/versions/test-bug/lib/python3.8/site-packages/datasets/data_files.py", line 120, in <module>
dataset_info: huggingface_hub.hf_api.DatasetInfo,
AttributeError: module 'huggingface_hub.hf_api' has no attribute 'DatasetInfo'
```
That particular problem is fixed in later versions of Transformers since we decoupled the auto module from the others, but it's still a general bug in the lazy init.
## The solution
The underlying problem comes from the fact that there are some errors being silently ignored in the import machinery of Python. @aphedges gave a preliminary report in #13007 which helped me pinpoint the problem to the `_get_module` private function which errors when we try to import the `pipelines` module in the env mentioned above, but that error is then discarded and no good error message is sent.
Changing that error to a `RuntimeError` solves the issue, and with the modifications suggested in this PR, the line
```bash
python -c "from transformers import pipeline"
```
then gives the more informative
```
Traceback (most recent call last):
File "/home/sgugger/git/transformers/src/transformers/file_utils.py", line 1996, in _get_module
return importlib.import_module("." + module_name, self.__name__)
File "/home/sgugger/.pyenv/versions/3.8.9/lib/python3.8/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 783, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/home/sgugger/git/transformers/src/transformers/pipelines/__init__.py", line 33, in <module>
from .automatic_speech_recognition import AutomaticSpeechRecognitionPipeline
File "/home/sgugger/git/transformers/src/transformers/pipelines/automatic_speech_recognition.py", line 20, in <module>
from .base import Pipeline
File "/home/sgugger/git/transformers/src/transformers/pipelines/base.py", line 43, in <module>
from ..models.auto.modeling_auto import AutoModel
File "/home/sgugger/git/transformers/src/transformers/models/auto/modeling_auto.py", line 230, in <module>
from ..rag.modeling_rag import ( # noqa: F401 - need to import all RagModels to be in globals() function
File "/home/sgugger/git/transformers/src/transformers/models/rag/modeling_rag.py", line 30, in <module>
from .retrieval_rag import RagRetriever
File "/home/sgugger/git/transformers/src/transformers/models/rag/retrieval_rag.py", line 33, in <module>
from datasets import Dataset, load_dataset, load_from_disk
File "/home/sgugger/.pyenv/versions/test-bug/lib/python3.8/site-packages/datasets/__init__.py", line 37, in <module>
from .builder import ArrowBasedBuilder, BeamBasedBuilder, BuilderConfig, DatasetBuilder, GeneratorBasedBuilder
File "/home/sgugger/.pyenv/versions/test-bug/lib/python3.8/site-packages/datasets/builder.py", line 44, in <module>
from .data_files import DataFilesDict, _sanitize_patterns
File "/home/sgugger/.pyenv/versions/test-bug/lib/python3.8/site-packages/datasets/data_files.py", line 120, in <module>
dataset_info: huggingface_hub.hf_api.DatasetInfo,
AttributeError: module 'huggingface_hub.hf_api' has no attribute 'DatasetInfo'
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "<frozen importlib._bootstrap>", line 1039, in _handle_fromlist
File "/home/sgugger/git/transformers/src/transformers/file_utils.py", line 1985, in __getattr__
module = self._get_module(self._class_to_module[name])
File "/home/sgugger/git/transformers/src/transformers/file_utils.py", line 1998, in _get_module
raise RuntimeError(f"Failed to import {module_name} because of the following error (look up to see its traceback):\n{e}") from e
RuntimeError: Failed to import transformers.pipelines because of the following error (look up to see its traceback):
module 'huggingface_hub.hf_api' has no attribute 'DatasetInfo'
```
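For reference, the pattern looks roughly like the minimal sketch below. This is a simplified stand-in for the lazy-import module used in `transformers`, not the actual implementation (the `LazyModule` class and `class_to_module` mapping here are illustrative assumptions); the point is that re-raising the failure in `_get_module` as a `RuntimeError` with `from e` keeps the original traceback chained instead of letting it be silently discarded.
```python
import importlib
import types


class LazyModule(types.ModuleType):
    """Simplified lazy module: submodules are only imported on attribute access."""

    def __init__(self, name, class_to_module):
        super().__init__(name)
        # Maps a public name to the submodule that defines it, e.g. {"pipeline": "pipelines"}.
        self._class_to_module = class_to_module

    def __getattr__(self, name):
        module = self._get_module(self._class_to_module[name])
        return getattr(module, name)

    def _get_module(self, module_name):
        try:
            return importlib.import_module("." + module_name, self.__name__)
        except Exception as e:
            # Chaining with `from e` surfaces the real import failure in the traceback.
            raise RuntimeError(
                f"Failed to import {module_name} because of the following error "
                f"(look up to see its traceback):\n{e}"
            ) from e
```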
Fixes #13007 | 10-22-2021 17:33:42 | 10-22-2021 17:33:42 | @sgugger, thank you very much for creating a fix to this issue! I'm glad my write-up in #13007 helped. |
transformers | 14,123 | closed | make test fails on `tests/test_benchmark_tf.py` | ## Environment info
- `transformers` version: 4.12.0.dev0
- Platform: macOS-11.6-x86_64-i386-64bit
- Python version: 3.8.2
- PyTorch version (GPU?): 1.9.1 (False)
- Tensorflow version (GPU?): 2.6.0 (False)
- Flax version (CPU?/GPU?/TPU?): 0.3.5 (cpu)
- Jax version: 0.2.24
- JaxLib version: 0.1.73
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@Rocketknight1
## Information
I cloned the repo and set up the environment, installing `transformers` and `datasets`. Then, when I run the test suite, most of the tests in `tests/test_benchmark.py` and `tests/test_benchmark_tf.py` fail.
The problem arises when using:
* [x] the official example scripts: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Clean clone of `transformers` and `datasets`
2. Install all dependencies for both repos
3. Run tests
```
$ git clone https://github.com/huggingface/transformers
$ git clone https://github.com/huggingface/datasets
$ cd transformers
$ pip install -e ".[dev]"
$ cd ../datasets
$ pip install -e .
$ cd ../transformers
$ make test
```
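For context, here is a standalone sketch of what appears to be the root cause (an assumption based on the traceback below, not a verified fix): under the `spawn` start method, which is the default for `multiprocessing` on macOS since Python 3.8, starting a `Process` requires pickling it, and a class defined inside a function, like `MemoryMeasureProcess` inside `measure_peak_memory_cpu`, cannot be pickled.
```python
# Minimal reproduction of the same error, independent of transformers.
import multiprocessing


def measure_something():
    # Local class, analogous to measure_peak_memory_cpu.<locals>.MemoryMeasureProcess.
    class LocalProcess(multiprocessing.Process):
        def run(self):
            pass

    proc = LocalProcess()
    proc.start()  # AttributeError: Can't pickle local object ... under "spawn"
    proc.join()


if __name__ == "__main__":
    multiprocessing.set_start_method("spawn", force=True)
    measure_something()
```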
This is the trace:
```
=================================== FAILURES ===================================
_________ TFBenchmarkTest.test_inference_encoder_decoder_with_configs __________
[gw3] darwin -- Python 3.8.2 /Users/sergiov/PycharmProjects/transformers/venv/bin/python
self = <tests.test_benchmark_tf.TFBenchmarkTest testMethod=test_inference_encoder_decoder_with_configs>
def test_inference_encoder_decoder_with_configs(self):
MODEL_ID = "patrickvonplaten/t5-tiny-random"
config = AutoConfig.from_pretrained(MODEL_ID)
benchmark_args = TensorFlowBenchmarkArguments(
models=[MODEL_ID],
training=False,
inference=True,
sequence_lengths=[8],
batch_sizes=[1],
multi_process=False,
)
benchmark = TensorFlowBenchmark(benchmark_args, configs=[config])
> results = benchmark.run()
tests/test_benchmark_tf.py:161:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
src/transformers/benchmark/benchmark_utils.py:707: in run
memory, inference_summary = self.inference_memory(model_name, batch_size, sequence_length)
src/transformers/benchmark/benchmark_utils.py:676: in inference_memory
return separate_process_wrapper_fn(self._inference_memory, self.args.do_multi_processing)(*args, **kwargs)
src/transformers/benchmark/benchmark_tf.py:112: in _inference_memory
return self._measure_memory(_inference)
src/transformers/benchmark/benchmark_tf.py:282: in _measure_memory
memory_bytes = measure_peak_memory_cpu(func)
src/transformers/benchmark/benchmark_utils.py:291: in measure_peak_memory_cpu
mem_process.start()
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/process.py:121: in start
self._popen = self._Popen(self)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/context.py:224: in _Popen
return _default_context.get_context().Process._Popen(process_obj)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/context.py:283: in _Popen
return Popen(process_obj)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/popen_spawn_posix.py:32: in __init__
super().__init__(process_obj)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/popen_fork.py:19: in __init__
self._launch(process_obj)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/popen_spawn_posix.py:47: in _launch
reduction.dump(process_obj, fp)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
obj = <MemoryMeasureProcess name='MemoryMeasureProcess-1' parent=10349 initial>
file = <_io.BytesIO object at 0x1c3958360>, protocol = None
def dump(obj, file, protocol=None):
'''Replacement for pickle.dump() using ForkingPickler.'''
> ForkingPickler(file, protocol).dump(obj)
E AttributeError: Can't pickle local object 'measure_peak_memory_cpu.<locals>.MemoryMeasureProcess'
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/reduction.py:60: AttributeError
__________ BenchmarkTest.test_inference_encoder_decoder_with_configs ___________
[gw2] darwin -- Python 3.8.2 /Users/sergiov/PycharmProjects/transformers/venv/bin/python
self = <tests.test_benchmark.BenchmarkTest testMethod=test_inference_encoder_decoder_with_configs>
def test_inference_encoder_decoder_with_configs(self):
MODEL_ID = "sshleifer/tinier_bart"
config = AutoConfig.from_pretrained(MODEL_ID)
benchmark_args = PyTorchBenchmarkArguments(
models=[MODEL_ID],
training=False,
inference=True,
sequence_lengths=[8],
batch_sizes=[1],
multi_process=False,
)
benchmark = PyTorchBenchmark(benchmark_args, configs=[config])
> results = benchmark.run()
tests/test_benchmark.py:178:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
src/transformers/benchmark/benchmark_utils.py:707: in run
memory, inference_summary = self.inference_memory(model_name, batch_size, sequence_length)
src/transformers/benchmark/benchmark_utils.py:676: in inference_memory
return separate_process_wrapper_fn(self._inference_memory, self.args.do_multi_processing)(*args, **kwargs)
src/transformers/benchmark/benchmark.py:69: in _inference_memory
return self._measure_memory(_inference)
src/transformers/benchmark/benchmark.py:258: in _measure_memory
memory_bytes = measure_peak_memory_cpu(func)
src/transformers/benchmark/benchmark_utils.py:291: in measure_peak_memory_cpu
mem_process.start()
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/process.py:121: in start
self._popen = self._Popen(self)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/context.py:224: in _Popen
return _default_context.get_context().Process._Popen(process_obj)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/context.py:283: in _Popen
return Popen(process_obj)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/popen_spawn_posix.py:32: in __init__
super().__init__(process_obj)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/popen_fork.py:19: in __init__
self._launch(process_obj)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/popen_spawn_posix.py:47: in _launch
reduction.dump(process_obj, fp)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
obj = <MemoryMeasureProcess name='MemoryMeasureProcess-1' parent=10347 initial>
file = <_io.BytesIO object at 0x1c312c540>, protocol = None
def dump(obj, file, protocol=None):
'''Replacement for pickle.dump() using ForkingPickler.'''
> ForkingPickler(file, protocol).dump(obj)
E AttributeError: Can't pickle local object 'measure_peak_memory_cpu.<locals>.MemoryMeasureProcess'
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/reduction.py:60: AttributeError
___________________ BenchmarkTest.test_inference_no_configs ____________________
[gw2] darwin -- Python 3.8.2 /Users/sergiov/PycharmProjects/transformers/venv/bin/python
self = <tests.test_benchmark.BenchmarkTest testMethod=test_inference_no_configs>
def test_inference_no_configs(self):
MODEL_ID = "sshleifer/tiny-gpt2"
benchmark_args = PyTorchBenchmarkArguments(
models=[MODEL_ID],
training=False,
inference=True,
sequence_lengths=[8],
batch_sizes=[1],
multi_process=False,
)
benchmark = PyTorchBenchmark(benchmark_args)
> results = benchmark.run()
tests/test_benchmark.py:47:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
src/transformers/benchmark/benchmark_utils.py:707: in run
memory, inference_summary = self.inference_memory(model_name, batch_size, sequence_length)
src/transformers/benchmark/benchmark_utils.py:676: in inference_memory
return separate_process_wrapper_fn(self._inference_memory, self.args.do_multi_processing)(*args, **kwargs)
src/transformers/benchmark/benchmark.py:69: in _inference_memory
return self._measure_memory(_inference)
src/transformers/benchmark/benchmark.py:258: in _measure_memory
memory_bytes = measure_peak_memory_cpu(func)
src/transformers/benchmark/benchmark_utils.py:291: in measure_peak_memory_cpu
mem_process.start()
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/process.py:121: in start
self._popen = self._Popen(self)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/context.py:224: in _Popen
return _default_context.get_context().Process._Popen(process_obj)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/context.py:283: in _Popen
return Popen(process_obj)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/popen_spawn_posix.py:32: in __init__
super().__init__(process_obj)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/popen_fork.py:19: in __init__
self._launch(process_obj)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/popen_spawn_posix.py:47: in _launch
reduction.dump(process_obj, fp)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
obj = <MemoryMeasureProcess name='MemoryMeasureProcess-2' parent=10347 initial>
file = <_io.BytesIO object at 0x1c30a6770>, protocol = None
def dump(obj, file, protocol=None):
'''Replacement for pickle.dump() using ForkingPickler.'''
> ForkingPickler(file, protocol).dump(obj)
E AttributeError: Can't pickle local object 'measure_peak_memory_cpu.<locals>.MemoryMeasureProcess'
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/reduction.py:60: AttributeError
_______________ TFBenchmarkTest.test_inference_no_configs_eager ________________
[gw3] darwin -- Python 3.8.2 /Users/sergiov/PycharmProjects/transformers/venv/bin/python
self = <tests.test_benchmark_tf.TFBenchmarkTest testMethod=test_inference_no_configs_eager>
def test_inference_no_configs_eager(self):
MODEL_ID = "sshleifer/tiny-gpt2"
benchmark_args = TensorFlowBenchmarkArguments(
models=[MODEL_ID],
training=False,
inference=True,
sequence_lengths=[8],
batch_sizes=[1],
eager_mode=True,
multi_process=False,
)
benchmark = TensorFlowBenchmark(benchmark_args)
> results = benchmark.run()
tests/test_benchmark_tf.py:50:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
src/transformers/benchmark/benchmark_utils.py:707: in run
memory, inference_summary = self.inference_memory(model_name, batch_size, sequence_length)
src/transformers/benchmark/benchmark_utils.py:676: in inference_memory
return separate_process_wrapper_fn(self._inference_memory, self.args.do_multi_processing)(*args, **kwargs)
src/transformers/benchmark/benchmark_tf.py:112: in _inference_memory
return self._measure_memory(_inference)
src/transformers/benchmark/benchmark_tf.py:282: in _measure_memory
memory_bytes = measure_peak_memory_cpu(func)
src/transformers/benchmark/benchmark_utils.py:291: in measure_peak_memory_cpu
mem_process.start()
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/process.py:121: in start
self._popen = self._Popen(self)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/context.py:224: in _Popen
return _default_context.get_context().Process._Popen(process_obj)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/context.py:283: in _Popen
return Popen(process_obj)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/popen_spawn_posix.py:32: in __init__
super().__init__(process_obj)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/popen_fork.py:19: in __init__
self._launch(process_obj)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/popen_spawn_posix.py:47: in _launch
reduction.dump(process_obj, fp)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
obj = <MemoryMeasureProcess name='MemoryMeasureProcess-2' parent=10349 initial>
file = <_io.BytesIO object at 0x1c3a92450>, protocol = None
def dump(obj, file, protocol=None):
'''Replacement for pickle.dump() using ForkingPickler.'''
> ForkingPickler(file, protocol).dump(obj)
E AttributeError: Can't pickle local object 'measure_peak_memory_cpu.<locals>.MemoryMeasureProcess'
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/reduction.py:60: AttributeError
____________ BenchmarkTest.test_inference_no_configs_only_pretrain _____________
[gw2] darwin -- Python 3.8.2 /Users/sergiov/PycharmProjects/transformers/venv/bin/python
self = <tests.test_benchmark.BenchmarkTest testMethod=test_inference_no_configs_only_pretrain>
def test_inference_no_configs_only_pretrain(self):
MODEL_ID = "sgugger/tiny-distilbert-classification"
benchmark_args = PyTorchBenchmarkArguments(
models=[MODEL_ID],
training=False,
inference=True,
sequence_lengths=[8],
batch_sizes=[1],
multi_process=False,
only_pretrain_model=True,
)
benchmark = PyTorchBenchmark(benchmark_args)
> results = benchmark.run()
tests/test_benchmark.py:63:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
src/transformers/benchmark/benchmark_utils.py:707: in run
memory, inference_summary = self.inference_memory(model_name, batch_size, sequence_length)
src/transformers/benchmark/benchmark_utils.py:676: in inference_memory
return separate_process_wrapper_fn(self._inference_memory, self.args.do_multi_processing)(*args, **kwargs)
src/transformers/benchmark/benchmark.py:69: in _inference_memory
return self._measure_memory(_inference)
src/transformers/benchmark/benchmark.py:258: in _measure_memory
memory_bytes = measure_peak_memory_cpu(func)
src/transformers/benchmark/benchmark_utils.py:291: in measure_peak_memory_cpu
mem_process.start()
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/process.py:121: in start
self._popen = self._Popen(self)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/context.py:224: in _Popen
return _default_context.get_context().Process._Popen(process_obj)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/context.py:283: in _Popen
return Popen(process_obj)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/popen_spawn_posix.py:32: in __init__
super().__init__(process_obj)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/popen_fork.py:19: in __init__
self._launch(process_obj)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/popen_spawn_posix.py:47: in _launch
reduction.dump(process_obj, fp)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
obj = <MemoryMeasureProcess name='MemoryMeasureProcess-3' parent=10347 initial>
file = <_io.BytesIO object at 0x1c30a6220>, protocol = None
def dump(obj, file, protocol=None):
'''Replacement for pickle.dump() using ForkingPickler.'''
> ForkingPickler(file, protocol).dump(obj)
E AttributeError: Can't pickle local object 'measure_peak_memory_cpu.<locals>.MemoryMeasureProcess'
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/reduction.py:60: AttributeError
_______________ TFBenchmarkTest.test_inference_no_configs_graph ________________
[gw3] darwin -- Python 3.8.2 /Users/sergiov/PycharmProjects/transformers/venv/bin/python
self = <tests.test_benchmark_tf.TFBenchmarkTest testMethod=test_inference_no_configs_graph>
def test_inference_no_configs_graph(self):
MODEL_ID = "sshleifer/tiny-gpt2"
benchmark_args = TensorFlowBenchmarkArguments(
models=[MODEL_ID],
training=False,
inference=True,
sequence_lengths=[8],
batch_sizes=[1],
multi_process=False,
)
benchmark = TensorFlowBenchmark(benchmark_args)
> results = benchmark.run()
tests/test_benchmark_tf.py:81:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
src/transformers/benchmark/benchmark_utils.py:707: in run
memory, inference_summary = self.inference_memory(model_name, batch_size, sequence_length)
src/transformers/benchmark/benchmark_utils.py:676: in inference_memory
return separate_process_wrapper_fn(self._inference_memory, self.args.do_multi_processing)(*args, **kwargs)
src/transformers/benchmark/benchmark_tf.py:112: in _inference_memory
return self._measure_memory(_inference)
src/transformers/benchmark/benchmark_tf.py:282: in _measure_memory
memory_bytes = measure_peak_memory_cpu(func)
src/transformers/benchmark/benchmark_utils.py:291: in measure_peak_memory_cpu
mem_process.start()
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/process.py:121: in start
self._popen = self._Popen(self)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/context.py:224: in _Popen
return _default_context.get_context().Process._Popen(process_obj)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/context.py:283: in _Popen
return Popen(process_obj)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/popen_spawn_posix.py:32: in __init__
super().__init__(process_obj)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/popen_fork.py:19: in __init__
self._launch(process_obj)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/popen_spawn_posix.py:47: in _launch
reduction.dump(process_obj, fp)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
obj = <MemoryMeasureProcess name='MemoryMeasureProcess-3' parent=10349 initial>
file = <_io.BytesIO object at 0x1c3aaa040>, protocol = None
def dump(obj, file, protocol=None):
'''Replacement for pickle.dump() using ForkingPickler.'''
> ForkingPickler(file, protocol).dump(obj)
E AttributeError: Can't pickle local object 'measure_peak_memory_cpu.<locals>.MemoryMeasureProcess'
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/reduction.py:60: AttributeError
____________ BenchmarkTest.test_inference_no_model_no_architectures ____________
[gw2] darwin -- Python 3.8.2 /Users/sergiov/PycharmProjects/transformers/venv/bin/python
self = <tests.test_benchmark.BenchmarkTest testMethod=test_inference_no_model_no_architectures>
def test_inference_no_model_no_architectures(self):
MODEL_ID = "sshleifer/tiny-gpt2"
config = AutoConfig.from_pretrained(MODEL_ID)
# set architectures equal to `None`
config.architectures = None
benchmark_args = PyTorchBenchmarkArguments(
models=[MODEL_ID],
training=True,
inference=True,
sequence_lengths=[8],
batch_sizes=[1],
multi_process=False,
)
benchmark = PyTorchBenchmark(benchmark_args, configs=[config])
> results = benchmark.run()
tests/test_benchmark.py:114:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
src/transformers/benchmark/benchmark_utils.py:707: in run
memory, inference_summary = self.inference_memory(model_name, batch_size, sequence_length)
src/transformers/benchmark/benchmark_utils.py:676: in inference_memory
return separate_process_wrapper_fn(self._inference_memory, self.args.do_multi_processing)(*args, **kwargs)
src/transformers/benchmark/benchmark.py:69: in _inference_memory
return self._measure_memory(_inference)
src/transformers/benchmark/benchmark.py:258: in _measure_memory
memory_bytes = measure_peak_memory_cpu(func)
src/transformers/benchmark/benchmark_utils.py:291: in measure_peak_memory_cpu
mem_process.start()
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/process.py:121: in start
self._popen = self._Popen(self)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/context.py:224: in _Popen
return _default_context.get_context().Process._Popen(process_obj)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/context.py:283: in _Popen
return Popen(process_obj)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/popen_spawn_posix.py:32: in __init__
super().__init__(process_obj)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/popen_fork.py:19: in __init__
self._launch(process_obj)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/popen_spawn_posix.py:47: in _launch
reduction.dump(process_obj, fp)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
obj = <MemoryMeasureProcess name='MemoryMeasureProcess-4' parent=10347 initial>
file = <_io.BytesIO object at 0x1c32df090>, protocol = None
def dump(obj, file, protocol=None):
'''Replacement for pickle.dump() using ForkingPickler.'''
> ForkingPickler(file, protocol).dump(obj)
E AttributeError: Can't pickle local object 'measure_peak_memory_cpu.<locals>.MemoryMeasureProcess'
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/reduction.py:60: AttributeError
___________ TFBenchmarkTest.test_inference_no_configs_only_pretrain ____________
[gw3] darwin -- Python 3.8.2 /Users/sergiov/PycharmProjects/transformers/venv/bin/python
self = <tests.test_benchmark_tf.TFBenchmarkTest testMethod=test_inference_no_configs_only_pretrain>
def test_inference_no_configs_only_pretrain(self):
MODEL_ID = "sgugger/tiny-distilbert-classification"
benchmark_args = TensorFlowBenchmarkArguments(
models=[MODEL_ID],
training=False,
inference=True,
sequence_lengths=[8],
batch_sizes=[1],
multi_process=False,
only_pretrain_model=True,
)
benchmark = TensorFlowBenchmark(benchmark_args)
> results = benchmark.run()
tests/test_benchmark_tf.py:66:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
src/transformers/benchmark/benchmark_utils.py:707: in run
memory, inference_summary = self.inference_memory(model_name, batch_size, sequence_length)
src/transformers/benchmark/benchmark_utils.py:676: in inference_memory
return separate_process_wrapper_fn(self._inference_memory, self.args.do_multi_processing)(*args, **kwargs)
src/transformers/benchmark/benchmark_tf.py:112: in _inference_memory
return self._measure_memory(_inference)
src/transformers/benchmark/benchmark_tf.py:282: in _measure_memory
memory_bytes = measure_peak_memory_cpu(func)
src/transformers/benchmark/benchmark_utils.py:291: in measure_peak_memory_cpu
mem_process.start()
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/process.py:121: in start
self._popen = self._Popen(self)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/context.py:224: in _Popen
return _default_context.get_context().Process._Popen(process_obj)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/context.py:283: in _Popen
return Popen(process_obj)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/popen_spawn_posix.py:32: in __init__
super().__init__(process_obj)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/popen_fork.py:19: in __init__
self._launch(process_obj)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/popen_spawn_posix.py:47: in _launch
reduction.dump(process_obj, fp)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
obj = <MemoryMeasureProcess name='MemoryMeasureProcess-4' parent=10349 initial>
file = <_io.BytesIO object at 0x1c3af8c70>, protocol = None
def dump(obj, file, protocol=None):
'''Replacement for pickle.dump() using ForkingPickler.'''
> ForkingPickler(file, protocol).dump(obj)
E AttributeError: Can't pickle local object 'measure_peak_memory_cpu.<locals>.MemoryMeasureProcess'
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/reduction.py:60: AttributeError
______________ TFBenchmarkTest.test_inference_with_configs_eager _______________
[gw3] darwin -- Python 3.8.2 /Users/sergiov/PycharmProjects/transformers/venv/bin/python
self = <tests.test_benchmark_tf.TFBenchmarkTest testMethod=test_inference_with_configs_eager>
def test_inference_with_configs_eager(self):
MODEL_ID = "sshleifer/tiny-gpt2"
config = AutoConfig.from_pretrained(MODEL_ID)
benchmark_args = TensorFlowBenchmarkArguments(
models=[MODEL_ID],
training=False,
inference=True,
sequence_lengths=[8],
batch_sizes=[1],
eager_mode=True,
multi_process=False,
)
benchmark = TensorFlowBenchmark(benchmark_args, [config])
> results = benchmark.run()
tests/test_benchmark_tf.py:98:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
src/transformers/benchmark/benchmark_utils.py:707: in run
memory, inference_summary = self.inference_memory(model_name, batch_size, sequence_length)
src/transformers/benchmark/benchmark_utils.py:676: in inference_memory
return separate_process_wrapper_fn(self._inference_memory, self.args.do_multi_processing)(*args, **kwargs)
src/transformers/benchmark/benchmark_tf.py:112: in _inference_memory
return self._measure_memory(_inference)
src/transformers/benchmark/benchmark_tf.py:282: in _measure_memory
memory_bytes = measure_peak_memory_cpu(func)
src/transformers/benchmark/benchmark_utils.py:291: in measure_peak_memory_cpu
mem_process.start()
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/process.py:121: in start
self._popen = self._Popen(self)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/context.py:224: in _Popen
return _default_context.get_context().Process._Popen(process_obj)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/context.py:283: in _Popen
return Popen(process_obj)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/popen_spawn_posix.py:32: in __init__
super().__init__(process_obj)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/popen_fork.py:19: in __init__
self._launch(process_obj)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/popen_spawn_posix.py:47: in _launch
reduction.dump(process_obj, fp)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
obj = <MemoryMeasureProcess name='MemoryMeasureProcess-5' parent=10349 initial>
file = <_io.BytesIO object at 0x1c39cca90>, protocol = None
def dump(obj, file, protocol=None):
'''Replacement for pickle.dump() using ForkingPickler.'''
> ForkingPickler(file, protocol).dump(obj)
E AttributeError: Can't pickle local object 'measure_peak_memory_cpu.<locals>.MemoryMeasureProcess'
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/reduction.py:60: AttributeError
___________________ BenchmarkTest.test_inference_torchscript ___________________
[gw2] darwin -- Python 3.8.2 /Users/sergiov/PycharmProjects/transformers/venv/bin/python
self = <tests.test_benchmark.BenchmarkTest testMethod=test_inference_torchscript>
def test_inference_torchscript(self):
MODEL_ID = "sshleifer/tiny-gpt2"
benchmark_args = PyTorchBenchmarkArguments(
models=[MODEL_ID],
training=False,
inference=True,
torchscript=True,
sequence_lengths=[8],
batch_sizes=[1],
multi_process=False,
)
benchmark = PyTorchBenchmark(benchmark_args)
> results = benchmark.run()
tests/test_benchmark.py:79:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
src/transformers/benchmark/benchmark_utils.py:707: in run
memory, inference_summary = self.inference_memory(model_name, batch_size, sequence_length)
src/transformers/benchmark/benchmark_utils.py:676: in inference_memory
return separate_process_wrapper_fn(self._inference_memory, self.args.do_multi_processing)(*args, **kwargs)
src/transformers/benchmark/benchmark.py:69: in _inference_memory
return self._measure_memory(_inference)
src/transformers/benchmark/benchmark.py:258: in _measure_memory
memory_bytes = measure_peak_memory_cpu(func)
src/transformers/benchmark/benchmark_utils.py:291: in measure_peak_memory_cpu
mem_process.start()
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/process.py:121: in start
self._popen = self._Popen(self)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/context.py:224: in _Popen
return _default_context.get_context().Process._Popen(process_obj)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/context.py:283: in _Popen
return Popen(process_obj)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/popen_spawn_posix.py:32: in __init__
super().__init__(process_obj)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/popen_fork.py:19: in __init__
self._launch(process_obj)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/popen_spawn_posix.py:47: in _launch
reduction.dump(process_obj, fp)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
obj = <MemoryMeasureProcess name='MemoryMeasureProcess-5' parent=10347 initial>
file = <_io.BytesIO object at 0x1c34189a0>, protocol = None
def dump(obj, file, protocol=None):
'''Replacement for pickle.dump() using ForkingPickler.'''
> ForkingPickler(file, protocol).dump(obj)
E AttributeError: Can't pickle local object 'measure_peak_memory_cpu.<locals>.MemoryMeasureProcess'
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/reduction.py:60: AttributeError
______________ TFBenchmarkTest.test_inference_with_configs_graph _______________
[gw3] darwin -- Python 3.8.2 /Users/sergiov/PycharmProjects/transformers/venv/bin/python
self = <tests.test_benchmark_tf.TFBenchmarkTest testMethod=test_inference_with_configs_graph>
def test_inference_with_configs_graph(self):
MODEL_ID = "sshleifer/tiny-gpt2"
config = AutoConfig.from_pretrained(MODEL_ID)
benchmark_args = TensorFlowBenchmarkArguments(
models=[MODEL_ID],
training=False,
inference=True,
sequence_lengths=[8],
batch_sizes=[1],
multi_process=False,
)
benchmark = TensorFlowBenchmark(benchmark_args, [config])
> results = benchmark.run()
tests/test_benchmark_tf.py:114:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
src/transformers/benchmark/benchmark_utils.py:707: in run
memory, inference_summary = self.inference_memory(model_name, batch_size, sequence_length)
src/transformers/benchmark/benchmark_utils.py:676: in inference_memory
return separate_process_wrapper_fn(self._inference_memory, self.args.do_multi_processing)(*args, **kwargs)
src/transformers/benchmark/benchmark_tf.py:112: in _inference_memory
return self._measure_memory(_inference)
src/transformers/benchmark/benchmark_tf.py:282: in _measure_memory
memory_bytes = measure_peak_memory_cpu(func)
src/transformers/benchmark/benchmark_utils.py:291: in measure_peak_memory_cpu
mem_process.start()
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/process.py:121: in start
self._popen = self._Popen(self)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/context.py:224: in _Popen
return _default_context.get_context().Process._Popen(process_obj)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/context.py:283: in _Popen
return Popen(process_obj)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/popen_spawn_posix.py:32: in __init__
super().__init__(process_obj)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/popen_fork.py:19: in __init__
self._launch(process_obj)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/popen_spawn_posix.py:47: in _launch
reduction.dump(process_obj, fp)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
obj = <MemoryMeasureProcess name='MemoryMeasureProcess-6' parent=10349 initial>
file = <_io.BytesIO object at 0x1c38ba860>, protocol = None
def dump(obj, file, protocol=None):
'''Replacement for pickle.dump() using ForkingPickler.'''
> ForkingPickler(file, protocol).dump(obj)
E AttributeError: Can't pickle local object 'measure_peak_memory_cpu.<locals>.MemoryMeasureProcess'
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/reduction.py:60: AttributeError
__________________ BenchmarkTest.test_inference_with_configs ___________________
[gw2] darwin -- Python 3.8.2 /Users/sergiov/PycharmProjects/transformers/venv/bin/python
self = <tests.test_benchmark.BenchmarkTest testMethod=test_inference_with_configs>
def test_inference_with_configs(self):
MODEL_ID = "sshleifer/tiny-gpt2"
config = AutoConfig.from_pretrained(MODEL_ID)
benchmark_args = PyTorchBenchmarkArguments(
models=[MODEL_ID],
training=False,
inference=True,
sequence_lengths=[8],
batch_sizes=[1],
multi_process=False,
)
benchmark = PyTorchBenchmark(benchmark_args, configs=[config])
> results = benchmark.run()
tests/test_benchmark.py:162:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
src/transformers/benchmark/benchmark_utils.py:707: in run
memory, inference_summary = self.inference_memory(model_name, batch_size, sequence_length)
src/transformers/benchmark/benchmark_utils.py:676: in inference_memory
return separate_process_wrapper_fn(self._inference_memory, self.args.do_multi_processing)(*args, **kwargs)
src/transformers/benchmark/benchmark.py:69: in _inference_memory
return self._measure_memory(_inference)
src/transformers/benchmark/benchmark.py:258: in _measure_memory
memory_bytes = measure_peak_memory_cpu(func)
src/transformers/benchmark/benchmark_utils.py:291: in measure_peak_memory_cpu
mem_process.start()
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/process.py:121: in start
self._popen = self._Popen(self)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/context.py:224: in _Popen
return _default_context.get_context().Process._Popen(process_obj)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/context.py:283: in _Popen
return Popen(process_obj)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/popen_spawn_posix.py:32: in __init__
super().__init__(process_obj)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/popen_fork.py:19: in __init__
self._launch(process_obj)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/popen_spawn_posix.py:47: in _launch
reduction.dump(process_obj, fp)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
obj = <MemoryMeasureProcess name='MemoryMeasureProcess-6' parent=10347 initial>
file = <_io.BytesIO object at 0x1c2f605e0>, protocol = None
def dump(obj, file, protocol=None):
'''Replacement for pickle.dump() using ForkingPickler.'''
> ForkingPickler(file, protocol).dump(obj)
E AttributeError: Can't pickle local object 'measure_peak_memory_cpu.<locals>.MemoryMeasureProcess'
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/reduction.py:60: AttributeError
_____________________ TFBenchmarkTest.test_save_csv_files ______________________
[gw3] darwin -- Python 3.8.2 /Users/sergiov/PycharmProjects/transformers/venv/bin/python
self = <tests.test_benchmark_tf.TFBenchmarkTest testMethod=test_save_csv_files>
def test_save_csv_files(self):
MODEL_ID = "sshleifer/tiny-gpt2"
with tempfile.TemporaryDirectory() as tmp_dir:
benchmark_args = TensorFlowBenchmarkArguments(
models=[MODEL_ID],
inference=True,
save_to_csv=True,
sequence_lengths=[8],
batch_sizes=[1],
inference_time_csv_file=os.path.join(tmp_dir, "inf_time.csv"),
inference_memory_csv_file=os.path.join(tmp_dir, "inf_mem.csv"),
env_info_csv_file=os.path.join(tmp_dir, "env.csv"),
multi_process=False,
)
benchmark = TensorFlowBenchmark(benchmark_args)
> benchmark.run()
tests/test_benchmark_tf.py:197:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
src/transformers/benchmark/benchmark_utils.py:707: in run
memory, inference_summary = self.inference_memory(model_name, batch_size, sequence_length)
src/transformers/benchmark/benchmark_utils.py:676: in inference_memory
return separate_process_wrapper_fn(self._inference_memory, self.args.do_multi_processing)(*args, **kwargs)
src/transformers/benchmark/benchmark_tf.py:112: in _inference_memory
return self._measure_memory(_inference)
src/transformers/benchmark/benchmark_tf.py:282: in _measure_memory
memory_bytes = measure_peak_memory_cpu(func)
src/transformers/benchmark/benchmark_utils.py:291: in measure_peak_memory_cpu
mem_process.start()
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/process.py:121: in start
self._popen = self._Popen(self)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/context.py:224: in _Popen
return _default_context.get_context().Process._Popen(process_obj)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/context.py:283: in _Popen
return Popen(process_obj)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/popen_spawn_posix.py:32: in __init__
super().__init__(process_obj)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/popen_fork.py:19: in __init__
self._launch(process_obj)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/popen_spawn_posix.py:47: in _launch
reduction.dump(process_obj, fp)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
obj = <MemoryMeasureProcess name='MemoryMeasureProcess-7' parent=10349 initial>
file = <_io.BytesIO object at 0x1bb6e7220>, protocol = None
def dump(obj, file, protocol=None):
'''Replacement for pickle.dump() using ForkingPickler.'''
> ForkingPickler(file, protocol).dump(obj)
E AttributeError: Can't pickle local object 'measure_peak_memory_cpu.<locals>.MemoryMeasureProcess'
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/reduction.py:60: AttributeError
______________________ BenchmarkTest.test_save_csv_files _______________________
[gw2] darwin -- Python 3.8.2 /Users/sergiov/PycharmProjects/transformers/venv/bin/python
self = <tests.test_benchmark.BenchmarkTest testMethod=test_save_csv_files>
def test_save_csv_files(self):
MODEL_ID = "sshleifer/tiny-gpt2"
with tempfile.TemporaryDirectory() as tmp_dir:
benchmark_args = PyTorchBenchmarkArguments(
models=[MODEL_ID],
training=True,
inference=True,
save_to_csv=True,
sequence_lengths=[8],
batch_sizes=[1],
inference_time_csv_file=os.path.join(tmp_dir, "inf_time.csv"),
train_memory_csv_file=os.path.join(tmp_dir, "train_mem.csv"),
inference_memory_csv_file=os.path.join(tmp_dir, "inf_mem.csv"),
train_time_csv_file=os.path.join(tmp_dir, "train_time.csv"),
env_info_csv_file=os.path.join(tmp_dir, "env.csv"),
multi_process=False,
)
benchmark = PyTorchBenchmark(benchmark_args)
> benchmark.run()
tests/test_benchmark.py:232:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
src/transformers/benchmark/benchmark_utils.py:707: in run
memory, inference_summary = self.inference_memory(model_name, batch_size, sequence_length)
src/transformers/benchmark/benchmark_utils.py:676: in inference_memory
return separate_process_wrapper_fn(self._inference_memory, self.args.do_multi_processing)(*args, **kwargs)
src/transformers/benchmark/benchmark.py:69: in _inference_memory
return self._measure_memory(_inference)
src/transformers/benchmark/benchmark.py:258: in _measure_memory
memory_bytes = measure_peak_memory_cpu(func)
src/transformers/benchmark/benchmark_utils.py:291: in measure_peak_memory_cpu
mem_process.start()
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/process.py:121: in start
self._popen = self._Popen(self)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/context.py:224: in _Popen
return _default_context.get_context().Process._Popen(process_obj)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/context.py:283: in _Popen
return Popen(process_obj)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/popen_spawn_posix.py:32: in __init__
super().__init__(process_obj)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/popen_fork.py:19: in __init__
self._launch(process_obj)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/popen_spawn_posix.py:47: in _launch
reduction.dump(process_obj, fp)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
obj = <MemoryMeasureProcess name='MemoryMeasureProcess-7' parent=10347 initial>
file = <_io.BytesIO object at 0x1c30a64f0>, protocol = None
def dump(obj, file, protocol=None):
'''Replacement for pickle.dump() using ForkingPickler.'''
> ForkingPickler(file, protocol).dump(obj)
E AttributeError: Can't pickle local object 'measure_peak_memory_cpu.<locals>.MemoryMeasureProcess'
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/reduction.py:60: AttributeError
_______________________ BenchmarkTest.test_trace_memory ________________________
[gw2] darwin -- Python 3.8.2 /Users/sergiov/PycharmProjects/transformers/venv/bin/python
self = <tests.test_benchmark.BenchmarkTest testMethod=test_trace_memory>
def test_trace_memory(self):
MODEL_ID = "sshleifer/tiny-gpt2"
def _check_summary_is_not_empty(summary):
self.assertTrue(hasattr(summary, "sequential"))
self.assertTrue(hasattr(summary, "cumulative"))
self.assertTrue(hasattr(summary, "current"))
self.assertTrue(hasattr(summary, "total"))
with tempfile.TemporaryDirectory() as tmp_dir:
benchmark_args = PyTorchBenchmarkArguments(
models=[MODEL_ID],
training=True,
inference=True,
sequence_lengths=[8],
batch_sizes=[1],
log_filename=os.path.join(tmp_dir, "log.txt"),
log_print=True,
trace_memory_line_by_line=True,
multi_process=False,
)
benchmark = PyTorchBenchmark(benchmark_args)
> result = benchmark.run()
tests/test_benchmark.py:261:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
src/transformers/benchmark/benchmark_utils.py:707: in run
memory, inference_summary = self.inference_memory(model_name, batch_size, sequence_length)
src/transformers/benchmark/benchmark_utils.py:676: in inference_memory
return separate_process_wrapper_fn(self._inference_memory, self.args.do_multi_processing)(*args, **kwargs)
src/transformers/benchmark/benchmark.py:69: in _inference_memory
return self._measure_memory(_inference)
src/transformers/benchmark/benchmark.py:258: in _measure_memory
memory_bytes = measure_peak_memory_cpu(func)
src/transformers/benchmark/benchmark_utils.py:291: in measure_peak_memory_cpu
mem_process.start()
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/process.py:121: in start
self._popen = self._Popen(self)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/context.py:224: in _Popen
return _default_context.get_context().Process._Popen(process_obj)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/context.py:283: in _Popen
return Popen(process_obj)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/popen_spawn_posix.py:32: in __init__
super().__init__(process_obj)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/popen_fork.py:19: in __init__
self._launch(process_obj)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/popen_spawn_posix.py:47: in _launch
reduction.dump(process_obj, fp)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
obj = <MemoryMeasureProcess name='MemoryMeasureProcess-8' parent=10347 initial>
file = <_io.BytesIO object at 0x1c32298b0>, protocol = None
def dump(obj, file, protocol=None):
'''Replacement for pickle.dump() using ForkingPickler.'''
> ForkingPickler(file, protocol).dump(obj)
E AttributeError: Can't pickle local object 'measure_peak_memory_cpu.<locals>.MemoryMeasureProcess'
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/reduction.py:60: AttributeError
____________________ TFBenchmarkTest.test_train_no_configs _____________________
[gw3] darwin -- Python 3.8.2 /Users/sergiov/PycharmProjects/transformers/venv/bin/python
self = <tests.test_benchmark_tf.TFBenchmarkTest testMethod=test_train_no_configs>
def test_train_no_configs(self):
MODEL_ID = "sshleifer/tiny-gpt2"
benchmark_args = TensorFlowBenchmarkArguments(
models=[MODEL_ID],
training=True,
inference=False,
sequence_lengths=[8],
batch_sizes=[1],
multi_process=False,
)
benchmark = TensorFlowBenchmark(benchmark_args)
> results = benchmark.run()
tests/test_benchmark_tf.py:129:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
src/transformers/benchmark/benchmark_utils.py:715: in run
memory, train_summary = self.train_memory(model_name, batch_size, sequence_length)
src/transformers/benchmark/benchmark_utils.py:679: in train_memory
return separate_process_wrapper_fn(self._train_memory, self.args.do_multi_processing)(*args, **kwargs)
src/transformers/benchmark/benchmark_tf.py:123: in _train_memory
return self._measure_memory(_train)
src/transformers/benchmark/benchmark_tf.py:282: in _measure_memory
memory_bytes = measure_peak_memory_cpu(func)
src/transformers/benchmark/benchmark_utils.py:291: in measure_peak_memory_cpu
mem_process.start()
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/process.py:121: in start
self._popen = self._Popen(self)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/context.py:224: in _Popen
return _default_context.get_context().Process._Popen(process_obj)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/context.py:283: in _Popen
return Popen(process_obj)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/popen_spawn_posix.py:32: in __init__
super().__init__(process_obj)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/popen_fork.py:19: in __init__
self._launch(process_obj)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/popen_spawn_posix.py:47: in _launch
reduction.dump(process_obj, fp)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
obj = <MemoryMeasureProcess name='MemoryMeasureProcess-8' parent=10349 initial>
file = <_io.BytesIO object at 0x1c39278b0>, protocol = None
def dump(obj, file, protocol=None):
'''Replacement for pickle.dump() using ForkingPickler.'''
> ForkingPickler(file, protocol).dump(obj)
E AttributeError: Can't pickle local object 'measure_peak_memory_cpu.<locals>.MemoryMeasureProcess'
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/reduction.py:60: AttributeError
____________ BenchmarkTest.test_train_encoder_decoder_with_configs _____________
[gw2] darwin -- Python 3.8.2 /Users/sergiov/PycharmProjects/transformers/venv/bin/python
self = <tests.test_benchmark.BenchmarkTest testMethod=test_train_encoder_decoder_with_configs>
def test_train_encoder_decoder_with_configs(self):
MODEL_ID = "sshleifer/tinier_bart"
config = AutoConfig.from_pretrained(MODEL_ID)
benchmark_args = PyTorchBenchmarkArguments(
models=[MODEL_ID],
training=True,
inference=True,
sequence_lengths=[8],
batch_sizes=[1],
multi_process=False,
)
benchmark = PyTorchBenchmark(benchmark_args, configs=[config])
> results = benchmark.run()
tests/test_benchmark.py:210:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
src/transformers/benchmark/benchmark_utils.py:707: in run
memory, inference_summary = self.inference_memory(model_name, batch_size, sequence_length)
src/transformers/benchmark/benchmark_utils.py:676: in inference_memory
return separate_process_wrapper_fn(self._inference_memory, self.args.do_multi_processing)(*args, **kwargs)
src/transformers/benchmark/benchmark.py:69: in _inference_memory
return self._measure_memory(_inference)
src/transformers/benchmark/benchmark.py:258: in _measure_memory
memory_bytes = measure_peak_memory_cpu(func)
src/transformers/benchmark/benchmark_utils.py:291: in measure_peak_memory_cpu
mem_process.start()
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/process.py:121: in start
self._popen = self._Popen(self)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/context.py:224: in _Popen
return _default_context.get_context().Process._Popen(process_obj)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/context.py:283: in _Popen
return Popen(process_obj)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/popen_spawn_posix.py:32: in __init__
super().__init__(process_obj)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/popen_fork.py:19: in __init__
self._launch(process_obj)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/popen_spawn_posix.py:47: in _launch
reduction.dump(process_obj, fp)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
obj = <MemoryMeasureProcess name='MemoryMeasureProcess-9' parent=10347 initial>
file = <_io.BytesIO object at 0x1c3745360>, protocol = None
def dump(obj, file, protocol=None):
'''Replacement for pickle.dump() using ForkingPickler.'''
> ForkingPickler(file, protocol).dump(obj)
E AttributeError: Can't pickle local object 'measure_peak_memory_cpu.<locals>.MemoryMeasureProcess'
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/reduction.py:60: AttributeError
___________________ TFBenchmarkTest.test_train_with_configs ____________________
[gw3] darwin -- Python 3.8.2 /Users/sergiov/PycharmProjects/transformers/venv/bin/python
self = <tests.test_benchmark_tf.TFBenchmarkTest testMethod=test_train_with_configs>
def test_train_with_configs(self):
MODEL_ID = "sshleifer/tiny-gpt2"
config = AutoConfig.from_pretrained(MODEL_ID)
benchmark_args = TensorFlowBenchmarkArguments(
models=[MODEL_ID],
training=True,
inference=False,
sequence_lengths=[8],
batch_sizes=[1],
multi_process=False,
)
benchmark = TensorFlowBenchmark(benchmark_args, [config])
> results = benchmark.run()
tests/test_benchmark_tf.py:145:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
src/transformers/benchmark/benchmark_utils.py:715: in run
memory, train_summary = self.train_memory(model_name, batch_size, sequence_length)
src/transformers/benchmark/benchmark_utils.py:679: in train_memory
return separate_process_wrapper_fn(self._train_memory, self.args.do_multi_processing)(*args, **kwargs)
src/transformers/benchmark/benchmark_tf.py:123: in _train_memory
return self._measure_memory(_train)
src/transformers/benchmark/benchmark_tf.py:282: in _measure_memory
memory_bytes = measure_peak_memory_cpu(func)
src/transformers/benchmark/benchmark_utils.py:291: in measure_peak_memory_cpu
mem_process.start()
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/process.py:121: in start
self._popen = self._Popen(self)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/context.py:224: in _Popen
return _default_context.get_context().Process._Popen(process_obj)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/context.py:283: in _Popen
return Popen(process_obj)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/popen_spawn_posix.py:32: in __init__
super().__init__(process_obj)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/popen_fork.py:19: in __init__
self._launch(process_obj)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/popen_spawn_posix.py:47: in _launch
reduction.dump(process_obj, fp)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
obj = <MemoryMeasureProcess name='MemoryMeasureProcess-9' parent=10349 initial>
file = <_io.BytesIO object at 0x1c3c35ae0>, protocol = None
def dump(obj, file, protocol=None):
'''Replacement for pickle.dump() using ForkingPickler.'''
> ForkingPickler(file, protocol).dump(obj)
E AttributeError: Can't pickle local object 'measure_peak_memory_cpu.<locals>.MemoryMeasureProcess'
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/reduction.py:60: AttributeError
_____________________ BenchmarkTest.test_train_no_configs ______________________
[gw2] darwin -- Python 3.8.2 /Users/sergiov/PycharmProjects/transformers/venv/bin/python
self = <tests.test_benchmark.BenchmarkTest testMethod=test_train_no_configs>
def test_train_no_configs(self):
MODEL_ID = "sshleifer/tiny-gpt2"
benchmark_args = PyTorchBenchmarkArguments(
models=[MODEL_ID],
training=True,
inference=False,
sequence_lengths=[8],
batch_sizes=[1],
multi_process=False,
)
benchmark = PyTorchBenchmark(benchmark_args)
> results = benchmark.run()
tests/test_benchmark.py:129:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
src/transformers/benchmark/benchmark_utils.py:715: in run
memory, train_summary = self.train_memory(model_name, batch_size, sequence_length)
src/transformers/benchmark/benchmark_utils.py:679: in train_memory
return separate_process_wrapper_fn(self._train_memory, self.args.do_multi_processing)(*args, **kwargs)
src/transformers/benchmark/benchmark.py:79: in _train_memory
return self._measure_memory(_train)
src/transformers/benchmark/benchmark.py:258: in _measure_memory
memory_bytes = measure_peak_memory_cpu(func)
src/transformers/benchmark/benchmark_utils.py:291: in measure_peak_memory_cpu
mem_process.start()
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/process.py:121: in start
self._popen = self._Popen(self)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/context.py:224: in _Popen
return _default_context.get_context().Process._Popen(process_obj)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/context.py:283: in _Popen
return Popen(process_obj)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/popen_spawn_posix.py:32: in __init__
super().__init__(process_obj)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/popen_fork.py:19: in __init__
self._launch(process_obj)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/popen_spawn_posix.py:47: in _launch
reduction.dump(process_obj, fp)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
obj = <MemoryMeasureProcess name='MemoryMeasureProcess-10' parent=10347 initial>
file = <_io.BytesIO object at 0x1c379a180>, protocol = None
def dump(obj, file, protocol=None):
'''Replacement for pickle.dump() using ForkingPickler.'''
> ForkingPickler(file, protocol).dump(obj)
E AttributeError: Can't pickle local object 'measure_peak_memory_cpu.<locals>.MemoryMeasureProcess'
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/reduction.py:60: AttributeError
____________________ BenchmarkTest.test_train_with_configs _____________________
[gw2] darwin -- Python 3.8.2 /Users/sergiov/PycharmProjects/transformers/venv/bin/python
self = <tests.test_benchmark.BenchmarkTest testMethod=test_train_with_configs>
def test_train_with_configs(self):
MODEL_ID = "sshleifer/tiny-gpt2"
config = AutoConfig.from_pretrained(MODEL_ID)
benchmark_args = PyTorchBenchmarkArguments(
models=[MODEL_ID],
training=True,
inference=False,
sequence_lengths=[8],
batch_sizes=[1],
multi_process=False,
)
benchmark = PyTorchBenchmark(benchmark_args, configs=[config])
> results = benchmark.run()
tests/test_benchmark.py:194:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
src/transformers/benchmark/benchmark_utils.py:715: in run
memory, train_summary = self.train_memory(model_name, batch_size, sequence_length)
src/transformers/benchmark/benchmark_utils.py:679: in train_memory
return separate_process_wrapper_fn(self._train_memory, self.args.do_multi_processing)(*args, **kwargs)
src/transformers/benchmark/benchmark.py:79: in _train_memory
return self._measure_memory(_train)
src/transformers/benchmark/benchmark.py:258: in _measure_memory
memory_bytes = measure_peak_memory_cpu(func)
src/transformers/benchmark/benchmark_utils.py:291: in measure_peak_memory_cpu
mem_process.start()
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/process.py:121: in start
self._popen = self._Popen(self)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/context.py:224: in _Popen
return _default_context.get_context().Process._Popen(process_obj)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/context.py:283: in _Popen
return Popen(process_obj)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/popen_spawn_posix.py:32: in __init__
super().__init__(process_obj)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/popen_fork.py:19: in __init__
self._launch(process_obj)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/popen_spawn_posix.py:47: in _launch
reduction.dump(process_obj, fp)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
obj = <MemoryMeasureProcess name='MemoryMeasureProcess-11' parent=10347 initial>
file = <_io.BytesIO object at 0x1cb8ae2c0>, protocol = None
def dump(obj, file, protocol=None):
'''Replacement for pickle.dump() using ForkingPickler.'''
> ForkingPickler(file, protocol).dump(obj)
E AttributeError: Can't pickle local object 'measure_peak_memory_cpu.<locals>.MemoryMeasureProcess'
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/reduction.py:60: AttributeError
=============================== warnings summary ===============================
venv/lib/python3.8/site-packages/tensorflow/python/autograph/impl/api.py:22
venv/lib/python3.8/site-packages/tensorflow/python/autograph/impl/api.py:22
venv/lib/python3.8/site-packages/tensorflow/python/autograph/impl/api.py:22
venv/lib/python3.8/site-packages/tensorflow/python/autograph/impl/api.py:22
venv/lib/python3.8/site-packages/tensorflow/python/autograph/impl/api.py:22
venv/lib/python3.8/site-packages/tensorflow/python/autograph/impl/api.py:22
venv/lib/python3.8/site-packages/tensorflow/python/autograph/impl/api.py:22
venv/lib/python3.8/site-packages/tensorflow/python/autograph/impl/api.py:22
venv/lib/python3.8/site-packages/tensorflow/python/autograph/impl/api.py:22
/Users/sergiov/PycharmProjects/transformers/venv/lib/python3.8/site-packages/tensorflow/python/autograph/impl/api.py:22: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
import imp
tests/test_feature_extraction_wav2vec2.py: 7 warnings
tests/test_feature_extraction_speech_to_text.py: 7 warnings
/Users/sergiov/PycharmProjects/transformers/venv/lib/python3.8/site-packages/numpy/core/_asarray.py:83: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray
return array(a, dtype, copy=False, order=order)
tests/test_generation_beam_search.py::BeamSearchTest::test_beam_scorer_update
tests/test_modeling_bart.py::BartHeadTests::test_generate_beam_search
tests/test_modeling_bert.py::BertModelTest::test_beam_sample_generate
tests/test_modeling_bert_generation.py::BertGenerationEncoderTest::test_beam_sample_generate
tests/test_modeling_big_bird.py::BigBirdModelTest::test_auto_padding
tests/test_modeling_blenderbot.py::BlenderbotModelTest::test_beam_sample_generate
tests/test_modeling_ctrl.py::CTRLModelTest::test_attention_outputs
/Users/sergiov/PycharmProjects/transformers/venv/lib/python3.8/site-packages/torch/_tensor.py:575: UserWarning: floor_divide is deprecated, and will be removed in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values.
To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor'). (Triggered internally at ../aten/src/ATen/native/BinaryOps.cpp:467.)
return torch.floor_divide(self, other)
tests/test_generation_flax_logits_process.py: 36 warnings
tests/test_modeling_flax_bart.py: 46 warnings
tests/test_modeling_flax_big_bird.py: 5 warnings
/Users/sergiov/PycharmProjects/transformers/venv/lib/python3.8/site-packages/jax/_src/ops/scatter.py:382: DeprecationWarning: index_update is deprecated. Use x.at[idx].set(y) instead.
warnings.warn("index_update is deprecated. Use x.at[idx].set(y) instead.",
tests/test_generation_stopping_criteria.py::StoppingCriteriaTestCase::test_max_new_tokens_criteria
/Users/sergiov/PycharmProjects/transformers/src/transformers/generation_stopping_criteria.py:74: FutureWarning: The class `MaxNewTokensCriteria` is deprecated. Please use `MaxLengthCriteria(max_length=10)` with `max_length = start_length + max_new_tokens` instead.
warnings.warn(
tests/test_model_card.py::ModelCardTester::test_model_card_common_properties
tests/test_model_card.py::ModelCardTester::test_model_card_from_and_save_pretrained
tests/test_model_card.py::ModelCardTester::test_model_card_to_json_file
tests/test_model_card.py::ModelCardTester::test_model_card_to_json_string
/Users/sergiov/PycharmProjects/transformers/src/transformers/modelcard.py:92: FutureWarning: The class `ModelCard` is deprecated and will be removed in version 5 of Transformers
warnings.warn(
tests/test_model_card.py::ModelCardTester::test_model_card_from_and_save_pretrained
/Users/sergiov/PycharmProjects/transformers/src/transformers/models/auto/configuration_auto.py:343: FutureWarning: ALL_PRETRAINED_CONFIG_ARCHIVE_MAP is deprecated and will be removed in v5 of Transformers. It does not contain all available model checkpoints, far from it. Checkout hf.co/models for that.
warnings.warn(
tests/test_modeling_auto.py::AutoModelTest::test_from_identifier_from_model_type
tests/test_modeling_auto.py::AutoModelTest::test_from_pretrained_identifier
/Users/sergiov/PycharmProjects/transformers/src/transformers/models/auto/modeling_auto.py:684: FutureWarning: The class `AutoModelWithLMHead` is deprecated and will be removed in a future version. Please use `AutoModelForCausalLM` for causal language models, `AutoModelForMaskedLM` for masked language models and `AutoModelForSeq2SeqLM` for encoder-decoder models.
warnings.warn(
tests/test_modeling_auto.py::AutoModelTest::test_from_pretrained_dynamic_model
/Users/sergiov/PycharmProjects/transformers/src/transformers/models/auto/auto_factory.py:407: DeprecationWarning: The 'warn' method is deprecated, use 'warning' instead
logger.warn(
tests/test_modeling_bart.py: 4 warnings
tests/test_modeling_bert.py: 2 warnings
tests/test_modeling_bert_generation.py: 2 warnings
tests/test_modeling_bigbird_pegasus.py: 4 warnings
tests/test_modeling_blenderbot_small.py: 4 warnings
tests/test_modeling_blenderbot.py: 4 warnings
tests/test_modeling_ctrl.py: 2 warnings
/Users/sergiov/PycharmProjects/transformers/src/transformers/generation_utils.py:2036: UserWarning: `max_length` is deprecated in this function, use `stopping_criteria=StoppingCriteriaList(MaxLengthCriteria(max_length=max_length))` instead.
warnings.warn(
tests/test_modeling_bart.py: 6 warnings
tests/test_modeling_bert.py: 3 warnings
tests/test_modeling_bert_generation.py: 3 warnings
tests/test_modeling_bigbird_pegasus.py: 6 warnings
tests/test_modeling_blenderbot_small.py: 6 warnings
tests/test_modeling_blenderbot.py: 6 warnings
tests/test_modeling_ctrl.py: 3 warnings
/Users/sergiov/PycharmProjects/transformers/src/transformers/generation_utils.py:1731: UserWarning: `max_length` is deprecated in this function, use `stopping_criteria=StoppingCriteriaList(MaxLengthCriteria(max_length=max_length))` instead.
warnings.warn(
tests/test_benchmark.py::BenchmarkTest::test_inference_torchscript
/Users/sergiov/PycharmProjects/transformers/src/transformers/models/gpt2/modeling_gpt2.py:196: TracerWarning: Converting a tensor to a Python float might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
attn_weights = attn_weights / (float(value.size(-1)) ** 0.5)
tests/test_modeling_bart.py: 6 warnings
tests/test_modeling_bert_generation.py: 3 warnings
tests/test_modeling_bert.py: 3 warnings
tests/test_modeling_bigbird_pegasus.py: 6 warnings
tests/test_modeling_blenderbot_small.py: 6 warnings
tests/test_modeling_blenderbot.py: 6 warnings
tests/test_modeling_ctrl.py: 3 warnings
/Users/sergiov/PycharmProjects/transformers/src/transformers/generation_utils.py:1242: UserWarning: `max_length` is deprecated in this function, use `stopping_criteria=StoppingCriteriaList(MaxLengthCriteria(max_length=max_length))` instead.
warnings.warn(
tests/test_modeling_bart.py: 4 warnings
tests/test_modeling_bert_generation.py: 2 warnings
tests/test_modeling_bert.py: 2 warnings
tests/test_modeling_bigbird_pegasus.py: 4 warnings
tests/test_modeling_blenderbot_small.py: 4 warnings
tests/test_modeling_blenderbot.py: 4 warnings
tests/test_modeling_ctrl.py: 2 warnings
/Users/sergiov/PycharmProjects/transformers/src/transformers/generation_beam_search.py:196: UserWarning: Passing `max_length` to BeamSearchScorer is deprecated and has no effect. `max_length` should be passed directly to `beam_search(...)`, `beam_sample(...)`, or `group_beam_search(...)`.
warnings.warn(
tests/test_modeling_bart.py: 4 warnings
tests/test_modeling_bert_generation.py: 2 warnings
tests/test_modeling_bert.py: 2 warnings
tests/test_modeling_bigbird_pegasus.py: 4 warnings
tests/test_modeling_blenderbot_small.py: 4 warnings
tests/test_modeling_blenderbot.py: 4 warnings
tests/test_modeling_ctrl.py: 2 warnings
/Users/sergiov/PycharmProjects/transformers/src/transformers/generation_utils.py:2331: UserWarning: `max_length` is deprecated in this function, use `stopping_criteria=StoppingCriteriaList(MaxLengthCriteria(max_length=max_length))` instead.
warnings.warn(
tests/test_modeling_bert_generation.py: 2 warnings
tests/test_modeling_bart.py: 4 warnings
tests/test_modeling_bert.py: 2 warnings
tests/test_modeling_blenderbot_small.py: 4 warnings
tests/test_modeling_bigbird_pegasus.py: 4 warnings
tests/test_modeling_ctrl.py: 2 warnings
tests/test_modeling_blenderbot.py: 4 warnings
/Users/sergiov/PycharmProjects/transformers/src/transformers/generation_utils.py:1479: UserWarning: `max_length` is deprecated in this function, use `stopping_criteria=StoppingCriteriaList(MaxLengthCriteria(max_length=max_length))` instead.
warnings.warn(
tests/test_modeling_albert.py: 3 warnings
tests/test_modeling_bert.py: 3 warnings
tests/test_modeling_distilbert.py: 3 warnings
tests/test_modeling_electra.py: 3 warnings
/Users/sergiov/PycharmProjects/transformers/venv/lib/python3.8/site-packages/torch/fx/symbolic_trace.py:434: UserWarning: Was not able to add assertion to guarantee correct inputs to specialized function. It is up to the user to make sure that your inputs match the inputs you specialized the function with.
torch.warnings.warn(
tests/test_modeling_common.py::ModelUtilsTest::test_model_from_pretrained_with_different_pretrained_model_name
/Users/sergiov/PycharmProjects/transformers/src/transformers/configuration_utils.py:495: DeprecationWarning: The 'warn' method is deprecated, use 'warning' instead
logger.warn(
tests/test_modeling_flaubert.py::FlaubertModelTest::test_flaubert_lm_head
tests/test_modeling_flaubert.py::FlaubertModelTest::test_model_outputs_equivalence
tests/test_modeling_flaubert.py::FlaubertModelTest::test_training
/Users/sergiov/PycharmProjects/transformers/venv/lib/python3.8/site-packages/torch/nn/_reduction.py:13: UserWarning: reduction='elementwise_mean' is deprecated, please use reduction='mean' instead.
warnings.warn("reduction='elementwise_mean' is deprecated, please use reduction='mean' instead.")
-- Docs: https://docs.pytest.org/en/stable/warnings.html
=========================== short test summary info ============================
FAILED tests/test_benchmark_tf.py::TFBenchmarkTest::test_inference_encoder_decoder_with_configs
FAILED tests/test_benchmark.py::BenchmarkTest::test_inference_encoder_decoder_with_configs
FAILED tests/test_benchmark.py::BenchmarkTest::test_inference_no_configs - At...
FAILED tests/test_benchmark_tf.py::TFBenchmarkTest::test_inference_no_configs_eager
FAILED tests/test_benchmark.py::BenchmarkTest::test_inference_no_configs_only_pretrain
FAILED tests/test_benchmark_tf.py::TFBenchmarkTest::test_inference_no_configs_graph
FAILED tests/test_benchmark.py::BenchmarkTest::test_inference_no_model_no_architectures
FAILED tests/test_benchmark_tf.py::TFBenchmarkTest::test_inference_no_configs_only_pretrain
FAILED tests/test_benchmark_tf.py::TFBenchmarkTest::test_inference_with_configs_eager
FAILED tests/test_benchmark.py::BenchmarkTest::test_inference_torchscript - A...
FAILED tests/test_benchmark_tf.py::TFBenchmarkTest::test_inference_with_configs_graph
FAILED tests/test_benchmark.py::BenchmarkTest::test_inference_with_configs - ...
FAILED tests/test_benchmark_tf.py::TFBenchmarkTest::test_save_csv_files - Att...
FAILED tests/test_benchmark.py::BenchmarkTest::test_save_csv_files - Attribut...
FAILED tests/test_benchmark.py::BenchmarkTest::test_trace_memory - AttributeE...
FAILED tests/test_benchmark_tf.py::TFBenchmarkTest::test_train_no_configs - A...
FAILED tests/test_benchmark.py::BenchmarkTest::test_train_encoder_decoder_with_configs
FAILED tests/test_benchmark_tf.py::TFBenchmarkTest::test_train_with_configs
FAILED tests/test_benchmark.py::BenchmarkTest::test_train_no_configs - Attrib...
FAILED tests/test_benchmark.py::BenchmarkTest::test_train_with_configs - Attr...
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! KeyboardInterrupt !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/threading.py:306: KeyboardInterrupt
(to show a full traceback on KeyboardInterrupt use --full-trace)
==== 20 failed, 1520 passed, 335 skipped, 297 warnings in 214.63s (0:03:34) ====
```
## Expected behavior
I expect all tests to pass.
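
For what it's worth, my guess (based only on the traceback, not on a deeper read of the benchmark code) is that this is the usual macOS/Python 3.8 `spawn` start-method issue: `MemoryMeasureProcess` is defined inside `measure_peak_memory_cpu`, and the traceback goes through `popen_spawn_posix.py`, which has to pickle the `Process` object — something that fails for locally-defined classes. Below is a minimal sketch of the same failure using made-up names (`measure`, `LocalProcess`) rather than the actual transformers code:

```python
import multiprocessing


def measure():
    # A class defined inside a function, analogous to MemoryMeasureProcess
    # being defined inside measure_peak_memory_cpu in benchmark_utils.py.
    class LocalProcess(multiprocessing.Process):
        def run(self):
            pass

    p = LocalProcess()
    p.start()  # under "spawn" this pickles `p` and raises the same AttributeError
    p.join()


if __name__ == "__main__":
    # macOS defaults to "spawn" since Python 3.8; "fork" (the Linux default)
    # does not pickle the Process object and therefore does not hit this error.
    multiprocessing.set_start_method("spawn", force=True)
    measure()
```

Under `fork` the same snippet runs fine, which is presumably why this doesn't show up on Linux CI.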
| 10-22-2021 16:50:31 | 10-22-2021 16:50:31 | This seems similar to the second part of this other [issue 13227](https://github.com/huggingface/transformers/issues/13227).<|||||>Hi, I'm testing this locally. I also get some errors in `make test` - I'll ask the team about that. However, you can also check individual modules e.g. with `pytest test_benchmark_tf.py`. When I do this, the file passes. Can you confirm that the same thing happens for you?<|||||>Hi, thanks for looking into this!
No, I still get errors when running locally:
```
% pytest tests/test_benchmark.py
==================================================================================================================== test session starts =====================================================================================================================
platform darwin -- Python 3.8.2, pytest-6.2.5, py-1.10.0, pluggy-1.0.0
rootdir: /Users/sergiov/PycharmProjects/transformers, configfile: setup.cfg
plugins: xdist-2.4.0, timeout-2.0.1, dash-2.0.0, forked-1.3.0
collected 13 items
tests/test_benchmark.py FsFFFFFFFFFsF [100%]
========================================================================================================================== FAILURES ==========================================================================================================================
_________________________________________________________________________________________________ BenchmarkTest.test_inference_encoder_decoder_with_configs __________________________________________________________________________________________________
self = <tests.test_benchmark.BenchmarkTest testMethod=test_inference_encoder_decoder_with_configs>
def test_inference_encoder_decoder_with_configs(self):
MODEL_ID = "sshleifer/tinier_bart"
config = AutoConfig.from_pretrained(MODEL_ID)
benchmark_args = PyTorchBenchmarkArguments(
models=[MODEL_ID],
training=False,
inference=True,
sequence_lengths=[8],
batch_sizes=[1],
multi_process=False,
)
benchmark = PyTorchBenchmark(benchmark_args, configs=[config])
> results = benchmark.run()
tests/test_benchmark.py:178:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
src/transformers/benchmark/benchmark_utils.py:707: in run
memory, inference_summary = self.inference_memory(model_name, batch_size, sequence_length)
src/transformers/benchmark/benchmark_utils.py:676: in inference_memory
return separate_process_wrapper_fn(self._inference_memory, self.args.do_multi_processing)(*args, **kwargs)
src/transformers/benchmark/benchmark.py:69: in _inference_memory
return self._measure_memory(_inference)
src/transformers/benchmark/benchmark.py:258: in _measure_memory
memory_bytes = measure_peak_memory_cpu(func)
src/transformers/benchmark/benchmark_utils.py:291: in measure_peak_memory_cpu
mem_process.start()
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/process.py:121: in start
self._popen = self._Popen(self)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/context.py:224: in _Popen
return _default_context.get_context().Process._Popen(process_obj)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/context.py:283: in _Popen
return Popen(process_obj)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/popen_spawn_posix.py:32: in __init__
super().__init__(process_obj)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/popen_fork.py:19: in __init__
self._launch(process_obj)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/popen_spawn_posix.py:47: in _launch
reduction.dump(process_obj, fp)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
obj = <MemoryMeasureProcess name='MemoryMeasureProcess-1' parent=1591 initial>, file = <_io.BytesIO object at 0x19b6ef680>, protocol = None
def dump(obj, file, protocol=None):
'''Replacement for pickle.dump() using ForkingPickler.'''
> ForkingPickler(file, protocol).dump(obj)
E AttributeError: Can't pickle local object 'measure_peak_memory_cpu.<locals>.MemoryMeasureProcess'
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/reduction.py:60: AttributeError
-------------------------------------------------------------------------------------------------------------------- Captured stdout call --------------------------------------------------------------------------------------------------------------------
1 / 1
__________________________________________________________________________________________________________ BenchmarkTest.test_inference_no_configs ___________________________________________________________________________________________________________
self = <tests.test_benchmark.BenchmarkTest testMethod=test_inference_no_configs>
def test_inference_no_configs(self):
MODEL_ID = "sshleifer/tiny-gpt2"
benchmark_args = PyTorchBenchmarkArguments(
models=[MODEL_ID],
training=False,
inference=True,
sequence_lengths=[8],
batch_sizes=[1],
multi_process=False,
)
benchmark = PyTorchBenchmark(benchmark_args)
> results = benchmark.run()
tests/test_benchmark.py:47:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
src/transformers/benchmark/benchmark_utils.py:707: in run
memory, inference_summary = self.inference_memory(model_name, batch_size, sequence_length)
src/transformers/benchmark/benchmark_utils.py:676: in inference_memory
return separate_process_wrapper_fn(self._inference_memory, self.args.do_multi_processing)(*args, **kwargs)
src/transformers/benchmark/benchmark.py:69: in _inference_memory
return self._measure_memory(_inference)
src/transformers/benchmark/benchmark.py:258: in _measure_memory
memory_bytes = measure_peak_memory_cpu(func)
src/transformers/benchmark/benchmark_utils.py:291: in measure_peak_memory_cpu
mem_process.start()
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/process.py:121: in start
self._popen = self._Popen(self)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/context.py:224: in _Popen
return _default_context.get_context().Process._Popen(process_obj)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/context.py:283: in _Popen
return Popen(process_obj)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/popen_spawn_posix.py:32: in __init__
super().__init__(process_obj)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/popen_fork.py:19: in __init__
self._launch(process_obj)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/popen_spawn_posix.py:47: in _launch
reduction.dump(process_obj, fp)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
obj = <MemoryMeasureProcess name='MemoryMeasureProcess-2' parent=1591 initial>, file = <_io.BytesIO object at 0x1a3957310>, protocol = None
def dump(obj, file, protocol=None):
'''Replacement for pickle.dump() using ForkingPickler.'''
> ForkingPickler(file, protocol).dump(obj)
E AttributeError: Can't pickle local object 'measure_peak_memory_cpu.<locals>.MemoryMeasureProcess'
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/reduction.py:60: AttributeError
-------------------------------------------------------------------------------------------------------------------- Captured stdout call --------------------------------------------------------------------------------------------------------------------
1 / 1
___________________________________________________________________________________________________ BenchmarkTest.test_inference_no_configs_only_pretrain ____________________________________________________________________________________________________
self = <tests.test_benchmark.BenchmarkTest testMethod=test_inference_no_configs_only_pretrain>
def test_inference_no_configs_only_pretrain(self):
MODEL_ID = "sgugger/tiny-distilbert-classification"
benchmark_args = PyTorchBenchmarkArguments(
models=[MODEL_ID],
training=False,
inference=True,
sequence_lengths=[8],
batch_sizes=[1],
multi_process=False,
only_pretrain_model=True,
)
benchmark = PyTorchBenchmark(benchmark_args)
> results = benchmark.run()
tests/test_benchmark.py:63:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
src/transformers/benchmark/benchmark_utils.py:707: in run
memory, inference_summary = self.inference_memory(model_name, batch_size, sequence_length)
src/transformers/benchmark/benchmark_utils.py:676: in inference_memory
return separate_process_wrapper_fn(self._inference_memory, self.args.do_multi_processing)(*args, **kwargs)
src/transformers/benchmark/benchmark.py:69: in _inference_memory
return self._measure_memory(_inference)
src/transformers/benchmark/benchmark.py:258: in _measure_memory
memory_bytes = measure_peak_memory_cpu(func)
src/transformers/benchmark/benchmark_utils.py:291: in measure_peak_memory_cpu
mem_process.start()
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/process.py:121: in start
self._popen = self._Popen(self)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/context.py:224: in _Popen
return _default_context.get_context().Process._Popen(process_obj)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/context.py:283: in _Popen
return Popen(process_obj)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/popen_spawn_posix.py:32: in __init__
super().__init__(process_obj)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/popen_fork.py:19: in __init__
self._launch(process_obj)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/popen_spawn_posix.py:47: in _launch
reduction.dump(process_obj, fp)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
obj = <MemoryMeasureProcess name='MemoryMeasureProcess-3' parent=1591 initial>, file = <_io.BytesIO object at 0x1a38e63b0>, protocol = None
def dump(obj, file, protocol=None):
'''Replacement for pickle.dump() using ForkingPickler.'''
> ForkingPickler(file, protocol).dump(obj)
E AttributeError: Can't pickle local object 'measure_peak_memory_cpu.<locals>.MemoryMeasureProcess'
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/reduction.py:60: AttributeError
-------------------------------------------------------------------------------------------------------------------- Captured stdout call --------------------------------------------------------------------------------------------------------------------
1 / 1
___________________________________________________________________________________________________ BenchmarkTest.test_inference_no_model_no_architectures ___________________________________________________________________________________________________
self = <tests.test_benchmark.BenchmarkTest testMethod=test_inference_no_model_no_architectures>
def test_inference_no_model_no_architectures(self):
MODEL_ID = "sshleifer/tiny-gpt2"
config = AutoConfig.from_pretrained(MODEL_ID)
# set architectures equal to `None`
config.architectures = None
benchmark_args = PyTorchBenchmarkArguments(
models=[MODEL_ID],
training=True,
inference=True,
sequence_lengths=[8],
batch_sizes=[1],
multi_process=False,
)
benchmark = PyTorchBenchmark(benchmark_args, configs=[config])
> results = benchmark.run()
tests/test_benchmark.py:114:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
src/transformers/benchmark/benchmark_utils.py:707: in run
memory, inference_summary = self.inference_memory(model_name, batch_size, sequence_length)
src/transformers/benchmark/benchmark_utils.py:676: in inference_memory
return separate_process_wrapper_fn(self._inference_memory, self.args.do_multi_processing)(*args, **kwargs)
src/transformers/benchmark/benchmark.py:69: in _inference_memory
return self._measure_memory(_inference)
src/transformers/benchmark/benchmark.py:258: in _measure_memory
memory_bytes = measure_peak_memory_cpu(func)
src/transformers/benchmark/benchmark_utils.py:291: in measure_peak_memory_cpu
mem_process.start()
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/process.py:121: in start
self._popen = self._Popen(self)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/context.py:224: in _Popen
return _default_context.get_context().Process._Popen(process_obj)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/context.py:283: in _Popen
return Popen(process_obj)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/popen_spawn_posix.py:32: in __init__
super().__init__(process_obj)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/popen_fork.py:19: in __init__
self._launch(process_obj)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/popen_spawn_posix.py:47: in _launch
reduction.dump(process_obj, fp)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
obj = <MemoryMeasureProcess name='MemoryMeasureProcess-4' parent=1591 initial>, file = <_io.BytesIO object at 0x19b673a40>, protocol = None
def dump(obj, file, protocol=None):
'''Replacement for pickle.dump() using ForkingPickler.'''
> ForkingPickler(file, protocol).dump(obj)
E AttributeError: Can't pickle local object 'measure_peak_memory_cpu.<locals>.MemoryMeasureProcess'
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/reduction.py:60: AttributeError
-------------------------------------------------------------------------------------------------------------------- Captured stdout call --------------------------------------------------------------------------------------------------------------------
1 / 1
__________________________________________________________________________________________________________ BenchmarkTest.test_inference_torchscript __________________________________________________________________________________________________________
self = <tests.test_benchmark.BenchmarkTest testMethod=test_inference_torchscript>
def test_inference_torchscript(self):
MODEL_ID = "sshleifer/tiny-gpt2"
benchmark_args = PyTorchBenchmarkArguments(
models=[MODEL_ID],
training=False,
inference=True,
torchscript=True,
sequence_lengths=[8],
batch_sizes=[1],
multi_process=False,
)
benchmark = PyTorchBenchmark(benchmark_args)
> results = benchmark.run()
tests/test_benchmark.py:79:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
src/transformers/benchmark/benchmark_utils.py:707: in run
memory, inference_summary = self.inference_memory(model_name, batch_size, sequence_length)
src/transformers/benchmark/benchmark_utils.py:676: in inference_memory
return separate_process_wrapper_fn(self._inference_memory, self.args.do_multi_processing)(*args, **kwargs)
src/transformers/benchmark/benchmark.py:69: in _inference_memory
return self._measure_memory(_inference)
src/transformers/benchmark/benchmark.py:258: in _measure_memory
memory_bytes = measure_peak_memory_cpu(func)
src/transformers/benchmark/benchmark_utils.py:291: in measure_peak_memory_cpu
mem_process.start()
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/process.py:121: in start
self._popen = self._Popen(self)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/context.py:224: in _Popen
return _default_context.get_context().Process._Popen(process_obj)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/context.py:283: in _Popen
return Popen(process_obj)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/popen_spawn_posix.py:32: in __init__
super().__init__(process_obj)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/popen_fork.py:19: in __init__
self._launch(process_obj)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/popen_spawn_posix.py:47: in _launch
reduction.dump(process_obj, fp)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
obj = <MemoryMeasureProcess name='MemoryMeasureProcess-5' parent=1591 initial>, file = <_io.BytesIO object at 0x1a390e9f0>, protocol = None
def dump(obj, file, protocol=None):
'''Replacement for pickle.dump() using ForkingPickler.'''
> ForkingPickler(file, protocol).dump(obj)
E AttributeError: Can't pickle local object 'measure_peak_memory_cpu.<locals>.MemoryMeasureProcess'
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/reduction.py:60: AttributeError
-------------------------------------------------------------------------------------------------------------------- Captured stdout call --------------------------------------------------------------------------------------------------------------------
1 / 1
_________________________________________________________________________________________________________ BenchmarkTest.test_inference_with_configs __________________________________________________________________________________________________________
self = <tests.test_benchmark.BenchmarkTest testMethod=test_inference_with_configs>
def test_inference_with_configs(self):
MODEL_ID = "sshleifer/tiny-gpt2"
config = AutoConfig.from_pretrained(MODEL_ID)
benchmark_args = PyTorchBenchmarkArguments(
models=[MODEL_ID],
training=False,
inference=True,
sequence_lengths=[8],
batch_sizes=[1],
multi_process=False,
)
benchmark = PyTorchBenchmark(benchmark_args, configs=[config])
> results = benchmark.run()
tests/test_benchmark.py:162:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
src/transformers/benchmark/benchmark_utils.py:707: in run
memory, inference_summary = self.inference_memory(model_name, batch_size, sequence_length)
src/transformers/benchmark/benchmark_utils.py:676: in inference_memory
return separate_process_wrapper_fn(self._inference_memory, self.args.do_multi_processing)(*args, **kwargs)
src/transformers/benchmark/benchmark.py:69: in _inference_memory
return self._measure_memory(_inference)
src/transformers/benchmark/benchmark.py:258: in _measure_memory
memory_bytes = measure_peak_memory_cpu(func)
src/transformers/benchmark/benchmark_utils.py:291: in measure_peak_memory_cpu
mem_process.start()
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/process.py:121: in start
self._popen = self._Popen(self)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/context.py:224: in _Popen
return _default_context.get_context().Process._Popen(process_obj)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/context.py:283: in _Popen
return Popen(process_obj)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/popen_spawn_posix.py:32: in __init__
super().__init__(process_obj)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/popen_fork.py:19: in __init__
self._launch(process_obj)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/popen_spawn_posix.py:47: in _launch
reduction.dump(process_obj, fp)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
obj = <MemoryMeasureProcess name='MemoryMeasureProcess-6' parent=1591 initial>, file = <_io.BytesIO object at 0x1a391bd60>, protocol = None
def dump(obj, file, protocol=None):
'''Replacement for pickle.dump() using ForkingPickler.'''
> ForkingPickler(file, protocol).dump(obj)
E AttributeError: Can't pickle local object 'measure_peak_memory_cpu.<locals>.MemoryMeasureProcess'
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/reduction.py:60: AttributeError
-------------------------------------------------------------------------------------------------------------------- Captured stdout call --------------------------------------------------------------------------------------------------------------------
1 / 1
_____________________________________________________________________________________________________________ BenchmarkTest.test_save_csv_files ______________________________________________________________________________________________________________
self = <tests.test_benchmark.BenchmarkTest testMethod=test_save_csv_files>
def test_save_csv_files(self):
MODEL_ID = "sshleifer/tiny-gpt2"
with tempfile.TemporaryDirectory() as tmp_dir:
benchmark_args = PyTorchBenchmarkArguments(
models=[MODEL_ID],
training=True,
inference=True,
save_to_csv=True,
sequence_lengths=[8],
batch_sizes=[1],
inference_time_csv_file=os.path.join(tmp_dir, "inf_time.csv"),
train_memory_csv_file=os.path.join(tmp_dir, "train_mem.csv"),
inference_memory_csv_file=os.path.join(tmp_dir, "inf_mem.csv"),
train_time_csv_file=os.path.join(tmp_dir, "train_time.csv"),
env_info_csv_file=os.path.join(tmp_dir, "env.csv"),
multi_process=False,
)
benchmark = PyTorchBenchmark(benchmark_args)
> benchmark.run()
tests/test_benchmark.py:232:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
src/transformers/benchmark/benchmark_utils.py:707: in run
memory, inference_summary = self.inference_memory(model_name, batch_size, sequence_length)
src/transformers/benchmark/benchmark_utils.py:676: in inference_memory
return separate_process_wrapper_fn(self._inference_memory, self.args.do_multi_processing)(*args, **kwargs)
src/transformers/benchmark/benchmark.py:69: in _inference_memory
return self._measure_memory(_inference)
src/transformers/benchmark/benchmark.py:258: in _measure_memory
memory_bytes = measure_peak_memory_cpu(func)
src/transformers/benchmark/benchmark_utils.py:291: in measure_peak_memory_cpu
mem_process.start()
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/process.py:121: in start
self._popen = self._Popen(self)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/context.py:224: in _Popen
return _default_context.get_context().Process._Popen(process_obj)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/context.py:283: in _Popen
return Popen(process_obj)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/popen_spawn_posix.py:32: in __init__
super().__init__(process_obj)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/popen_fork.py:19: in __init__
self._launch(process_obj)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/popen_spawn_posix.py:47: in _launch
reduction.dump(process_obj, fp)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
obj = <MemoryMeasureProcess name='MemoryMeasureProcess-7' parent=1591 initial>, file = <_io.BytesIO object at 0x1a39b9180>, protocol = None
def dump(obj, file, protocol=None):
'''Replacement for pickle.dump() using ForkingPickler.'''
> ForkingPickler(file, protocol).dump(obj)
E AttributeError: Can't pickle local object 'measure_peak_memory_cpu.<locals>.MemoryMeasureProcess'
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/reduction.py:60: AttributeError
-------------------------------------------------------------------------------------------------------------------- Captured stdout call --------------------------------------------------------------------------------------------------------------------
1 / 1
______________________________________________________________________________________________________________ BenchmarkTest.test_trace_memory _______________________________________________________________________________________________________________
self = <tests.test_benchmark.BenchmarkTest testMethod=test_trace_memory>
def test_trace_memory(self):
MODEL_ID = "sshleifer/tiny-gpt2"
def _check_summary_is_not_empty(summary):
self.assertTrue(hasattr(summary, "sequential"))
self.assertTrue(hasattr(summary, "cumulative"))
self.assertTrue(hasattr(summary, "current"))
self.assertTrue(hasattr(summary, "total"))
with tempfile.TemporaryDirectory() as tmp_dir:
benchmark_args = PyTorchBenchmarkArguments(
models=[MODEL_ID],
training=True,
inference=True,
sequence_lengths=[8],
batch_sizes=[1],
log_filename=os.path.join(tmp_dir, "log.txt"),
log_print=True,
trace_memory_line_by_line=True,
multi_process=False,
)
benchmark = PyTorchBenchmark(benchmark_args)
> result = benchmark.run()
tests/test_benchmark.py:261:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
src/transformers/benchmark/benchmark_utils.py:707: in run
memory, inference_summary = self.inference_memory(model_name, batch_size, sequence_length)
src/transformers/benchmark/benchmark_utils.py:676: in inference_memory
return separate_process_wrapper_fn(self._inference_memory, self.args.do_multi_processing)(*args, **kwargs)
src/transformers/benchmark/benchmark.py:69: in _inference_memory
return self._measure_memory(_inference)
src/transformers/benchmark/benchmark.py:258: in _measure_memory
memory_bytes = measure_peak_memory_cpu(func)
src/transformers/benchmark/benchmark_utils.py:291: in measure_peak_memory_cpu
mem_process.start()
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/process.py:121: in start
self._popen = self._Popen(self)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/context.py:224: in _Popen
return _default_context.get_context().Process._Popen(process_obj)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/context.py:283: in _Popen
return Popen(process_obj)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/popen_spawn_posix.py:32: in __init__
super().__init__(process_obj)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/popen_fork.py:19: in __init__
self._launch(process_obj)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/popen_spawn_posix.py:47: in _launch
reduction.dump(process_obj, fp)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
obj = <MemoryMeasureProcess name='MemoryMeasureProcess-8' parent=1591 initial>, file = <_io.BytesIO object at 0x1a3a338b0>, protocol = None
def dump(obj, file, protocol=None):
'''Replacement for pickle.dump() using ForkingPickler.'''
> ForkingPickler(file, protocol).dump(obj)
E AttributeError: Can't pickle local object 'measure_peak_memory_cpu.<locals>.MemoryMeasureProcess'
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/reduction.py:60: AttributeError
-------------------------------------------------------------------------------------------------------------------- Captured stdout call --------------------------------------------------------------------------------------------------------------------
1 / 1
-------------------------------------------------------------------------------------------------------------------- Captured stderr call --------------------------------------------------------------------------------------------------------------------
py3nvml not installed, we won't log GPU memory usage. Install py3nvml (pip install py3nvml) to use GPU memory tracing.
___________________________________________________________________________________________________ BenchmarkTest.test_train_encoder_decoder_with_configs ____________________________________________________________________________________________________
self = <tests.test_benchmark.BenchmarkTest testMethod=test_train_encoder_decoder_with_configs>
def test_train_encoder_decoder_with_configs(self):
MODEL_ID = "sshleifer/tinier_bart"
config = AutoConfig.from_pretrained(MODEL_ID)
benchmark_args = PyTorchBenchmarkArguments(
models=[MODEL_ID],
training=True,
inference=True,
sequence_lengths=[8],
batch_sizes=[1],
multi_process=False,
)
benchmark = PyTorchBenchmark(benchmark_args, configs=[config])
> results = benchmark.run()
tests/test_benchmark.py:210:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
src/transformers/benchmark/benchmark_utils.py:707: in run
memory, inference_summary = self.inference_memory(model_name, batch_size, sequence_length)
src/transformers/benchmark/benchmark_utils.py:676: in inference_memory
return separate_process_wrapper_fn(self._inference_memory, self.args.do_multi_processing)(*args, **kwargs)
src/transformers/benchmark/benchmark.py:69: in _inference_memory
return self._measure_memory(_inference)
src/transformers/benchmark/benchmark.py:258: in _measure_memory
memory_bytes = measure_peak_memory_cpu(func)
src/transformers/benchmark/benchmark_utils.py:291: in measure_peak_memory_cpu
mem_process.start()
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/process.py:121: in start
self._popen = self._Popen(self)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/context.py:224: in _Popen
return _default_context.get_context().Process._Popen(process_obj)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/context.py:283: in _Popen
return Popen(process_obj)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/popen_spawn_posix.py:32: in __init__
super().__init__(process_obj)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/popen_fork.py:19: in __init__
self._launch(process_obj)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/popen_spawn_posix.py:47: in _launch
reduction.dump(process_obj, fp)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
obj = <MemoryMeasureProcess name='MemoryMeasureProcess-9' parent=1591 initial>, file = <_io.BytesIO object at 0x1a3dce630>, protocol = None
def dump(obj, file, protocol=None):
'''Replacement for pickle.dump() using ForkingPickler.'''
> ForkingPickler(file, protocol).dump(obj)
E AttributeError: Can't pickle local object 'measure_peak_memory_cpu.<locals>.MemoryMeasureProcess'
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/reduction.py:60: AttributeError
-------------------------------------------------------------------------------------------------------------------- Captured stdout call --------------------------------------------------------------------------------------------------------------------
1 / 1
____________________________________________________________________________________________________________ BenchmarkTest.test_train_no_configs _____________________________________________________________________________________________________________
self = <tests.test_benchmark.BenchmarkTest testMethod=test_train_no_configs>
def test_train_no_configs(self):
MODEL_ID = "sshleifer/tiny-gpt2"
benchmark_args = PyTorchBenchmarkArguments(
models=[MODEL_ID],
training=True,
inference=False,
sequence_lengths=[8],
batch_sizes=[1],
multi_process=False,
)
benchmark = PyTorchBenchmark(benchmark_args)
> results = benchmark.run()
tests/test_benchmark.py:129:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
src/transformers/benchmark/benchmark_utils.py:715: in run
memory, train_summary = self.train_memory(model_name, batch_size, sequence_length)
src/transformers/benchmark/benchmark_utils.py:679: in train_memory
return separate_process_wrapper_fn(self._train_memory, self.args.do_multi_processing)(*args, **kwargs)
src/transformers/benchmark/benchmark.py:79: in _train_memory
return self._measure_memory(_train)
src/transformers/benchmark/benchmark.py:258: in _measure_memory
memory_bytes = measure_peak_memory_cpu(func)
src/transformers/benchmark/benchmark_utils.py:291: in measure_peak_memory_cpu
mem_process.start()
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/process.py:121: in start
self._popen = self._Popen(self)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/context.py:224: in _Popen
return _default_context.get_context().Process._Popen(process_obj)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/context.py:283: in _Popen
return Popen(process_obj)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/popen_spawn_posix.py:32: in __init__
super().__init__(process_obj)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/popen_fork.py:19: in __init__
self._launch(process_obj)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/popen_spawn_posix.py:47: in _launch
reduction.dump(process_obj, fp)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
obj = <MemoryMeasureProcess name='MemoryMeasureProcess-10' parent=1591 initial>, file = <_io.BytesIO object at 0x1a3e422c0>, protocol = None
def dump(obj, file, protocol=None):
'''Replacement for pickle.dump() using ForkingPickler.'''
> ForkingPickler(file, protocol).dump(obj)
E AttributeError: Can't pickle local object 'measure_peak_memory_cpu.<locals>.MemoryMeasureProcess'
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/reduction.py:60: AttributeError
-------------------------------------------------------------------------------------------------------------------- Captured stdout call --------------------------------------------------------------------------------------------------------------------
1 / 1
___________________________________________________________________________________________________________ BenchmarkTest.test_train_with_configs ____________________________________________________________________________________________________________
self = <tests.test_benchmark.BenchmarkTest testMethod=test_train_with_configs>
def test_train_with_configs(self):
MODEL_ID = "sshleifer/tiny-gpt2"
config = AutoConfig.from_pretrained(MODEL_ID)
benchmark_args = PyTorchBenchmarkArguments(
models=[MODEL_ID],
training=True,
inference=False,
sequence_lengths=[8],
batch_sizes=[1],
multi_process=False,
)
benchmark = PyTorchBenchmark(benchmark_args, configs=[config])
> results = benchmark.run()
tests/test_benchmark.py:194:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
src/transformers/benchmark/benchmark_utils.py:715: in run
memory, train_summary = self.train_memory(model_name, batch_size, sequence_length)
src/transformers/benchmark/benchmark_utils.py:679: in train_memory
return separate_process_wrapper_fn(self._train_memory, self.args.do_multi_processing)(*args, **kwargs)
src/transformers/benchmark/benchmark.py:79: in _train_memory
return self._measure_memory(_train)
src/transformers/benchmark/benchmark.py:258: in _measure_memory
memory_bytes = measure_peak_memory_cpu(func)
src/transformers/benchmark/benchmark_utils.py:291: in measure_peak_memory_cpu
mem_process.start()
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/process.py:121: in start
self._popen = self._Popen(self)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/context.py:224: in _Popen
return _default_context.get_context().Process._Popen(process_obj)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/context.py:283: in _Popen
return Popen(process_obj)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/popen_spawn_posix.py:32: in __init__
super().__init__(process_obj)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/popen_fork.py:19: in __init__
self._launch(process_obj)
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/popen_spawn_posix.py:47: in _launch
reduction.dump(process_obj, fp)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
obj = <MemoryMeasureProcess name='MemoryMeasureProcess-11' parent=1591 initial>, file = <_io.BytesIO object at 0x1a3f06270>, protocol = None
def dump(obj, file, protocol=None):
'''Replacement for pickle.dump() using ForkingPickler.'''
> ForkingPickler(file, protocol).dump(obj)
E AttributeError: Can't pickle local object 'measure_peak_memory_cpu.<locals>.MemoryMeasureProcess'
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/reduction.py:60: AttributeError
-------------------------------------------------------------------------------------------------------------------- Captured stdout call --------------------------------------------------------------------------------------------------------------------
1 / 1
====================================================================================================================== warnings summary ======================================================================================================================
venv/lib/python3.8/site-packages/tensorflow/python/autograph/impl/api.py:22
/Users/sergiov/PycharmProjects/transformers/venv/lib/python3.8/site-packages/tensorflow/python/autograph/impl/api.py:22: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
import imp
tests/test_benchmark.py::BenchmarkTest::test_inference_torchscript
/Users/sergiov/PycharmProjects/transformers/src/transformers/models/gpt2/modeling_gpt2.py:196: TracerWarning: Converting a tensor to a Python float might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
attn_weights = attn_weights / (float(value.size(-1)) ** 0.5)
-- Docs: https://docs.pytest.org/en/stable/warnings.html
================================================================================================================== short test summary info ===================================================================================================================
FAILED tests/test_benchmark.py::BenchmarkTest::test_inference_encoder_decoder_with_configs - AttributeError: Can't pickle local object 'measure_peak_memory_cpu.<locals>.MemoryMeasureProcess'
FAILED tests/test_benchmark.py::BenchmarkTest::test_inference_no_configs - AttributeError: Can't pickle local object 'measure_peak_memory_cpu.<locals>.MemoryMeasureProcess'
FAILED tests/test_benchmark.py::BenchmarkTest::test_inference_no_configs_only_pretrain - AttributeError: Can't pickle local object 'measure_peak_memory_cpu.<locals>.MemoryMeasureProcess'
FAILED tests/test_benchmark.py::BenchmarkTest::test_inference_no_model_no_architectures - AttributeError: Can't pickle local object 'measure_peak_memory_cpu.<locals>.MemoryMeasureProcess'
FAILED tests/test_benchmark.py::BenchmarkTest::test_inference_torchscript - AttributeError: Can't pickle local object 'measure_peak_memory_cpu.<locals>.MemoryMeasureProcess'
FAILED tests/test_benchmark.py::BenchmarkTest::test_inference_with_configs - AttributeError: Can't pickle local object 'measure_peak_memory_cpu.<locals>.MemoryMeasureProcess'
FAILED tests/test_benchmark.py::BenchmarkTest::test_save_csv_files - AttributeError: Can't pickle local object 'measure_peak_memory_cpu.<locals>.MemoryMeasureProcess'
FAILED tests/test_benchmark.py::BenchmarkTest::test_trace_memory - AttributeError: Can't pickle local object 'measure_peak_memory_cpu.<locals>.MemoryMeasureProcess'
FAILED tests/test_benchmark.py::BenchmarkTest::test_train_encoder_decoder_with_configs - AttributeError: Can't pickle local object 'measure_peak_memory_cpu.<locals>.MemoryMeasureProcess'
FAILED tests/test_benchmark.py::BenchmarkTest::test_train_no_configs - AttributeError: Can't pickle local object 'measure_peak_memory_cpu.<locals>.MemoryMeasureProcess'
FAILED tests/test_benchmark.py::BenchmarkTest::test_train_with_configs - AttributeError: Can't pickle local object 'measure_peak_memory_cpu.<locals>.MemoryMeasureProcess'
========================================================================================================= 11 failed, 2 skipped, 2 warnings in 11.71s =========================================================================================================
<|||||>I'm seeing a similar error when following the [Benchmark documentation ](https://huggingface.co/transformers/benchmarks.html) while using Python 3.9.6.
Test script executing documentation example:
```
import sys
import torch
import transformers
from transformers import PyTorchBenchmark, PyTorchBenchmarkArguments, BertConfig
print("Transformers Version: ", transformers.__version__)
print("PyTorch Version: ", torch.__version__)
print("Python Version: ", sys.version)
args = PyTorchBenchmarkArguments(models=["bert-base", "bert-384-hid", "bert-6-lay"], batch_sizes=[8], sequence_lengths=[8, 32, 128, 512])
config_base = BertConfig()
config_384_hid = BertConfig(hidden_size=384)
config_6_lay = BertConfig(num_hidden_layers=6)
benchmark = PyTorchBenchmark(args, configs=[config_base, config_384_hid, config_6_lay])
benchmark.run()
```
Output:
```
python test_benchmark.py
Transformers Version: 4.12.3
PyTorch Version: 1.10.0
Python Version: 3.9.6 (default, Sep 8 2021, 11:57:49)
[Clang 12.0.5 (clang-1205.0.22.11)]
1 / 3
Traceback (most recent call last):
File "/Users/daubner/dev/email-zoning/test_benchmark.py", line 15, in <module>
benchmark.run()
File "/Users/daubner/.venv/benchmark/lib/python3.9/site-packages/transformers/benchmark/benchmark_utils.py", line 707, in run
memory, inference_summary = self.inference_memory(model_name, batch_size, sequence_length)
File "/Users/daubner/.venv/benchmark/lib/python3.9/site-packages/transformers/benchmark/benchmark_utils.py", line 676, in inference_memory
return separate_process_wrapper_fn(self._inference_memory, self.args.do_multi_processing)(*args, **kwargs)
File "/Users/daubner/.venv/benchmark/lib/python3.9/site-packages/transformers/benchmark/benchmark_utils.py", line 101, in multi_process_func
p.start()
File "/Users/daubner/.pyenv/versions/3.9.6/lib/python3.9/multiprocessing/process.py", line 121, in start
self._popen = self._Popen(self)
File "/Users/daubner/.pyenv/versions/3.9.6/lib/python3.9/multiprocessing/context.py", line 224, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File "/Users/daubner/.pyenv/versions/3.9.6/lib/python3.9/multiprocessing/context.py", line 284, in _Popen
return Popen(process_obj)
File "/Users/daubner/.pyenv/versions/3.9.6/lib/python3.9/multiprocessing/popen_spawn_posix.py", line 32, in __init__
super().__init__(process_obj)
File "/Users/daubner/.pyenv/versions/3.9.6/lib/python3.9/multiprocessing/popen_fork.py", line 19, in __init__
self._launch(process_obj)
File "/Users/daubner/.pyenv/versions/3.9.6/lib/python3.9/multiprocessing/popen_spawn_posix.py", line 47, in _launch
reduction.dump(process_obj, fp)
File "/Users/daubner/.pyenv/versions/3.9.6/lib/python3.9/multiprocessing/reduction.py", line 60, in dump
ForkingPickler(file, protocol).dump(obj)
AttributeError: Can't pickle local object 'separate_process_wrapper_fn.<locals>.multi_process_func.<locals>.wrapper_func'
```
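For context, the `AttributeError` above is the generic "local objects cannot be pickled" limitation of the `spawn` start method (the default on macOS since Python 3.8): `spawn` has to pickle the process object to hand it to the child, and a class defined inside a function cannot be pickled. A minimal sketch (illustrative only, not taken from the benchmark code) that reproduces the same kind of error:
```python
import multiprocessing


def measure():
    # A Process subclass defined inside a function is a "local object";
    # the spawn start method must pickle it for the child process and fails.
    class MemoryMeasureProcess(multiprocessing.Process):
        def run(self):
            print("measuring in the child process")

    p = MemoryMeasureProcess()
    p.start()  # AttributeError: Can't pickle local object 'measure.<locals>.MemoryMeasureProcess'
    p.join()


if __name__ == "__main__":
    # "fork" (the old default) does not need to pickle the object at all.
    multiprocessing.set_start_method("spawn", force=True)
    measure()
```
Moving the class to module level (or forcing the `fork` start method) avoids the error.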
Appears to be caused by a difference in the `multiprocessing` library from Python 3.7 to 3.9. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Hey, bumping this to keep it alive - it's something we're definitely aware of but it's not a priority right now, given that tests seem to work in the CI. If anyone can investigate it and submit a PR that would be great, if not we'll get to it as soon as we can!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>I found time to get around to this but could not reproduce - `pytest test_benchmark.py` and `pytest test_tf_benchmark.py` work fine for me on Python 3.9.7 with the current master version of Transformers. Possibly the issue was resolved by a patch to transformers or Python in the meantime? |
transformers | 14,122 | closed | Rename variables with unclear naming | # What does this PR do?
This is a follow-up PR to #14085.
Edited: I use `remove_prefix_from_model` instead of `remove_prefix_from_init_model` to match the naming convention of the other variables.
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger @patrickvonplaten
| 10-22-2021 13:15:20 | 10-22-2021 13:15:20 | |
transformers | 14,121 | closed | [TPU tests] Enable first TPU examples pytorch | # What does this PR do?
This PR enables TPU tests for PyTorch/XLA. At the moment, only the `run_glue.py` example is covered by the daily scheduled tests; we should add more examples in the following weeks.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 10-22-2021 12:09:20 | 10-22-2021 12:09:20 | |
transformers | 14,119 | closed | [wav2vec2] Add missing --validation_split_percentage data arg | # What does this PR do?
This PR adds the missing `--validation_split_percentage` arg in wav2vec2 examples.
Default value comes from [`run_wav2vec2_pretraining_no_trainer.py`](https://github.com/huggingface/transformers/blob/70f186f61ebff4ad87fd7cc0fc1e0e0660b485ea/examples/pytorch/speech-pretraining/run_wav2vec2_pretraining_no_trainer.py#L95).
@stas00, @patrickvonplaten | 10-22-2021 11:11:21 | 10-22-2021 11:11:21 | |
transformers | 14,118 | closed | Adding `handle_long_generation` parameter for `text-generation` pipeline. | # What does this PR do?
Fixes #14033
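A rough usage sketch of the new option (the option name comes from this PR's title; the accepted value `"hole"` and the exact truncation behaviour are assumptions here, not read from the diff):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# Assumed semantics: "hole" drops tokens from the left of the prompt so that
# prompt + max_new_tokens still fits in the model's context window.
outputs = generator(
    "My name is Teven and I am " * 200,
    max_new_tokens=20,
    handle_long_generation="hole",
)
print(outputs[0]["generated_text"][-200:])
```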
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 10-22-2021 11:06:17 | 10-22-2021 11:06:17 |
transformers | 14,117 | closed | Replace assertions with ValueError exceptions | # What does this PR do?
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger Please review
| 10-22-2021 10:04:54 | 10-22-2021 10:04:54 | Contributes toward fixing #12789 <|||||>Hey @sgugger, can you please add hactoberfest-accepted tag also?<|||||>According to the rules, it's not necessary when the PR has been approved and merged:
> A pull request is considered approved once it has an overall approving review from maintainers, or has been merged by maintainers, or has been given the 'hacktoberfest-accepted' label.* |
transformers | 14,116 | closed | [tests] fix hubert test sort | # What does this PR do?
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 10-22-2021 08:41:47 | 10-22-2021 08:41:47 | |
transformers | 14,115 | closed | Add LayoutXLMProcessor (and LayoutXLMTokenizer, LayoutXLMTokenizerFast) | # What does this PR do?
This PR implements `LayoutXLMProcessor`, which can be used to prepare all data for [LayoutXLM](https://huggingface.co/transformers/master/model_doc/layoutxlm.html). LayoutXLM is a multilingual version of LayoutLMv2. It uses the same vocabulary as XLMRoBERTa.
Big thanks to @kingyiusuen for setting up a first draft. This PR is built on his work: #14030
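A minimal usage sketch of what the processor enables (the checkpoint name and the no-OCR call pattern are assumed to mirror `LayoutLMv2Processor`, they are not taken from this PR):
```python
from PIL import Image
from transformers import LayoutXLMProcessor

# apply_ocr=False because words and boxes are supplied by the caller.
processor = LayoutXLMProcessor.from_pretrained("microsoft/layoutxlm-base", apply_ocr=False)

image = Image.open("document.png").convert("RGB")
words = ["Rechnung", "Nr.", "12345"]
boxes = [[48, 84, 156, 98], [160, 84, 182, 98], [186, 84, 240, 98]]  # 0-1000 normalized

encoding = processor(image, words, boxes=boxes, return_tensors="pt")
print(encoding.keys())  # input_ids, attention_mask, bbox, image, ...
```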
- [x] To do: it might make sense to make a new "layoutxlm" folder in the models directory, where the following files can be added:
* `tokenization_layoutxlm.py`
* `tokenization_layoutxlm_fast.py`
* `processor_layoutxlm.py` | 10-22-2021 07:55:51 | 10-22-2021 07:55:51 | have you merged it into master? |
transformers | 14,114 | closed | [BUG] Tokenizer offset | ```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('xlm-roberta-base')
text = "k , u , o , 105 , 957 , 601 . "
out = tokenizer.encode_plus(text=text,
truncation=True,
max_length=128,
return_tensors='tf',
return_token_type_ids=True,
return_attention_mask=True,
return_offsets_mapping=True,
return_length=True)
print(tokenizer.tokenize(text))
text_offsets = out['offset_mapping'][0].numpy().tolist()
for i in text_offsets:
s,e = i
print(f"{(s,e)}:{text[s:e]}")
```
```
['▁k', '▁', ',', '▁u', '▁', ',', '▁o', '▁', ',', '▁105', '▁', ',', '▁9', '57', '▁', ',', '▁', '601', '▁', '.']
(0, 0):
(0, 1):k
(2, 3):,
(2, 3):,
(4, 5):u
(6, 7):,
(6, 7):,
(8, 9):o
(10, 11):,
(10, 11):,
(12, 15):105
(16, 17):,
(16, 17):,
(18, 19):9
(19, 21):57
(22, 23):,
(22, 23):,
(24, 25):6
(24, 27):601
(28, 29):.
(28, 29):.
(0, 0):
```
The offsets cannot be used to recover the original text:
1. Why is the offset of '▁' filled with the span of the following token?
2. Why does the token '601' get the offset (24, 25), which only covers `6`? | 10-22-2021 06:51:06 | 10-22-2021 06:51:06 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 14,113 | closed | Add a parameter "device" in Tokenizer.__call__() | Add a "device" parameter to Tokenizer.__call__() so that the returned BatchEncoding has its tensors placed on the specified device | 10-22-2021 05:37:20 | 10-22-2021 05:37:20 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 14,112 | closed | Update TP parallel GEMM image | I added column parallel and row parallel image

## reviewers
@stas00 | 10-22-2021 02:32:16 | 10-22-2021 02:32:16 | This image is wrong.
Don't merge yet.
I'll update image and explanation.<|||||><img width="812" alt="parallelism-tp-parallel_gemm" src="https://user-images.githubusercontent.com/38183241/138397848-0d73658d-3fda-4a95-b7f2-1f5e232d4e47.png">
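For reference, the two GEMM splits the figure illustrates (standard Megatron-style tensor parallelism; the equations below are added for clarity and are not part of the original PR):
```latex
% Column-parallel GEMM: A is split along its columns; each rank computes its slice of Y.
Y = XA,\quad A = [A_1,\, A_2] \;\Rightarrow\; Y = [XA_1,\; XA_2]

% Row-parallel GEMM: A is split along its rows (and X along its columns);
% the partial products are summed with an all-reduce to recover Y.
Y = XA,\quad X = [X_1,\, X_2],\; A = \begin{bmatrix} A_1 \\ A_2 \end{bmatrix}
\;\Rightarrow\; Y = X_1 A_1 + X_2 A_2
```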
<|||||>I updated the image! Can you merge this? @stas00<|||||>Thank you for making the improvements, @hyunwoongko!
transformers | 14,111 | closed | Replace asserts with exceptions | Hi! I replaced asserts in the following files, related to issue #12789:
- src/transformers/trainer_pt_utils.py
- tests/test_feature_extraction_detr.py
- tests/test_feature_extraction_clip.py
- tests/test_feature_extraction_common.py
- tests/test_trainer_seq2seq.py
## Before submitting
* [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
* [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
* [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
* [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
* [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger Please review | 10-21-2021 23:43:16 | 10-21-2021 23:43:16 | |
transformers | 14,110 | closed | LayoutLMv2 model not supporting training on more than 1 GPU when using PyTorch Data Parallel | ## Environment info
- `transformers` version: 4.11.2
- Platform: Linux-5.4.0-66-generic-x86_64-with-glibc2.10
- Python version: 3.8.8
- PyTorch version (GPU?): 1.9.1+cu102 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes
### Who can help
Models: LayoutLMv2 @NielsRogge
## Information
Model I am using: LayoutLMv2
The problem arises when using:
- my own modified scripts
The task I am working on is:
- token classification FUNSD
## To reproduce
Steps to reproduce the behavior:
1. Run the below script with more than 1 GPU
```python
from datasets import load_dataset
import torch
from torch.nn import DataParallel
from PIL import Image
from transformers import LayoutLMv2Processor
from datasets import Features, Sequence, ClassLabel, Value, Array2D, Array3D
from torch.utils.data import DataLoader
from transformers import LayoutLMv2ForTokenClassification, AdamW
import torch
from tqdm.notebook import tqdm
from datasets import load_metric
use_cuda = torch.cuda.is_available()
device= torch.device('cuda:0' if use_cuda else 'cpu')
print(device)
device_ids = [0,1]
datasets = load_dataset("nielsr/funsd")
labels = datasets['train'].features['ner_tags'].feature.names
print(labels)
id2label = {v: k for v, k in enumerate(labels)}
label2id = {k: v for v, k in enumerate(labels)}
##Next, let's use `LayoutLMv2Processor` to prepare the data for the model.
processor = LayoutLMv2Processor.from_pretrained("microsoft/layoutlmv2-base-uncased", revision="no_ocr")
# we need to define custom features
features = Features({
'image': Array3D(dtype="int64", shape=(3, 224, 224)),
'input_ids': Sequence(feature=Value(dtype='int64')),
'attention_mask': Sequence(Value(dtype='int64')),
'token_type_ids': Sequence(Value(dtype='int64')),
'bbox': Array2D(dtype="int64", shape=(512, 4)),
'labels': Sequence(ClassLabel(names=labels)),
})
def preprocess_data(examples):
images = [Image.open(path).convert("RGB") for path in examples['image_path']]
words = examples['words']
boxes = examples['bboxes']
word_labels = examples['ner_tags']
encoded_inputs = processor(images, words, boxes=boxes, word_labels=word_labels,
padding="max_length", truncation=True)
return encoded_inputs
train_dataset = datasets['train'].map(preprocess_data, batched=True, remove_columns=datasets['train'].column_names,
features=features)
test_dataset = datasets['test'].map(preprocess_data, batched=True, remove_columns=datasets['test'].column_names,
features=features)
processor.tokenizer.decode(train_dataset['input_ids'][0])
print(train_dataset['labels'][0])
##Finally, let's set the format to PyTorch, and place everything on the GPU:
train_dataset.set_format(type="torch", device=device)
test_dataset.set_format(type="torch", device=device)
train_dataset.features.keys()
##Next, we create corresponding dataloaders.
train_dataloader = DataLoader(train_dataset, batch_size=4, shuffle=True)
test_dataloader = DataLoader(test_dataset, batch_size=2)
##Let's verify a batch:
batch = next(iter(train_dataloader))
for k,v in batch.items():
print(k, v.shape)
## Train the model
##Here we train the model in native PyTorch. We use the AdamW optimizer.
model = LayoutLMv2ForTokenClassification.from_pretrained('microsoft/layoutlmv2-base-uncased',
num_labels=len(labels))
if use_cuda:
model = DataParallel(model,device_ids=device_ids)
model.to(device)
optimizer = AdamW(model.parameters(), lr=5e-5)
global_step = 0
num_train_epochs = 6
t_total = len(train_dataloader) * num_train_epochs # total number of training steps
#put the model in training mode
model.train()
for epoch in range(num_train_epochs):
print("Epoch:", epoch)
for batch in tqdm(train_dataloader):
# zero the parameter gradients
optimizer.zero_grad()
# forward + backward + optimize
outputs = model(**batch)
loss = outputs.loss
# print loss every 100 steps
if global_step % 100 == 0:
print(f"Loss after {global_step} steps: {loss.item()}")
loss.backward()
optimizer.step()
global_step += 1
## Evaluation
#Next, let's evaluate the model on the test set.
metric = load_metric("seqeval")
# put model in evaluation mode
model.eval()
for batch in tqdm(test_dataloader, desc="Evaluating"):
with torch.no_grad():
input_ids = batch['input_ids'].to(device)
bbox = batch['bbox'].to(device)
image = batch['image'].to(device)
attention_mask = batch['attention_mask'].to(device)
token_type_ids = batch['token_type_ids'].to(device)
labels = batch['labels'].to(device)
# forward pass
outputs = model(input_ids=input_ids, bbox=bbox, image=image, attention_mask=attention_mask,
token_type_ids=token_type_ids, labels=labels)
# predictions
predictions = outputs.logits.argmax(dim=2)
# Remove ignored index (special tokens)
true_predictions = [
[id2label[p.item()] for (p, l) in zip(prediction, label) if l != -100]
for prediction, label in zip(predictions, labels)
]
true_labels = [
[id2label[l.item()] for (p, l) in zip(prediction, label) if l != -100]
for prediction, label in zip(predictions, labels)
]
metric.add_batch(predictions=true_predictions, references=true_labels)
final_score = metric.compute()
print(final_score)
```
## Error
```
Epoch: 0
0%| | 0/38 [00:00<?, ?it/s]
/home/puneetm/anaconda3/lib/python3.8/site-packages/torch/_tensor.py:575: UserWarning: floor_divide is deprecated, and will be removed in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values.
To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor'). (Triggered internally at ../aten/src/ATen/native/BinaryOps.cpp:467.)
return torch.floor_divide(self, other)
Traceback (most recent call last):
File "llmv2_demo.py", line 111, in <module>
outputs = model(**batch)
File "/home/puneetm/anaconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/home/puneetm/anaconda3/lib/python3.8/site-packages/torch/nn/parallel/data_parallel.py", line 168, in forward
outputs = self.parallel_apply(replicas, inputs, kwargs)
File "/home/puneetm/anaconda3/lib/python3.8/site-packages/torch/nn/parallel/data_parallel.py", line 178, in parallel_apply
return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])
File "/home/puneetm/anaconda3/lib/python3.8/site-packages/torch/nn/parallel/parallel_apply.py", line 86, in parallel_apply
output.reraise()
File "/home/puneetm/anaconda3/lib/python3.8/site-packages/torch/_utils.py", line 425, in reraise
raise self.exc_type(msg)
RuntimeError: Caught RuntimeError in replica 1 on device 1.
Original Traceback (most recent call last):
File "/home/puneetm/anaconda3/lib/python3.8/site-packages/torch/nn/parallel/parallel_apply.py", line 61, in _worker
output = module(*input, **kwargs)
File "/home/puneetm/anaconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/home/puneetm/anaconda3/lib/python3.8/site-packages/transformers/models/layoutlmv2/modeling_layoutlmv2.py", line 1167, in forward
outputs = self.layoutlmv2(
File "/home/puneetm/anaconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/home/puneetm/anaconda3/lib/python3.8/site-packages/transformers/models/layoutlmv2/modeling_layoutlmv2.py", line 898, in forward
visual_emb = self._calc_img_embeddings(
File "/home/puneetm/anaconda3/lib/python3.8/site-packages/transformers/models/layoutlmv2/modeling_layoutlmv2.py", line 762, in _calc_img_embeddings
visual_embeddings = self.visual_proj(self.visual(image))
File "/home/puneetm/anaconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/home/puneetm/anaconda3/lib/python3.8/site-packages/transformers/models/layoutlmv2/modeling_layoutlmv2.py", line 590, in forward
features = self.backbone(images_input)
File "/home/puneetm/anaconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/home/puneetm/anaconda3/lib/python3.8/site-packages/detectron2/modeling/backbone/fpn.py", line 126, in forward
bottom_up_features = self.bottom_up(x)
File "/home/puneetm/anaconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/home/puneetm/anaconda3/lib/python3.8/site-packages/detectron2/modeling/backbone/resnet.py", line 449, in forward
x = stage(x)
File "/home/puneetm/anaconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/home/puneetm/anaconda3/lib/python3.8/site-packages/torch/nn/modules/container.py", line 139, in forward
input = module(input)
File "/home/puneetm/anaconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/home/puneetm/anaconda3/lib/python3.8/site-packages/detectron2/modeling/backbone/resnet.py", line 195, in forward
out = self.conv1(x)
File "/home/puneetm/anaconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/home/puneetm/anaconda3/lib/python3.8/site-packages/detectron2/layers/wrappers.py", line 84, in forward
x = F.conv2d(
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:1 and cuda:0! (when checking arugment for argument weight in method wrapper_cudnn_convolution)
``` | 10-21-2021 23:37:49 | 10-21-2021 23:37:49 | Hi,
I've answered this question [here](https://github.com/NielsRogge/Transformers-Tutorials/issues/30).
TL;DR: you need to first call `model.layoutlmv2.visual.synchronize_batch_norm()`.<|||||>Hi @NielsRogge, thanks for your quick response. I looked at that repo as well just a couple of minutes back. The problem I face using that solution is that it gives this error:
```
raise RuntimeError("Make sure torch.distributed is set up properly.")
RuntimeError: Make sure torch.distributed is set up properly.
```
I read the above-linked post. The OP there also faces the same problem and you recommend the following:
```
You probably first need to call torch.distributed.init_process_group() before starting training.
```
Using this in the code forces me to implement DistributedDataParallel instead of the conventional DataParallel. Can you suggest something to help further?
It requires setting up the backend, rank, and world_size for DistributedDataParallel. Is this the way to go? Can you give an example of a running script that handles batch synchronization without forcing with DataParallel?
Currently, I have added the following lines of code in my script:
```
os.environ['MASTER_ADDR'] = 'localhost'
os.environ['MASTER_PORT'] = '12355'
torch.distributed.init_process_group("nccl", rank=0, world_size=2)
model=LayoutLMv2_Classification_model().to(device)
model.LayoutLMv2Encoder.visual.synchronize_batch_norm()
```
The terminal hangs and there is no output displayed.
Any help on this case will be highly appreciated!! Thanks once again!<|||||>Are you running all of this in a notebook or as a script? The authors defined everything in a [Python script](https://github.com/microsoft/unilm/blob/master/layoutlmft/examples/run_funsd.py), which they then launch as follows:
```
cd layoutlmft
python -m torch.distributed.launch --nproc_per_node=4 examples/run_funsd.py \
--model_name_or_path microsoft/layoutlmv2-base-uncased \
--output_dir /tmp/test-ner \
--do_train \
--do_predict \
--max_steps 1000 \
--warmup_ratio 0.1 \
--fp16
```
That's the recommended way to train deep learning models with PyTorch on multiple GPUs. `torch.distributed.launch` is a helper utility that can be used to launch multiple processes per node for distributed training.
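The launcher starts one process per GPU and exports the rank/world-size environment variables for each of them, which is also why a manual `init_process_group(rank=0, world_size=2)` inside a single process simply hangs waiting for the second rank. A rough sketch of the per-process setup the launcher expects (illustrative only, not the exact example script):
```python
import os

import torch
import torch.distributed as dist
from transformers import LayoutLMv2ForTokenClassification

# torchrun sets LOCAL_RANK; the older torch.distributed.launch passes it as
# --local_rank unless --use_env is given.
local_rank = int(os.environ["LOCAL_RANK"])
torch.cuda.set_device(local_rank)
dist.init_process_group(backend="nccl")  # RANK / WORLD_SIZE / MASTER_ADDR come from the launcher

model = LayoutLMv2ForTokenClassification.from_pretrained(
    "microsoft/layoutlmv2-base-uncased", num_labels=7
).cuda(local_rank)
model.layoutlmv2.visual.synchronize_batch_norm()  # requires torch.distributed to be initialized

model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[local_rank])
```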
It would be great if we can add an example script for LayoutLMv2/LayoutXLM to the [examples folder](https://github.com/huggingface/transformers/tree/master/examples) of HuggingFace Transformers. It would mean updating the Python script for it to work with HuggingFace Transformers instead of the original unilm repository.
Are you interested in contributing this? <|||||>Actually, let me mark it as a "good first issue" (this is a good first contribution for people interested in contributing). This way, we can help others fine-tune LayoutLMv2 on multiple GPUs.<|||||>Shall I take this up ?<|||||>@harsha070 would be great! So the goal would be to add an example script that could be called `run_layoutlmv2.py` that uses the HuggingFace Trainer to fine-tune the model on the FUNSD dataset. You can also create a `run_layoutlmv2_no_trainer.py` script that leverages HuggingFace Accelerate instead to run on multiple GPUs.
Do you have a setup with more than 1 GPU?<|||||>Sure. Understood. Yes, I have a multi-GPU setup.<|||||>Awesome! You can take a look at the example run_ner.py script (or other example scripts), they all use the HfArgumentParser to automatically parse the command line arguments into model_args, data_args and training_args.
You can also take a look at my [example notebooks](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/LayoutLMv2/FUNSD) regarding fine-tuning LayoutLMv2 on the FUNSD dataset. Ideally, we also leverage HuggingFace Datasets, to automatically load the dataset from the hub. I've already uploaded that one a while ago: https://huggingface.co/datasets/nielsr/funsd
Let me know if you need any help!<|||||>Thank you for taking up this much needed suggestion. I've been running the [FUNSD trainer](https://github.com/harsha070/transformers/tree/master/examples/research_projects/unilm/layoutlmv2) with the following parameters:
```
CUDA_VISIBLE_DEVICES=0,1,2 torchrun --standalone --nnodes=1 --nproc_per_node=3 \
    run_layoutlmv2.py \
    --model_name_or_path microsoft/layoutlmv2-base-uncased \
    --processor_name microsoft/layoutlmv2-base-uncased \
    --output_dir /tmp/test-layoutlmv2 \
    --dataset_name nielsr/funsd \
    --do_train \
    --do_predict \
    --max_steps 1000 \
    --warmup_ratio 0.1 \
    --fp16 \
    --model_revision no_ocr \
    --per_device_train_batch_size 2
```
I seem to run into a segfault error about 25% into the process. Here's the trace using `CUDA_LAUNCH_BLOCKING=1`.
```Traceback (most recent call last):
File "/layoutlmv2/run_layoutlmv2.py", line 483, in <module>
main()
File "/layoutlmv2/run_layoutlmv2.py", line 414, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/miniconda3/lib/python3.9/site-packages/transformers/trainer.py", line 1316, in train
tr_loss_step = self.training_step(model, inputs)
File "/miniconda3/lib/python3.9/site-packages/transformers/trainer.py", line 1859, in training_step
self.scaler.scale(loss).backward()
File "/miniconda3/lib/python3.9/site-packages/torch/_tensor.py", line 307, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
File "/miniconda3/lib/python3.9/site-packages/torch/autograd/__init__.py", line 154, in backward
Variable._execution_engine.run_backward(
RuntimeError: cuDNN error: CUDNN_STATUS_EXECUTION_FAILED
You can try to repro this exception using the following code snippet. If that doesn't trigger the error, please include your original repro script when reporting this issue [NOTE: This did not trigger the error].
import torch
torch.backends.cuda.matmul.allow_tf32 = True
torch.backends.cudnn.benchmark = False
torch.backends.cudnn.deterministic = False
torch.backends.cudnn.allow_tf32 = True
data = torch.randn([2, 2048, 7, 7], dtype=torch.half, device='cuda', requires_grad=True)
net = torch.nn.Conv2d(2048, 2048, kernel_size=[1, 1], padding=[0, 0], stride=[1, 1], dilation=[1, 1], groups=1)
net = net.cuda().half()
out = net(data)
out.backward(torch.randn_like(out))
torch.cuda.synchronize()
ConvolutionParams
data_type = CUDNN_DATA_HALF
padding = [0, 0, 0]
stride = [1, 1, 0]
dilation = [1, 1, 0]
groups = 1
deterministic = false
allow_tf32 = true
input: TensorDescriptor 0x7f1ddc0c7520
type = CUDNN_DATA_HALF
nbDims = 4
dimA = 2, 2048, 7, 7,
strideA = 100352, 49, 7, 1,
output: TensorDescriptor 0x7f1ddc032260
type = CUDNN_DATA_HALF
nbDims = 4
dimA = 2, 2048, 7, 7,
strideA = 100352, 49, 7, 1,
weight: FilterDescriptor 0x7f1ddc0c48a0
type = CUDNN_DATA_HALF
tensor_format = CUDNN_TENSOR_NCHW
nbDims = 4
dimA = 2048, 2048, 1, 1,
Pointer addresses:
input: 0x7f1d7bd88000
output: 0x7f1d47190000
weight: 0x7f1d6c000000
Additional pointer addresses:
grad_output: 0x7f1d47190000
grad_weight: 0x7f1d6c000000
Backward filter algorithm: 3
[W CUDAGuardImpl.h:113] Warning: CUDA warning: an illegal memory access was encountered (function destroyEvent)
terminate called after throwing an instance of 'c10::CUDAError'
what(): CUDA error: an illegal memory access was encountered
CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Exception raised from create_event_internal at ../c10/cuda/CUDACachingAllocator.cpp:1211 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x42 (0x7f1f6a86fd62 in /miniconda3/lib/python3.9/site-packages/torch/lib/libc10.so)
frame #1: <unknown function> + 0x1c4d3 (0x7f1f6aad24d3 in /miniconda3/lib/python3.9/site-packages/torch/lib/libc10_cuda.so)
frame #2: c10::cuda::CUDACachingAllocator::raw_delete(void*) + 0x1a2 (0x7f1f6aad2ee2 in /miniconda3/lib/python3.9/site-packages/torch/lib/libc10_cuda.so)
frame #3: c10::TensorImpl::release_resources() + 0xa4 (0x7f1f6a859314 in /miniconda3/lib/python3.9/site-packages/torch/lib/libc10.so)
frame #4: std::vector<c10d::Reducer::Bucket, std::allocator<c10d::Reducer::Bucket> >::~vector() + 0x2e9 (0x7f1fb44ded49 in /miniconda3/lib/python3.9/site-packages/torch/lib/libtorch_cpu.so)
frame #5: c10d::Reducer::~Reducer() + 0x24d (0x7f1fb44d118d in /miniconda3/lib/python3.9/site-packages/torch/lib/libtorch_cpu.so)
frame #6: std::_Sp_counted_ptr<c10d::Reducer*, (__gnu_cxx::_Lock_policy)2>::_M_dispose() + 0x12 (0x7f1fc7dfbe82 in /miniconda3/lib/python3.9/site-packages/torch/lib/libtorch_python.so)
frame #7: std::_Sp_counted_base<(__gnu_cxx::_Lock_policy)2>::_M_release() + 0x46 (0x7f1fc722c696 in /miniconda3/lib/python3.9/site-packages/torch/lib/libtorch_python.so)
frame #8: <unknown function> + 0xe6c26f (0x7f1fc7dfe26f in /miniconda3/lib/python3.9/site-packages/torch/lib/libtorch_python.so)
frame #9: <unknown function> + 0x2a31e9 (0x7f1fc72351e9 in /miniconda3/lib/python3.9/site-packages/torch/lib/libtorch_python.so)
frame #10: <unknown function> + 0x2a44ee (0x7f1fc72364ee in /miniconda3/lib/python3.9/site-packages/torch/lib/libtorch_python.so)
frame #11: <unknown function> + 0x12255b (0x55a41ae8955b in /miniconda3/bin/python)
frame #12: <unknown function> + 0x1a9333 (0x55a41af10333 in /miniconda3/bin/python)
frame #13: <unknown function> + 0x12255b (0x55a41ae8955b in /miniconda3/bin/python)
frame #14: <unknown function> + 0x1a9333 (0x55a41af10333 in /miniconda3/bin/python)
frame #15: <unknown function> + 0x12283c (0x55a41ae8983c in /miniconda3/bin/python)
frame #16: <unknown function> + 0x134eb7 (0x55a41ae9beb7 in /miniconda3/bin/python)
frame #17: <unknown function> + 0x134e1c (0x55a41ae9be1c in /miniconda3/bin/python)
frame #18: <unknown function> + 0x162e08 (0x55a41aec9e08 in /miniconda3/bin/python)
frame #19: PyDict_SetItemString + 0x64 (0x55a41aee20c4 in /miniconda3/bin/python)
frame #20: <unknown function> + 0x26747b (0x55a41afce47b in /miniconda3/bin/python)
frame #21: Py_FinalizeEx + 0x191 (0x55a41afcea51 in /miniconda3/bin/python)
frame #22: Py_RunMain + 0x10c (0x55a41afd314c in /miniconda3/bin/python)
frame #23: Py_BytesMain + 0x39 (0x55a41afd35b9 in /miniconda3/bin/python)
frame #24: __libc_start_main + 0xe7 (0x7f1fd99d0bf7 in /lib/x86_64-linux-gnu/libc.so.6)
frame #25: <unknown function> + 0x1f4a64 (0x55a41af5ba64 in /miniconda3/bin/python)
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 33003 closing signal SIGTERM
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 33005 closing signal SIGTERM
```
If I run without launch blocking, I get:
```
Traceback (most recent call last):
File "/layoutlmv2/run_layoutlmv2.py", line 483, in <module>
main()
File "/layoutlmv2/run_layoutlmv2.py", line 414, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/miniconda3/lib/python3.9/site-packages/transformers/trainer.py", line 1316, in train
tr_loss_step = self.training_step(model, inputs)
File "/miniconda3/lib/python3.9/site-packages/transformers/trainer.py", line 1847, in training_step
loss = self.compute_loss(model, inputs)
File "/miniconda3/lib/python3.9/site-packages/transformers/trainer.py", line 1881, in compute_loss
outputs = model(**inputs)
File "/miniconda3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/miniconda3/lib/python3.9/site-packages/torch/nn/parallel/distributed.py", line 886, in forward
output = self.module(*inputs[0], **kwargs[0])
File "/miniconda3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/miniconda3/lib/python3.9/site-packages/transformers/models/layoutlmv2/modeling_layoutlmv2.py", line 1197, in forward
active_logits = logits.view(-1, self.num_labels)[active_loss]
RuntimeError: CUDA error: an illegal memory access was encountered
CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
terminate called after throwing an instance of 'c10::CUDAError'
what(): CUDA error: an illegal memory access was encountered
CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
```
==============================================================
Output of `python -m torch.utils.collect_env`:
```
PyTorch version: 1.10.0+cu102
Is debug build: False
CUDA used to build PyTorch: 10.2
ROCM used to build PyTorch: N/A
OS: Ubuntu 18.04.6 LTS (x86_64)
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Clang version: Could not collect
CMake version: version 3.18.0
Libc version: glibc-2.27
Python version: 3.9.5 (default, Jun 4 2021, 12:28:51) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-4.15.0-159-generic-x86_64-with-glibc2.27
Is CUDA available: True
CUDA runtime version: 9.1.85
GPU models and configuration:
GPU 0: GeForce GTX 1080 Ti
GPU 1: GeForce GTX 1080 Ti
GPU 2: GeForce GTX 1080 Ti
GPU 3: GeForce GTX 1080 Ti
GPU 4: GeForce GTX 1080 Ti
Nvidia driver version: 460.91.03
cuDNN version: Probably one of the following:
/usr/local/cuda-10.1/targets/x86_64-linux/lib/libcudnn.so.7.6.5
/usr/local/cuda-10.2/targets/x86_64-linux/lib/libcudnn.so.7.6.5
HIP runtime version: N/A
MIOpen runtime version: N/A
Versions of relevant libraries:
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.21.3
[pip3] torch==1.10.0
[pip3] torchvision==0.11.1
[conda] mypy-extensions 0.4.3 pypi_0 pypi
[conda] numpy 1.21.3 pypi_0 pypi
[conda] torch 1.10.0 pypi_0 pypi
[conda] torchvision 0.11.1 pypi_0 pypi
```<|||||>having the same issue with accelerate +1<|||||>With DistributedDataParallel and `model.layoutlmv2.visual.synchronize_batch_norm()`, I'm now seeing:
```
RuntimeError: Expected to have finished reduction in the prior iteration before starting a new one.
This error indicates that your module has parameters that were not used in producing loss. You can
enable unused parameter detection by passing the keyword argument `find_unused_parameters=True` to
`torch.nn.parallel.DistributedDataParallel`, and by making sure all `forward` function outputs
participate in calculating loss.
If you already have done the above, then the distributed data parallel module wasn't able to locate
the output tensors in the return value of your module's `forward` function. Please include the loss
function and the structure of the return value of `forward` of your module when reporting this issue
(e.g. list, dict, iterable).
Parameters which did not receive grad for rank 1: layoutlmv2.pooler.dense.bias,
layoutlmv2.pooler.dense.weight, layoutlmv2.visual.backbone.fpn_output4.bias,
layoutlmv2.visual.backbone.fpn_output4.weight, layoutlmv2.visual.backbone.fpn_output3.bias,
layoutlmv2.visual.backbone.fpn_output3.weight, layoutlmv2.visual.backbone.fpn_output5.weight,
layoutlmv2.visual.backbone.fpn_output5.bias
```
Did anybody else come across this? I tried setting a dataset-divisible total batch size and `dataloader_drop_last=True` in case it was some kind of batch norm issue - but no luck...
Setup details:
- transformers v4.17, running on SageMaker Distributed Data Parallel
- Trainer-based training, calling `training_args._setup_devices` then `model.layoutlmv2.visual.synchronize_batch_norm()` before setting up the `Trainer`
- Fine-tuning for token classification (tried both AutoModelForTokenClassification and specific LayoutLMv2ForTokenClassification)
- LayoutLMv2Processor is pre-applied in a `dataset.map()` before training
- Works fine in single-GPU / non-distributed setting<|||||>> With DistributedDataParallel and `model.layoutlmv2.visual.synchronize_batch_norm()`, I'm now seeing:
> [...]
@athewsey were you able to resolve the issue?<|||||>I managed to run it with multiple GPUs, not with `accelerate` but rather by launching with `torchrun --standalone --nnodes=1 --nproc_per_node=NUM_OF_GPUS` (i.e., one process per GPU on a single node) and the Trainer, and then encountered a RuntimeError.
This is because I believe there's a typo at https://github.com/huggingface/transformers/blob/main/src/transformers/models/layoutlmv2/modeling_layoutlmv2.py#L607 which should be `if not (world_size % node_size == 0)` and not `if not (world_size & node_size == 0)`
(e.g., `4 & 4` evaluates to `4`, so `world_size & node_size == 0` is `False`, the `not (...)` guard is `True`, and the RuntimeError is raised)
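A quick standalone illustration of the two checks (not the library code itself):
```python
world_size, node_size = 4, 4

# Buggy check (bitwise AND): 4 & 4 == 4, so (world_size & node_size == 0)
# is False, `not (...)` is True, and the RuntimeError branch fires.
raises_with_and = not (world_size & node_size == 0)   # True

# Intended check (modulo): 4 % 4 == 0, so the error branch is skipped.
raises_with_mod = not (world_size % node_size == 0)   # False

print(raises_with_and, raises_with_mod)  # True False
```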
After this one character fix, now working fine.
@NielsRogge Would this be a tiny but good PR for a fix?<|||||>@akkikiki feel free to open a PR! |
transformers | 14,109 | closed | [deepspeed integration] HF Trainer takes over GPUs for DP | on multi-gpu machine when `deepspeed` launcher is used w/o the `--num_gpus` arguments to run HF/DS integration, HF Trainer takes over and does DP over all available GPUs, which defeats the purpose of using deepspeed to fit a larger than GPU model.
Normally when no `--num_gpus` argument is passed, deepspeed automatically uses all gpus, so here we have a conflict for those users who are used to that automatic behavior.
So currently we need to always use something like `deepspeed --num_gpus 2` explicitly, which tells the HF Trainer not to use DP over those GPUs.
The proper solution is to make the HF Trainer skip launching DP whenever `--deepspeed` is passed.
Thanks to Lintang Sutawika for reporting this. | 10-21-2021 21:24:27 | 10-21-2021 21:24:27 | Odd, I can't reproduce this.
on 2 gpus with `deepspeed --num_gpus 2` or just `deepspeed` I get the same world_size=2 inside Trainer, so it shouldn't make any difference.
So I'm going to close this for now. We can re-open it if someone reproduces the issue. |
transformers | 14,108 | closed | Fix a typo in preprocessing docs | # What does this PR do?
Fixes a simple typo in preprocessing documentation: mode -> model.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 10-21-2021 20:51:13 | 10-21-2021 20:51:13 | Thanks a lot! |
transformers | 14,107 | closed | GPT-J Models Cannot Load If Tokens Have Been Resized Using resize_token_embeddings Method | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` versions: 4.11.1, 4.11.3
- Platform: Ubuntu 18.04, Windows
- Python version: 3.7.5
### Who can help
@patil-suraj @patrickvonplaten, @LysandreJik @stas00
## Information
Something seems to be wrong in the serialization of GPT-J models: they are not behaving like other models in the library. After loading a GPT-J model using `.from_pretrained`, calling `model.resize_token_embeddings`, and saving the model using `.save_pretrained`, it is not possible to load the model again using `.from_pretrained`. Is there a suggested workaround?
This error is thrown:
```
RuntimeError: Error(s) in loading state_dict for GPTJForCausalLM:
size mismatch for lm_head.weight: copying a param with shape torch.Size([50400, 4096]) from checkpoint, the shape in current model is torch.Size([54001, 4096]).
size mismatch for lm_head.bias: copying a param with shape torch.Size([50400]) from checkpoint, the shape in current model is torch.Size([54001]).
```
## To reproduce
```
from transformers import GPTJForCausalLM
model = GPTJForCausalLM.from_pretrained("EleutherAI/gpt-j-6B")
model.resize_token_embeddings(54001)
model.save_pretrained("model_dir")
model = GPTJForCausalLM.from_pretrained("model_dir")
```
## Expected behavior
The model should be loaded correctly if the token embeddings have been changed in the standard way.
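In the meantime, one possible user-level workaround is to grow `lm_head` by hand after resizing, since `resize_token_embeddings` does not touch it here (an untested sketch; `new_vocab_size` just mirrors the repro above):
```python
import torch
from transformers import GPTJForCausalLM

new_vocab_size = 54001  # same value as in the repro above

model = GPTJForCausalLM.from_pretrained("EleutherAI/gpt-j-6B")
model.resize_token_embeddings(new_vocab_size)

# lm_head is not tied to the input embeddings in GPT-J, so grow it manually,
# copying the old rows and leaving the new rows randomly initialized.
old_head = model.lm_head
new_head = torch.nn.Linear(old_head.in_features, new_vocab_size, bias=old_head.bias is not None)
new_head.weight.data[: old_head.out_features] = old_head.weight.data
if old_head.bias is not None:
    new_head.bias.data[: old_head.out_features] = old_head.bias.data
model.lm_head = new_head

model.save_pretrained("model_dir")
model = GPTJForCausalLM.from_pretrained("model_dir")  # should now round-trip
```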
| 10-21-2021 20:00:31 | 10-21-2021 20:00:31 | @patil-suraj Changing the two methods below in modeling_gptj.py fixes the problem, but it needs to fine-tune longer than other gpt-style models (2 epochs vs. 1), so I'm not sure that this is correct. Please advise.
Methods updated:
```
    def get_output_embeddings(self):
        return self.lm_head

    def set_output_embeddings(self, new_embeddings):
        self.lm_head = new_embeddings
```<|||||>@alexorona We also need to set `tie_word_embeddings=False` in `config`, because GPT-J does not share word embeddings with `lm_head`. It was not set by default, so when we resize the embeddings, the weights get tied, maybe this could be the issue. |
transformers | 14,106 | closed | Automatic Mixed Precision Support for (some) Flax Transformers | # 🚀 Feature request
<!-- A clear and concise description of the feature proposal.
Please provide a link to the paper and code in case they exist. -->
[Mixed precision](https://arxiv.org/abs/1710.03740) can reduce memory usage and improve computation efficiency. Thanks to bfloat16, it is very straightforward on TPU. However, on GPU it requires type casting for certain operations. It would be great if automatic type casting for mixed precision were implemented in (some popular) Flax models so that they can also run efficiently on GPUs.
## Motivation
Ideally, we would want JAX/Flax Transformers to run easily and efficiently on both GPU and TPU. Having mixed precision would help speed up both training and inference by big margins. However, on GPU this requires both loss/gradient scaling and autocasting to be implemented. Currently, autocasting is not supported by JAX or Flax (although gradient scaling is).
The JAX team has said AMP should be external to JAX: https://github.com/google/jax/issues/5257.
Flax has a milestone for this, which is of low priority for the moment: https://github.com/google/flax/milestone/6.
My intuition, which may be wrong, is that since we are dealing with transformers, there should be a relatively small set of operations, such as softmax, that need special treatment compared to the [full set of ops in PyTorch](https://pytorch.org/docs/stable/amp.html#autocast-op-reference).
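To illustrate the kind of selective casting involved, here is a rough sketch (not actual Transformers code; the module and field names are made up):
```python
import jax
import jax.numpy as jnp
import flax.linen as nn

class HalfPrecisionAttentionScores(nn.Module):
    """Compute the matmul in half precision but keep softmax in float32."""
    dtype: jnp.dtype = jnp.float16  # bfloat16 on TPU, float16 on GPU

    @nn.compact
    def __call__(self, query, key):
        query = query.astype(self.dtype)
        key = key.astype(self.dtype)
        scores = jnp.einsum("...qd,...kd->...qk", query, key)
        # softmax is one of the numerically sensitive ops, so up-cast first
        return jax.nn.softmax(scores.astype(jnp.float32), axis=-1)
```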
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 14,105 | closed | T5ForConditionalGeneration `prepare_inputs_for_generation` causes problems for `sample` function | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.10.2
- Platform: Linux Manjaro
- Python version: 3.9.7
- PyTorch version (GPU?): torch==1.9.1+cpu
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
The problem is in T5ForConditionalGeneration, so I think @patrickvonplaten, @patil-suraj,
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- T5: @patrickvonplaten, @patil-suraj
Library:
- Text generation: @patrickvonplaten
- Tokenizers: @LysandreJik
- Trainer: @sgugger
- Pipelines: @Narsil
- Speech: @patrickvonplaten, @anton-l
- Vision: @NielsRogge, @sgugger
Documentation: @sgugger
Model hub:
- for issues with a model, report at https://discuss.huggingface.co/ and tag the model's creator.
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
For research projetcs, please ping the contributor directly. For example, on the following projects:
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
## Information
Model I am using (Bert, XLNet ...): T5ForConditionalGeneration
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
I'm trying to do a multinomial sampling using a pre-trained T5 model ([razent/SciFive-large-Pubmed_PMC](https://huggingface.co/razent/SciFive-large-Pubmed_PMC)).
## To reproduce
Steps to reproduce the behavior:
1. create a tokenizer and model using T5ForConditionalGeneration class (e.g. [razent/SciFive-large-Pubmed_PMC](https://huggingface.co/razent/SciFive-large-Pubmed_PMC)
2. call the `model.sample(input_ids=input_ids)` with any random input_ids
3. you will encounter the following error: `You have to specify either input_ids or inputs_embeds`
I believe the cause of the problem is the following line, together with the T5ForConditionalGeneration implementation of `prepare_inputs_for_generation`:
https://github.com/huggingface/transformers/blob/234cfefbb083d2614a55f6093b0badfb2efc3b45/src/transformers/generation_utils.py#L1528
where `decoder_input_ids` is set instead of `input_ids`
https://github.com/huggingface/transformers/blob/234cfefbb083d2614a55f6093b0badfb2efc3b45/src/transformers/models/t5/modeling_t5.py#L1678
while the `forward` function used in `generation_utils.py` expects an `input_ids` parameter, not `decoder_input_ids`: https://github.com/huggingface/transformers/blob/234cfefbb083d2614a55f6093b0badfb2efc3b45/src/transformers/generation_utils.py#L1531
| 10-21-2021 19:20:46 | 10-21-2021 19:20:46 | Hi,
The `sample` method is used by both decoder-only (GPT-2-like) models and seq2seq (encoder-decoder) models like T5. With encoder-decoder models, during generation, the encoder is called only once and the decoder is used in the generation loop, which expects `decoder_input_ids`. Since T5 is an encoder-decoder model, it accepts both `input_ids` and `decoder_input_ids`.
And the error occurs because, for generation with seq2seq models, we first need to pass the `input_ids` through the encoder to get the `encoder_hidden_states`, which are then passed to the decoder. If we don't pass `encoder_hidden_states`, the T5 model expects `input_ids` or `inputs_embeds` so that it can call the encoder and get the `encoder_hidden_states` itself, and it raises an error if none of these are passed. That is the issue here.
For seq2seq models like T5, I would recommend using the `generate` method directly and passing `do_sample=True` for sampling, instead of directly using the `sample` method. Otherwise, you'll need to manually call the encoder to get the `encoder_hidden_states`, prepare the initial `decoder_input_ids`, and then call the `sample` method.
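Roughly, using the checkpoint from the report (the manual path below is only an outline, not a drop-in recipe):
```python
import torch
from transformers import AutoTokenizer, T5ForConditionalGeneration

model_name = "razent/SciFive-large-Pubmed_PMC"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)

input_ids = tokenizer("some biomedical input text", return_tensors="pt").input_ids

# Recommended: generate() runs the encoder once and prepares decoder_input_ids for you
outputs = model.generate(input_ids, do_sample=True, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

# Manual alternative (roughly): run the encoder yourself, then call sample()
encoder_outputs = model.get_encoder()(input_ids)
decoder_input_ids = torch.full(
    (input_ids.shape[0], 1), model.config.decoder_start_token_id, dtype=torch.long
)
sampled = model.sample(decoder_input_ids, encoder_outputs=encoder_outputs)
```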
Hope this helps!
<|||||>Thank you for the detailed response. `generate` is working properly with `do_sample=True` |
transformers | 14,104 | closed | Add Multiple Choice Pytorch Example for Hellaswag | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
Added example for `hellaswag` dataset.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 10-21-2021 17:35:48 | 10-21-2021 17:35:48 | One thing I was wondering about is replacing the `swag` script with `hellaswag`, mainly because `swag` has been shown to contain artifacts in the endings themselves. This was shown in the `hellaswag` paper itself, so a `hellaswag` example would be more useful to researchers than a `swag` example. `swag` is comparatively less used in papers nowadays. If you don't feel this way, feel free to close this PR.<|||||>I'm fine with switching datasets, if you want to amend your PR in that sense :-)<|||||>I've replaced `swag` with `hellaswag` to switch these datasets.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 14,103 | closed | Keras callback for prompting metrics | # What does this PR do?
It's a Keras callback that will print metrics at the end of every epoch.
It's tested on the summarization example in the examples folder.
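For readers, a minimal sketch of what such a callback might look like (this is not the code from this PR; `metric_fn` and `eval_dataset` are illustrative names, and the exact prediction handling depends on the model's output format):
```python
import tensorflow as tf

class MetricPrintingCallback(tf.keras.callbacks.Callback):
    """Compute and print metrics on a held-out dataset at the end of every epoch."""

    def __init__(self, metric_fn, eval_dataset):
        super().__init__()
        self.metric_fn = metric_fn        # e.g. takes (predictions, labels) and returns a dict
        self.eval_dataset = eval_dataset  # a tf.data.Dataset yielding (features, labels)

    def on_epoch_end(self, epoch, logs=None):
        predictions, labels = [], []
        for batch_features, batch_labels in self.eval_dataset:
            batch_preds = self.model.predict_on_batch(batch_features)
            predictions.extend(list(batch_preds))
            labels.extend(list(batch_labels.numpy()))
        print(f"Epoch {epoch}: {self.metric_fn(predictions, labels)}")
```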
## Who can review?
@Rocketknight1 @LysandreJik
| 10-21-2021 15:37:51 | 10-21-2021 15:37:51 | Looks good so far! Here's what I'm thinking:
1. We should probably use the Callback's `self.model` instead of expecting a global `model` object - that might work in notebooks, but it's not great otherwise.
2. Looking at it, maybe you were right that we're asking the user to do too much work, and we should make predictions from the model in this callback too. It's definitely trickier, though, because Keras doesn't like concatenating variable-length predictions together. Maybe our code can predict on batches, break the batches into lists and then make big lists of predictions and labels?<|||||>Migrated to here: https://github.com/huggingface/transformers/pull/14867 |
transformers | 14,102 | closed | Fix typo in comment | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #14101
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 10-21-2021 15:12:36 | 10-21-2021 15:12:36 | Thanks for fixing! |
transformers | 14,101 | closed | Comments typo | https://github.com/huggingface/transformers/blob/3187228206cce052c5df0a8643fe85d2fd50e6a0/src/transformers/models/gpt2/modeling_gpt2.py#L811
Ou -> or | 10-21-2021 15:09:11 | 10-21-2021 15:09:11 | proposed fix #14102 |
transformers | 14,100 | closed | bert onnx to tensorrt wrong | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version:
- Platform: ubuntu 18
- Python version: 4.11.3
- PyTorch version (GPU?):1.8.0 gpu
- Tensorflow version (GPU?):
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?:
I get the BERT ONNX checkpoint by running:
`python -m transformers.onnx --model=bert-base-cased onnx/bert-base-cased/`
Then, I want to convert the ONNX model to TensorRT:
```bash
TensorRT-8.2.0.6/bin/trtexec --verbose \
  --workspace=20000 \
  --minShapes="input_ids":1x1,'attention_mask':1x1,'token_type_ids':1x1 \
  --optShapes="input_ids":4x64,"attention_mask":4x64,"token_type_ids":4x64 \
  --maxShapes="input_ids":8x128,"attention_mask":8x128,"token_type_ids":8x128 \
  --onnx=./onnx/bert-base-cased/model.onnx \
  --saveEngine=./trt/bert-base-cased/model.trt \
  --fp16
```
However, I get this error:

Can you give me some help?
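For reference, the exported graph's input names and dynamic axes can be inspected like this (a small sketch, not part of the original report), to check that they match the names passed to `--minShapes`/`--optShapes`/`--maxShapes`:
```python
import onnx

model = onnx.load("onnx/bert-base-cased/model.onnx")
for inp in model.graph.input:
    dims = [d.dim_param or d.dim_value for d in inp.type.tensor_type.shape.dim]
    print(inp.name, dims)
```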
| 10-21-2021 13:30:02 | 10-21-2021 13:30:02 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>I am currently working on that here: https://github.com/ELS-RD/transformer-deploy
TBH, it's not a simple problem, but it brings lots of performance improvement compared to vanilla pytorch or even CUDA provider on ONNX Runtime<|||||>Hey @WallE-Chang this issue would be better suited for the `optimum` [library](https://github.com/huggingface/optimum), which is more focused on these framework-specific aspects. Would you mind opening your issue there?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 14,099 | closed | [Examples] Add audio classification notebooks | # What does this PR do?
Adds a notebook showing how to fine-tune Wav2Vec2 on Keyword Spotting
Notebook PR: https://github.com/huggingface/notebooks/pull/99 | 10-21-2021 12:40:59 | 10-21-2021 12:40:59 | |
transformers | 14,098 | closed | Replace assertion with ValueError exception | Replaces the assertion in hf_argparser.py with a ValueError exception.
Contributes towards fixing issue #12789
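For illustration, the general shape of this kind of change (an illustrative pattern, not the exact `hf_argparser.py` code):
```python
def check_choice(value, allowed):
    # Before:
    #   assert value in allowed, f"{value!r} is not an allowed value"
    # After: an explicit exception survives `python -O` and gives callers a
    # well-defined error type to catch.
    if value not in allowed:
        raise ValueError(f"{value!r} is not an allowed value (choose from {allowed})")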
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
| 10-21-2021 11:52:13 | 10-21-2021 11:52:13 | |
transformers | 14,097 | open | `T5ForSequenceClassification` | # 🚀 Feature request
T5 to classify sequences by using only the encoder of T5 and a `ClassificationHead`.
## Motivation
This gives the benefits of fine-tuning a model with no maximum sequence length (useful for long sequence tasks) without having to load the decoder weights into memory/treat it as a generative task.
## Your contribution
I already have working code for this, and I've seen some requests for it in other forums (Slack, PyTorch, Hugging Face), so if it's a welcome addition I'd be happy to add it to the library.
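For readers, a rough sketch of the general idea (this is not the author's code; the mean pooling and dropout choices are assumptions):
```python
import torch
from torch import nn
from transformers import T5EncoderModel

class T5EncoderForSequenceClassification(nn.Module):
    def __init__(self, model_name="t5-base", num_labels=2):
        super().__init__()
        self.encoder = T5EncoderModel.from_pretrained(model_name)
        self.classification_head = nn.Sequential(
            nn.Dropout(0.1), nn.Linear(self.encoder.config.d_model, num_labels)
        )

    def forward(self, input_ids, attention_mask=None):
        hidden = self.encoder(input_ids, attention_mask=attention_mask).last_hidden_state
        if attention_mask is not None:
            # mean-pool over non-padding tokens only
            mask = attention_mask.unsqueeze(-1).type_as(hidden)
            pooled = (hidden * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)
        else:
            pooled = hidden.mean(dim=1)
        return self.classification_head(pooled)  # logits, shape (batch, num_labels)
```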
| 10-21-2021 09:47:41 | 10-21-2021 09:47:41 | `T5ForMultipleChoice` would also be very helpful.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>This seems like a useful addition, especially considering the EncT5 [paper](https://arxiv.org/pdf/2110.08426.pdf)<|||||>any update on this? <|||||>Maybe of interest to @NielsRogge <|||||>Token Classification would also be very interesting when I think of evaluations for Big Science project.<|||||>But w.r.t. sequence classification, shouldn't it be similar to the sequence classification that is used for the BART model, as seen here:
https://github.com/huggingface/transformers/blob/7799b6128feb17b63c47ed77a71fa367d26492d2/src/transformers/models/bart/modeling_bart.py#L1415-L1530
:thinking: <|||||>Hi. I have done this at https://github.com/subhalingamd/transformers/commit/82db59d553ca870fed69e619d57286bcbed04337. But the list of uninitialized weights doesn't seem convincing at all. Here is the list for reference (for `t5-small`):
```
Some weights of the model checkpoint at t5-small were not used when initializing T5ForSequenceClassification: ['encoder.block.0.layer.0.SelfAttention.relative_attention_bias.weight', 'encoder.block.4.layer.1.layer_norm.weight', 'decoder.block.4.layer.0.layer_norm.weight', 'encoder.block.4.layer.1.DenseReluDense.wo.weight', 'decoder.block.4.layer.1.EncDecAttention.v.weight', 'encoder.block.0.layer.1.layer_norm.weight', 'encoder.block.1.layer.0.SelfAttention.o.weight', 'decoder.block.4.layer.0.SelfAttention.o.weight', 'decoder.block.0.layer.2.layer_norm.weight', 'decoder.block.2.layer.0.SelfAttention.o.weight', 'encoder.block.1.layer.1.DenseReluDense.wo.weight', 'decoder.block.2.layer.1.EncDecAttention.k.weight', 'encoder.block.2.layer.1.DenseReluDense.wo.weight', 'decoder.final_layer_norm.weight', 'decoder.block.2.layer.1.EncDecAttention.q.weight', 'decoder.block.2.layer.0.SelfAttention.k.weight', 'decoder.block.5.layer.0.SelfAttention.o.weight', 'encoder.block.0.layer.1.DenseReluDense.wi.weight', 'encoder.block.4.layer.0.SelfAttention.k.weight', 'decoder.block.1.layer.1.EncDecAttention.o.weight', 'encoder.block.1.layer.0.SelfAttention.v.weight', 'encoder.block.1.layer.0.SelfAttention.k.weight', 'decoder.block.3.layer.0.layer_norm.weight', 'encoder.block.1.layer.0.layer_norm.weight', 'encoder.block.4.layer.0.layer_norm.weight', 'decoder.block.5.layer.0.SelfAttention.v.weight', 'decoder.block.3.layer.1.EncDecAttention.v.weight', 'encoder.block.2.layer.0.layer_norm.weight', 'encoder.block.3.layer.0.SelfAttention.k.weight', 'decoder.block.1.layer.1.EncDecAttention.k.weight', 'encoder.block.3.layer.1.DenseReluDense.wo.weight', 'encoder.block.3.layer.1.layer_norm.weight', 'encoder.block.0.layer.0.layer_norm.weight', 'decoder.block.2.layer.1.layer_norm.weight', 'decoder.block.2.layer.2.layer_norm.weight', 'decoder.block.0.layer.1.EncDecAttention.v.weight', 'encoder.final_layer_norm.weight', 'decoder.block.5.layer.1.EncDecAttention.q.weight', 'encoder.block.5.layer.1.layer_norm.weight', 'decoder.block.4.layer.0.SelfAttention.v.weight', 'encoder.block.2.layer.0.SelfAttention.k.weight', 'encoder.block.3.layer.0.layer_norm.weight', 'encoder.block.0.layer.0.SelfAttention.o.weight', 'decoder.block.0.layer.1.layer_norm.weight', 'decoder.block.3.layer.1.EncDecAttention.k.weight', 'encoder.block.2.layer.0.SelfAttention.v.weight', 'encoder.block.4.layer.0.SelfAttention.q.weight', 'encoder.block.5.layer.0.SelfAttention.q.weight', 'decoder.block.4.layer.1.EncDecAttention.q.weight', 'decoder.block.2.layer.1.EncDecAttention.v.weight', 'encoder.block.2.layer.0.SelfAttention.o.weight', 'encoder.block.3.layer.0.SelfAttention.v.weight', 'decoder.block.5.layer.1.EncDecAttention.k.weight', 'encoder.block.0.layer.0.SelfAttention.v.weight', 'encoder.block.5.layer.1.DenseReluDense.wo.weight', 'decoder.block.1.layer.0.SelfAttention.k.weight', 'encoder.block.0.layer.1.DenseReluDense.wo.weight', 'decoder.block.1.layer.1.layer_norm.weight', 'encoder.block.1.layer.0.SelfAttention.q.weight', 'encoder.block.0.layer.0.SelfAttention.q.weight', 'decoder.block.5.layer.0.SelfAttention.k.weight', 'encoder.block.1.layer.1.layer_norm.weight', 'encoder.block.0.layer.0.SelfAttention.k.weight', 'encoder.block.2.layer.1.DenseReluDense.wi.weight', 'decoder.block.2.layer.2.DenseReluDense.wo.weight', 'encoder.block.5.layer.0.SelfAttention.o.weight', 'decoder.block.3.layer.0.SelfAttention.q.weight', 'decoder.block.3.layer.1.EncDecAttention.q.weight', 'encoder.block.3.layer.0.SelfAttention.q.weight', 
'decoder.block.2.layer.1.EncDecAttention.o.weight', 'decoder.block.0.layer.0.layer_norm.weight', 'decoder.block.0.layer.0.SelfAttention.q.weight', 'decoder.block.0.layer.1.EncDecAttention.relative_attention_bias.weight', 'encoder.block.3.layer.0.SelfAttention.o.weight', 'decoder.block.0.layer.1.EncDecAttention.o.weight', 'decoder.block.1.layer.0.SelfAttention.q.weight', 'decoder.block.1.layer.2.DenseReluDense.wo.weight', 'shared.weight', 'encoder.block.5.layer.0.SelfAttention.v.weight', 'decoder.block.3.layer.0.SelfAttention.o.weight', 'decoder.block.4.layer.0.SelfAttention.q.weight', 'decoder.block.0.layer.0.SelfAttention.relative_attention_bias.weight', 'decoder.block.3.layer.2.DenseReluDense.wo.weight', 'decoder.block.4.layer.1.EncDecAttention.k.weight', 'decoder.block.4.layer.1.EncDecAttention.o.weight', 'decoder.block.3.layer.1.EncDecAttention.o.weight', 'encoder.block.1.layer.1.DenseReluDense.wi.weight', 'encoder.block.4.layer.1.DenseReluDense.wi.weight', 'decoder.block.5.layer.2.DenseReluDense.wo.weight', 'encoder.block.4.layer.0.SelfAttention.v.weight', 'decoder.block.1.layer.0.SelfAttention.v.weight', 'decoder.block.5.layer.0.SelfAttention.q.weight', 'decoder.block.4.layer.2.DenseReluDense.wi.weight', 'decoder.block.0.layer.0.SelfAttention.o.weight', 'decoder.block.5.layer.0.layer_norm.weight', 'encoder.block.4.layer.0.SelfAttention.o.weight', 'decoder.block.3.layer.1.layer_norm.weight', 'decoder.block.3.layer.2.DenseReluDense.wi.weight', 'decoder.block.1.layer.2.layer_norm.weight', 'decoder.block.5.layer.2.layer_norm.weight', 'decoder.block.1.layer.1.EncDecAttention.v.weight', 'encoder.block.5.layer.1.DenseReluDense.wi.weight', 'encoder.block.5.layer.0.SelfAttention.k.weight', 'decoder.block.0.layer.2.DenseReluDense.wi.weight', 'decoder.block.5.layer.1.layer_norm.weight', 'decoder.block.5.layer.1.EncDecAttention.v.weight', 'encoder.block.3.layer.1.DenseReluDense.wi.weight', 'decoder.block.2.layer.0.SelfAttention.q.weight', 'decoder.block.4.layer.1.layer_norm.weight', 'decoder.block.2.layer.0.SelfAttention.v.weight', 'decoder.block.4.layer.2.DenseReluDense.wo.weight', 'decoder.block.5.layer.1.EncDecAttention.o.weight', 'decoder.block.5.layer.2.DenseReluDense.wi.weight', 'decoder.block.0.layer.0.SelfAttention.v.weight', 'decoder.block.2.layer.2.DenseReluDense.wi.weight', 'decoder.block.1.layer.0.SelfAttention.o.weight', 'decoder.block.3.layer.0.SelfAttention.k.weight', 'decoder.block.0.layer.2.DenseReluDense.wo.weight', 'decoder.block.0.layer.1.EncDecAttention.k.weight', 'encoder.block.2.layer.0.SelfAttention.q.weight', 'decoder.block.1.layer.1.EncDecAttention.q.weight', 'encoder.block.5.layer.0.layer_norm.weight', 'decoder.block.0.layer.1.EncDecAttention.q.weight', 'decoder.block.1.layer.0.layer_norm.weight', 'decoder.block.4.layer.0.SelfAttention.k.weight', 'encoder.block.2.layer.1.layer_norm.weight', 'decoder.block.0.layer.0.SelfAttention.k.weight', 'decoder.block.3.layer.2.layer_norm.weight', 'decoder.block.1.layer.2.DenseReluDense.wi.weight', 'decoder.block.3.layer.0.SelfAttention.v.weight', 'decoder.block.4.layer.2.layer_norm.weight', 'decoder.block.2.layer.0.layer_norm.weight']
- This IS expected if you are initializing T5ForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing T5ForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of T5ForSequenceClassification were not initialized from the model checkpoint at t5-small and are newly initialized: ['model.encoder.block.5.layer.0.SelfAttention.k.weight', 'model.decoder.block.4.layer.1.EncDecAttention.k.weight', 'model.decoder.block.4.layer.0.SelfAttention.k.weight', 'model.decoder.block.4.layer.2.DenseReluDense.wi.weight', 'model.decoder.block.3.layer.1.EncDecAttention.v.weight', 'model.decoder.block.0.layer.1.EncDecAttention.v.weight', 'model.decoder.block.5.layer.0.SelfAttention.v.weight', 'model.encoder.block.4.layer.0.SelfAttention.v.weight', 'model.decoder.block.5.layer.0.SelfAttention.q.weight', 'model.decoder.block.2.layer.1.EncDecAttention.k.weight', 'model.encoder.block.1.layer.0.SelfAttention.q.weight', 'model.encoder.block.2.layer.1.DenseReluDense.wo.weight', 'model.encoder.block.3.layer.0.SelfAttention.q.weight', 'model.decoder.block.0.layer.1.EncDecAttention.k.weight', 'model.decoder.block.3.layer.2.DenseReluDense.wi.weight', 'model.encoder.block.4.layer.0.SelfAttention.k.weight', 'model.encoder.block.4.layer.0.layer_norm.weight', 'model.decoder.block.0.layer.0.layer_norm.weight', 'model.decoder.block.4.layer.1.EncDecAttention.v.weight', 'model.decoder.block.3.layer.1.EncDecAttention.k.weight', 'model.decoder.block.2.layer.0.SelfAttention.v.weight', 'model.encoder.block.0.layer.1.layer_norm.weight', 'model.decoder.block.0.layer.0.SelfAttention.relative_attention_bias.weight', 'model.encoder.block.0.layer.0.SelfAttention.relative_attention_bias.weight', 'model.decoder.block.1.layer.0.SelfAttention.o.weight', 'model.decoder.block.1.layer.0.layer_norm.weight', 'model.encoder.block.4.layer.1.DenseReluDense.wo.weight', 'model.decoder.block.0.layer.0.SelfAttention.k.weight', 'model.decoder.block.0.layer.2.DenseReluDense.wo.weight', 'model.decoder.block.5.layer.1.EncDecAttention.o.weight', 'model.decoder.block.2.layer.0.SelfAttention.o.weight', 'model.decoder.block.3.layer.0.SelfAttention.v.weight', 'model.decoder.block.0.layer.1.EncDecAttention.o.weight', 'classification_head.out_proj.weight', 'model.decoder.block.2.layer.0.SelfAttention.q.weight', 'model.decoder.block.4.layer.0.SelfAttention.v.weight', 'model.encoder.block.3.layer.0.SelfAttention.v.weight', 'model.encoder.block.5.layer.0.SelfAttention.o.weight', 'model.decoder.block.2.layer.0.SelfAttention.k.weight', 'model.decoder.block.3.layer.1.EncDecAttention.o.weight', 'model.encoder.block.2.layer.0.SelfAttention.v.weight', 'model.decoder.block.1.layer.2.DenseReluDense.wi.weight', 'model.decoder.block.4.layer.1.layer_norm.weight', 'model.decoder.block.1.layer.1.EncDecAttention.q.weight', 'model.decoder.block.5.layer.1.EncDecAttention.q.weight', 'model.decoder.block.5.layer.2.DenseReluDense.wi.weight', 'model.encoder.embed_tokens.weight', 'model.encoder.block.1.layer.1.DenseReluDense.wo.weight', 'model.encoder.block.1.layer.1.DenseReluDense.wi.weight', 'model.encoder.block.3.layer.0.layer_norm.weight', 'model.encoder.block.0.layer.1.DenseReluDense.wo.weight', 'model.decoder.block.3.layer.2.layer_norm.weight', 'model.encoder.block.5.layer.1.layer_norm.weight', 'model.encoder.block.3.layer.1.DenseReluDense.wo.weight', 'model.encoder.block.4.layer.1.layer_norm.weight', 'model.decoder.block.3.layer.0.layer_norm.weight', 'model.encoder.block.0.layer.0.SelfAttention.k.weight', 'model.decoder.block.2.layer.0.layer_norm.weight', 'model.decoder.block.0.layer.1.EncDecAttention.q.weight', 'model.encoder.block.4.layer.0.SelfAttention.o.weight', 'model.decoder.block.2.layer.2.layer_norm.weight', 
'model.decoder.block.2.layer.2.DenseReluDense.wo.weight', 'model.decoder.block.0.layer.0.SelfAttention.q.weight', 'model.decoder.block.0.layer.0.SelfAttention.v.weight', 'model.encoder.final_layer_norm.weight', 'model.encoder.block.0.layer.0.layer_norm.weight', 'model.encoder.block.3.layer.0.SelfAttention.k.weight', 'model.encoder.block.0.layer.0.SelfAttention.q.weight', 'model.encoder.block.2.layer.1.DenseReluDense.wi.weight', 'model.decoder.final_layer_norm.weight', 'model.decoder.block.4.layer.2.DenseReluDense.wo.weight', 'model.decoder.block.3.layer.0.SelfAttention.q.weight', 'model.encoder.block.2.layer.0.SelfAttention.o.weight', 'model.decoder.block.3.layer.0.SelfAttention.k.weight', 'model.encoder.block.0.layer.0.SelfAttention.o.weight', 'model.encoder.block.4.layer.1.DenseReluDense.wi.weight', 'model.decoder.block.0.layer.2.layer_norm.weight', 'model.decoder.block.1.layer.1.layer_norm.weight', 'model.encoder.block.5.layer.1.DenseReluDense.wo.weight', 'model.encoder.block.1.layer.0.SelfAttention.v.weight', 'model.decoder.block.1.layer.0.SelfAttention.v.weight', 'model.encoder.block.1.layer.1.layer_norm.weight', 'classification_head.dense.bias', 'model.decoder.block.2.layer.1.layer_norm.weight', 'model.decoder.block.3.layer.1.EncDecAttention.q.weight', 'model.decoder.block.5.layer.2.DenseReluDense.wo.weight', 'model.encoder.block.3.layer.1.layer_norm.weight', 'model.decoder.block.1.layer.1.EncDecAttention.k.weight', 'model.decoder.block.0.layer.0.SelfAttention.o.weight', 'model.encoder.block.2.layer.0.SelfAttention.q.weight', 'model.decoder.block.5.layer.0.SelfAttention.o.weight', 'model.decoder.block.4.layer.0.SelfAttention.o.weight', 'model.decoder.embed_tokens.weight', 'model.decoder.block.2.layer.2.DenseReluDense.wi.weight', 'model.encoder.block.3.layer.0.SelfAttention.o.weight', 'model.encoder.block.0.layer.1.DenseReluDense.wi.weight', 'model.encoder.block.1.layer.0.SelfAttention.o.weight', 'model.decoder.block.0.layer.1.layer_norm.weight', 'model.decoder.block.3.layer.0.SelfAttention.o.weight', 'classification_head.dense.weight', 'model.encoder.block.5.layer.0.SelfAttention.v.weight', 'model.decoder.block.5.layer.1.EncDecAttention.v.weight', 'model.decoder.block.3.layer.2.DenseReluDense.wo.weight', 'model.decoder.block.4.layer.1.EncDecAttention.o.weight', 'model.decoder.block.2.layer.1.EncDecAttention.o.weight', 'model.decoder.block.4.layer.1.EncDecAttention.q.weight', 'model.shared.weight', 'model.decoder.block.4.layer.0.layer_norm.weight', 'model.encoder.block.5.layer.0.layer_norm.weight', 'model.encoder.block.5.layer.1.DenseReluDense.wi.weight', 'model.decoder.block.2.layer.1.EncDecAttention.q.weight', 'model.decoder.block.3.layer.1.layer_norm.weight', 'model.decoder.block.5.layer.1.layer_norm.weight', 'model.encoder.block.2.layer.1.layer_norm.weight', 'model.encoder.block.0.layer.0.SelfAttention.v.weight', 'model.encoder.block.2.layer.0.SelfAttention.k.weight', 'model.decoder.block.4.layer.0.SelfAttention.q.weight', 'model.encoder.block.1.layer.0.layer_norm.weight', 'model.decoder.block.1.layer.1.EncDecAttention.v.weight', 'model.decoder.block.0.layer.2.DenseReluDense.wi.weight', 'model.encoder.block.4.layer.0.SelfAttention.q.weight', 'model.decoder.block.5.layer.2.layer_norm.weight', 'model.decoder.block.5.layer.1.EncDecAttention.k.weight', 'model.encoder.block.3.layer.1.DenseReluDense.wi.weight', 'classification_head.out_proj.bias', 'model.encoder.block.5.layer.0.SelfAttention.q.weight', 'model.decoder.block.1.layer.0.SelfAttention.q.weight', 
'model.decoder.block.1.layer.1.EncDecAttention.o.weight', 'model.decoder.block.1.layer.2.DenseReluDense.wo.weight', 'model.encoder.block.2.layer.0.layer_norm.weight', 'model.decoder.block.1.layer.0.SelfAttention.k.weight', 'model.decoder.block.5.layer.0.layer_norm.weight', 'model.encoder.block.1.layer.0.SelfAttention.k.weight', 'model.decoder.block.4.layer.2.layer_norm.weight', 'model.decoder.block.1.layer.2.layer_norm.weight', 'model.decoder.block.5.layer.0.SelfAttention.k.weight', 'model.decoder.block.2.layer.1.EncDecAttention.v.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
```
In case of `BartForSequenceClassification` with `facebook/bart-large`, this is how it looks like:
```
Some weights of BartForSequenceClassification were not initialized from the model checkpoint at facebook/bart-large and are newly initialized: ['classification_head.dense.weight', 'classification_head.dense.bias', 'classification_head.out_proj.weight', 'classification_head.out_proj.bias']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
```
Am I missing out on something, or is it just that something extra has to be done to import the model? π€<|||||>@stefan-it / @subhalingamd the code for `BartForSequenceClassification` loads both the encoder and decoder parts for BART, which doesn't follow from the EncT5 paper - `model` should be `T5Encoder` only.
My solution is [here](https://github.com/MetcalfeTom/transformers/pull/1/files), happy to push it, though it's a lot of duplicate code. Should some refactoring be performed between this and Bart?
The other addition from the EncT5 paper is that the encoder outputs are pooled to simulate the text-to-text nature of classification using NLG. This is not in my implementation but can be added.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>@LysandreJik
can we expect this to be added sometime soon?<|||||>Hi,
I don't know if anyone is still working on this, but looking to recent works that uses just the encoder from T5 I implemented the class `T5ForSequenceClassification` and evaluated on GLUE.
Here is the repository with the code and results: https://github.com/osainz59/t5-encoder
Is there still an interest on implementing this into Transformers library?<|||||>Hi @osainz59 , your repo looks really interesting, did you also perform experiments on NER? I've recently added support for encoder-only fine-tuning into Flair library, and results are really promising for this kind of downstream task. However, both sequence and token classification would be awesome to have it in Transformers :hugs: <|||||>Hi @stefan-it , I did not test the `T5ForTokenClassification` yet since there is no direct comparison afaik to traditional T5. However I can run it on some datasets to ensure it works properly. Can you tell me which datasets might be of interest to test?<|||||>Hi @osainz59 I think one really interesting dataset would be the CoNLL-2003 (see https://huggingface.co/datasets/conll2003).
When testing the mT5 model series, the WikiANN (Rahimi splits from here: https://huggingface.co/datasets/wikiann) is also very interesting (train on English split only and test it on the other languages for comparisons with the mT5 paper) :)<|||||>Hi @stefan-it , I trained and evaluated the `T5ForTokenClassification` class on CoNLL-2003 and here are the results:
```
***** eval metrics *****
epoch = 25.0
eval_accuracy = 0.9916
eval_f1 = 0.9549
eval_loss = 0.0449
eval_precision = 0.9531
eval_recall = 0.9567
eval_runtime = 0:00:10.90
eval_samples = 3251
eval_samples_per_second = 298.079
eval_steps_per_second = 37.317
```
I think they are still a bit behind of RoBERTa, but at those levels of F1 is hard to decide. Nevertheless, I think these results suggests that T5-Enc could be an interesting addition to the Transformers library. <|||||>Hey @osainz59 thanks for reporting back! I would love to see this in Transformers directly!<|||||>> Hi, I don't know if anyone is still working on this, but looking to recent works that uses just the encoder from T5 I implemented the class `T5ForSequenceClassification` and evaluated on GLUE.
>
> Here is the repository with the code and results: https://github.com/osainz59/t5-encoder
>
> Is there still an interest on implementing this into Transformers library?
This is exactly what I am looking for!<|||||>I've opened a PR https://github.com/huggingface/transformers/pull/24726 for `T5ForSequenceClassification` following the structure of the `BartForSequenceClassification` so both Encoder and Decoder weights are being used.
Although based on the results shown in this thread it seems like we could also look into adding a version that only uses the Encoder as well. |
transformers | 14,096 | closed | Add BeitForSemanticSegmentation | # What does this PR do?
This PR is a follow-up of #12994, and adds the semantic segmentation head of [BEiT](https://arxiv.org/abs/2106.08254). It's the state-of-the-art model currently for semantic segmentation (i.e. the task of labeling each pixel of an image), on datasets like [ADE20k](https://groups.csail.mit.edu/vision/datasets/ADE20K/) and [CityScapes](https://www.cityscapes-dataset.com/) (see [this chart](https://paperswithcode.com/sota/semantic-segmentation-on-ade20k-val) on paperswithcode).
Now it's easily available with a HuggingFace API! :)
Models are on the hub: https://huggingface.co/models?search=ade-640
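For a quick idea of the API, a minimal inference sketch using one of the checkpoints from the search above:
```python
import requests
from PIL import Image
from transformers import BeitFeatureExtractor, BeitForSemanticSegmentation

ckpt = "microsoft/beit-base-finetuned-ade-640-640"
feature_extractor = BeitFeatureExtractor.from_pretrained(ckpt)
model = BeitForSemanticSegmentation.from_pretrained(ckpt)

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = feature_extractor(images=image, return_tensors="pt")
logits = model(**inputs).logits          # (batch, num_labels, height/4, width/4)
segmentation = logits.argmax(dim=1)      # per-pixel class indices
```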
Here's a notebook for quick inference: https://colab.research.google.com/drive/1AS3z0plOhWWibBvDgsQkRbfJB73vSsR3?usp=sharing | 10-21-2021 09:32:37 | 10-21-2021 09:32:37 | @NielsRogge
Do you have plans to add an example fine-tuning script for the segmentation task?<|||||>Hi @kamalkraj,
Yes I do plan to add that. However, this will become easier once the Image feature will be available in the Datasets library.<|||||>Okay @NielsRogge
I can update the flax version of beit, after this PR merge |
transformers | 14,095 | closed | saving the model after each epoch completion | I am running run_summarization.py file with mt5 model, and wanted to save the check point after each epoch. So, I changed the trainer file in the bigbird_flax.py file, accordingly but it is not happening. I have attached file

| 10-21-2021 09:06:31 | 10-21-2021 09:06:31 | Hi,
Can you ask this question on our [forum](https://discuss.huggingface.co/) please? We like to keep Github issues for bugs/feature requests.
Thanks! |
transformers | 14,094 | closed | fix typo in license docstring |
# What does this PR do?
There is a missing line in the license docstring.
last line: "# limitations under the License." is missing
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 10-21-2021 08:40:25 | 10-21-2021 08:40:25 | |
transformers | 14,093 | closed | [ASR] Small fix model card creation | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
If the config name is `None`, the model card should be created anyway.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 10-21-2021 08:22:44 | 10-21-2021 08:22:44 | |
transformers | 14,092 | closed | Different result come from local model and API | Issue closed | 10-21-2021 07:27:00 | 10-21-2021 07:27:00 | |
transformers | 14,091 | closed | Replace assertions with ValueError exceptions | # What does this PR do?
Related to #12789 .
Replaced all assert statements in the file `src/transformers/models/detr/feature_extraction_detr.py` with ValueError exceptions
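For context, the change follows this general pattern (an illustrative snippet only, not the actual DETR code; the function and variable names here are made up):
```python
def check_inputs(images, annotations):
    # Before: assert len(images) == len(annotations), "..."
    # (an assert is stripped under `python -O` and only raises a bare AssertionError)
    if len(images) != len(annotations):
        raise ValueError("There must be as many images as there are annotations")


check_inputs([0, 1], [{"id": 0}, {"id": 1}])  # passes silently
```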
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 10-21-2021 05:53:47 | 10-21-2021 05:53:47 | I had to force-push for cleanup, needed because of the multiple commits caused by failing checks.
|
transformers | 14,090 | closed | Fix assertion in models | # What does this PR do?
Replace assertions in src/transformers/models/marian/convert_marian_to_pytorch.py and src/transformers/models/luke/convert_luke_original_pytorch_checkpoint_to_pytorch.py with ValueError
<!-- Remove if not applicable -->
Related to (#12789)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@patrickvonplaten | 10-21-2021 05:14:37 | 10-21-2021 05:14:37 | Thanks for your suggestion! @sgugger I've changed my commit. |
transformers | 14,089 | closed | Shift operation in loss computation for seq2seq model | Hello, I'm using the Seq2seqTrainer provided in Legacy here https://github.com/huggingface/transformers/blob/master/examples/legacy/seq2seq/seq2seq_trainer.py#L162
I have a question about the `_compute_loss` function, as it provides three different ways to calculate the loss.
In general, for a text generation task, the labels and the logits need to be shifted against each other before calculating the loss, so that the model learns to predict the next token.
**I wonder whether the methods in line 163 and line 171 are missing the shifting operation.** Since the Seq2seqTrainer splits the `labels` from the `inputs`, a model called with `labels=None` will neither compute the loss nor shift the logits and labels in its `forward` function.
```
logits = model(**inputs, use_cache=False)[0] # no shift in the forward function
loss = self.loss_fn(logits.view(-1, logits.shape[-1]), labels.view(-1)) # no shift in the loss computation
```
Meanwhile, the method in line 168 does perform the shift, since the `labels` are passed to the `forward` function.
Do I understand this correctly? I hope to get your advice.
- Text generation: @patrickvonplaten
- Trainer: @sgugger
| 10-21-2021 02:56:45 | 10-21-2021 02:56:45 | Please note that this code is no longer maintained, so questions about it should be asked on the [forums](https://discuss.huggingface.co/) since we keep the issues for bugs and feature requests only.
As for your question, I'm not sure why you want to shift labels in a sequence-to-sequence problem. This Trainer is used for problems like summarization and translation, where there is no label shifting. If you want to apply it to a causal LM problem, you will need to shift your labels before sending them to the model.<|||||>Thanks for your fast reply.
Sorry for asking my question in the wrong place. I've opened a topic on the [forums](https://discuss.huggingface.co/t/seq2seq-loss-computation-in-trainer/10988).
As for your reply, I don't understand why there is no need for shifting in a seq2seq problem. The decoder inputs and the labels are identical. I think the labels need to be shifted by one position, because we need to predict the next token from the decoder inputs (see the small sketch below for the shift relationship I have in mind).
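For illustration, here is a minimal sketch (my own toy code, not the Trainer or model code) of that relationship: encoder-decoder models usually build the decoder *inputs* by shifting the labels right, so position `t` of the logits is already compared with label `t` without shifting the labels themselves.
```python
import torch

def shift_right(labels, decoder_start_token_id=0):
    # Build decoder inputs by prepending the start token and dropping the last label.
    shifted = labels.new_full(labels.shape, decoder_start_token_id)
    shifted[:, 1:] = labels[:, :-1].clone()
    return shifted

labels = torch.tensor([[42, 17, 5]])
decoder_input_ids = shift_right(labels)
print(decoder_input_ids)  # tensor([[ 0, 42, 17]]) -- the labels themselves stay [[42, 17, 5]]
```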
Perhaps we could further discuss this question on the [forums](https://discuss.huggingface.co/t/seq2seq-loss-computation-in-trainer/10988)?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 14,088 | closed | Change asserts in src/transformers/models/xlnet/ to raise ValueError | # What does this PR do?
Replaces asserts in:
src/transformers/models/xlnet/configuration_xlnet.py and src/transformers/models/xlnet/modeling_tf_xlnet.py
with a `ValueError` exception.
Contributes towards fixing issue #12789
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
### Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger
| 10-20-2021 23:58:49 | 10-20-2021 23:58:49 | |
transformers | 14,087 | closed | Fix broken link in the translation section of task summaries | # What does this PR do?
Fixes the broken link to the document that describes approaches to fine-tune models for a translation task.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 10-20-2021 18:56:04 | 10-20-2021 18:56:04 | |
transformers | 14,086 | closed | Unexpected sequences_scores in BeamSearchDecoderOnlyOutput | ## Environment info
- `transformers` version: 4.11.3
- Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.12
- PyTorch version (GPU?): 1.9.0+cu111 (False)
- Tensorflow version (GPU?): 2.6.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@qqaatw
## Information
This is a follow up to issue https://github.com/huggingface/transformers/issues/14065. I calculate the probability of each generated token conditional on the previous generated tokens as suggested in that previous issue:
```python
import torch
from transformers import AutoModelForCausalLM
from transformers import AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("gpt2", return_dict_in_generate=True)
tokenizer = AutoTokenizer.from_pretrained("gpt2")
input_ids = tokenizer("Today is a nice day", return_tensors="pt").input_ids
generated_outputs = model.generate(input_ids, num_beams=2, max_length=8, output_scores=True)
gen_ids = generated_outputs["sequences"][0, input_ids.shape[-1]:]
vocab_size = generated_outputs["scores"][0].shape[-1]
print(gen_ids) # tensor([ 11, 475, 314])
# Here we find out at each time-step, which beam each generated id belongs to.
values, indices = torch.topk(generated_outputs["scores"][0].view(-1), k=2)
print(values, (indices % vocab_size), (indices / vocab_size).long()) # tensor([-1.5148, -1.8792]) tensor([329, 11]) tensor([0, 0])
values, indices = torch.topk(generated_outputs["scores"][1].view(-1), k=2)
print(values, (indices % vocab_size), (indices / vocab_size).long()) # tensor([-3.6481, -3.9508]) tensor([475, 262]) tensor([1, 0])
values, indices = torch.topk(generated_outputs["scores"][2].view(-1), k=2)
print(values, (indices % vocab_size), (indices / vocab_size).long()) # tensor([-5.6957, -5.8748]) tensor([314, 340]) tensor([0, 0])
# So we know at time-step 1, the token id `475` belongs to beam 1, others belong to beam 0.
logprob_gen_token0 = generated_outputs["scores"][0][0, gen_ids[0]]
logprob_gen_token1 = generated_outputs["scores"][1][1, gen_ids[1]] - logprob_gen_token0
logprob_gen_token2 = generated_outputs["scores"][2][0, gen_ids[2]] - logprob_gen_token0 - logprob_gen_token1
print(logprob_gen_token0.exp(), logprob_gen_token1.exp(), logprob_gen_token2.exp())
# Outputs:
# tensor(0.1527) tensor(0.1705) tensor(0.1290)
```
Now I look at the `sequences_scores`:
```python
# tensor([0.4907])
generated_outputs["sequences_scores"].exp()
```
I would have expected the `sequences_scores` to be equal to the product of the probabilities of each generated token conditional on the previous generated tokens: 0.1527 * 0.1705 * 0.1290 = 0.0034.
I'm probably misunderstanding something. Thanks in advance for your help!
## To reproduce
See this colab:
https://colab.research.google.com/drive/11rRAFuNycLLDiDDwU02mBgXjpBOXCe4P#scrollTo=55CjTHwLc_gE
## Expected behavior
I would have expected the `sequences_scores` to be equal to the product of the probabilities of each generated token conditional on the previous generated tokens: 0.1527 * 0.1705 * 0.1290 = 0.0034 | 10-20-2021 18:51:32 | 10-20-2021 18:51:32 | ```
# Get average of log probs.
sum_logprob = sum((logprob_gen_token0, logprob_gen_token1, logprob_gen_token2))  # the conditional log probs above add up to the cumulative log prob
seq_scores = (sum_logprob / generated_outputs["sequences"].shape[-1])
print(seq_scores == generated_outputs["sequences_scores"][0]) # True
```<|||||>Thanks for the clarification! Do you know of any references that explain why the sequences_scores is defined this way? If we compare generations for different prompts using this definition, then a generation with a longer prompt may get a lower score even if it is more confident in its generated tokens than a generation with a shorter prompt and less confidence in its generated tokens. Obviously, I can calculate different scores for myself, but just wondering about the motivation for this definition. Thanks again!<|||||>Good question!
Based on my understanding, the conditional generation functionality of `transformers` can be used on pure decoder architectures like `GPT2` or encoder-decoder architectures such as `Bert2Bert`.
In the former case, the prompt is concatenated with the generated outputs, as that is how CLM works, so you might wonder whether a longer prompt leads to a lower sequence score. However, that concern only arises when different pairs of prompts and generated outputs are being compared, and I'm not 100% sure whether such a direct comparison is reasonable in the first place.
In the latter case, a prompt is separated from the generated outputs, i.e. the prompt is first fed into an encoder, and then, via cross-attention, a decoder combines the encoder outputs and decoder inputs to generate output_ids autoregressively. Therefore, because the prompt does not exist in the decoder outputs, this calculation works as you expected, i.e. the sequence score is not affected by the prompt's length.
Maybe this calculation can be adjusted to fit your use-case. That is, if the prompt is concatenated with generated outputs, then we only use generated sequence length to calculate averaged log probability.
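As a rough sketch of that adjustment (illustrative only, reusing the variables from the snippets above; this is not what `generate()` currently does):
```python
# Normalize by the number of *generated* tokens only, so a longer prompt does not
# dilute the per-token score of a decoder-only model like GPT-2.
prompt_length = input_ids.shape[-1]
generated_length = generated_outputs["sequences"].shape[-1] - prompt_length
adjusted_sequence_score = sum_logprob / generated_length
```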
cc @patrickvonplaten <|||||>Thanks for answering so far @qqaatw!
A couple of things here @hacobe,
it seems like you are using `beam_sample` with `gpt2` (`do_sample=True` + `num_beams > 1`) which is a very exotic use case. Since you are essentially sampling the output token each time, you can't assume that the `topk()` will return you the actual sampled token.
Instead you should rather use `generate(...)` with `do_sample=False` and check again<|||||>Thanks for taking a look at this @patrickvonplaten!
I don't set `do_sample` in `generate(...)` and it defaults to `False`. Does your comment still apply?<|||||>By the way, my goal is just to get the probabilities of the tokens that are generated from GPT2 using beam search. The code snippet in the "Information" section of the first comment was suggested by @qqaatw in a [previous issue](https://github.com/huggingface/transformers/issues/14065).<|||||>See: https://discuss.huggingface.co/t/generation-probabilities-how-to-compute-probabilities-of-output-scores-for-gpt2/3175/15?u=patrickvonplaten
I'll improve the behavior for beam search<|||||>https://github.com/huggingface/transformers/pull/14654<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 14,085 | closed | Fix ignore_mismatched_sizes | # What does this PR do?
<!-- Remove if not applicable -->
Fixes #14073
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 10-20-2021 18:49:49 | 10-20-2021 18:49:49 | As you can see from the multiple failing tests, this makes the use case where we have a model with a task-specific head fail, so it looks like the fix is more complicated than just swapping the lines.<|||||>I believe these failures are due to added AutoModel (TFAutoModel / FlaxAutoModel) test cases. I'm finding an appropriate output of AutoModel that can test mismatched sizes, but it seems that there are some model-specific restrictions or assertions.<|||||>Indeed, some of the models are failing because of inner math between the hidden size and others, whereas others fail differently. You should adapt your test to just the vocab_size maybe?<|||||>Thanks for fixing the tests! You should also ignore the test for LayoutLmv2 (who wants a `bbox`) and there is the encoder-decoder template test to fix, I left a suggestion for that.<|||||>Fixed, thanks for the suggestions.<|||||>I agree with @patrickvonplaten. Maybe we can rename `add_prefix` to `add_prefix_to_init_model` as well to make it more clear.<|||||>I agree with @patrickvonplaten suggestion. Let's merge this PR once the comment is addressed, and then we can do the renaming in a followup PR!<|||||>You will need to include [this commit](https://github.com/huggingface/transformers/commit/0f502682fb08c7dac5b255903f63dcd9cc2d68eb) in your PR branch as the latest release of PyTorch 1.10 broke our CI. |
transformers | 14,084 | closed | [new model] `GPTMeg` (a clone of `GPT2` with a few tiny changes) | # What does this PR do?
This PR tries to address [this issue](https://github.com/bigscience-workshop/Megatron-DeepSpeed/issues/138) by cloning `HF's GPT2` to create `GPTMeg` with a few tiny changes for fp16 adaptation.
The 3 sources of divergence are:
1. a `layer_norm` override that forces the computation to run in fp32, as `MixedFusedLayerNorm` (meg) performs it in fp32 and then casts back to fp16 (a rough sketch of this kind of change is shown right after this list)
2. a `gelu_fast` override that uses meg's version, which relies on `torch.jit.script` and returns diverging output under fp16 (the bwd function was only needed to support jit and is not the issue)
3. an `_attn` override that uses `torch.baddbmm` instead of `torch.matmul`; the divergence comes from the alpha factor being applied differently in the two approaches.
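To give a rough idea of what change 1 looks like (a sketch only; the class name and details here are mine, not the PR's):
```python
import torch

class FP32LayerNorm(torch.nn.LayerNorm):
    # Run the normalization in fp32 and cast the result back to the input dtype,
    # mimicking what Megatron's MixedFusedLayerNorm does under fp16.
    # The LayerNorm weights are kept in fp32 for this to work.
    def forward(self, hidden_states):
        return super().forward(hidden_states.float()).type_as(hidden_states)

ln = FP32LayerNorm(8)
x = torch.randn(2, 8, dtype=torch.float16)
print(ln(x).dtype)  # torch.float16, but the math happened in fp32
```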
Questions:
- need to decide whether `GPTMeg` is a good name - as BigScience is going to release several more variations of GPT2 - this particular model is however a straight Megatron-LM gpt2 without any changes from BigScience. We discovered that the HF gpt2 model is an almost exact fit wrt to producing the same logits under fp32, but quite a large divergence happens under fp16. So this variant does things differently under fp16.
- I'm not sure whether the tokenizer should be cloned as it's the same? Do we have models that re-use tokenizers from other models and don't provide their own? here GPT2Tok* is just fine
| 10-20-2021 16:11:28 | 10-20-2021 16:11:28 | Hey @sIncerass! If the tokenizer is identical, then there is no need to implement it once again: models may use different tokenizers by specifying the `tokenizer_name` value in their `config.json` file. If using the `GPT2Tokenizer`, you would put the following:
```
"tokenizer_name": "GPT2Tokenizer"
```
in either the `config.json` or the `tokenizer_config.json`.<|||||>@sIncerass, as this PR didn't get merged because of the delays with the licensing and naming, and meanwhile it has become too stale, we wanted to make sure you get credits for all the efforts you have invested into this PR, so we [added you as a contributor](https://github.com/huggingface/transformers/pull/17202/commits/e1db789c9348b90b2e7bc2c207f6e10b2b3a13c5) to the final incarnation of the BigScience architecture PR https://github.com/huggingface/transformers/pull/17202
and we can now safely close this PR. |
transformers | 14,083 | closed | FLAX-T5 - TPU not found Colab | Hello,
I'm using the code `run_t5_mlm_flax.py` on Google Colab in TPU mode.
I have the following problem:

And also:
`/usr/local/lib/python3.7/dist-packages/jax/__init__.py:27: UserWarning: cloud_tpu_init failed: ConnectionError(MaxRetryError("HTTPConnectionPool(host='metadata.google.internal', port=80): Max retries exceeded with url: /computeMetadata/v1/instance/attributes/agent-worker-number (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f4ff0494790>: Failed to establish a new connection: [Errno 110] Connection timed out'))")) `
`This a JAX bug; please report an issue at https://github.com/google/jax/issues`
`_warn(f"cloud_tpu_init failed: {repr(exc)}\n This a JAX bug; please report "`
The TPU is not found, and the code switches to CPU mode. I'm using these libraries:
`pip install datasets`
`pip install transformers`
`pip install flax`
`pip install optax`
and also this configuration I read:
`import jax.tools.colab_tpu`
`jax.tools.colab_tpu.setup_tpu()`
`print(jax.local_devices())`
`export XRT_TPU_CONFIG="localservice;0;localhost:51011"`
`unset LD_PRELOAD`
`USE_TORCH=0`
What can I do to use this code on Colab, or to use a Flax T5 with a TPU on Colab?
Thank you! | 10-20-2021 12:11:55 | 10-20-2021 12:11:55 | cc @patil-suraj <|||||>Hi @IVannarI!
The `run_t5_mlm_flax.py` should run on colab TPU. Could you maybe share the colab so we can reproduce the issue?
Also, `jax.tools.colab_tpu.setup_tpu()` should be called before any other code. Could you run this in the notebook and see if it detects the TPU ?<|||||>Hi @patil-suraj, thank your for you support!
`jax.tools.colab_tpu.setup_tpu()` detects the TPU in colab.
This is the link of a colab, with some comments: https://colab.research.google.com/drive/1e8TrnPtLWU16dImozCuh-tt_OSxbdzDH?usp=sharing
I put in a dummy dataset that will give an error, but it is enough to show the TPU problem.
Running it, I get this:

<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi guys,
@GenVr @patil-suraj did you find a solution?
Getting the same errors running run_t5_mlm_flax.py on Colab

jax.local_devices() finds the device but the script switches to CPU
<|||||>@0syrys see [here](https://discuss.huggingface.co/t/how-to-use-tpu-for-model-training-using-example-script-run-mlm-py/15510), it's solved<|||||>@GenVr
thanks for the help!
Unfortunately it's still not working
Seems to me xla_spawn is only needed for the PyTorch scripts?
Also the flax version run_t5_mlm_flax.py does not take a "--tpu_num_cores 8" argument or similar
So still a bit clueless
I created a public Colab with all the needed files in it:
https://drive.google.com/drive/folders/19TnTy_-h-MfccWWwKkGrll4wsN5X64CE?usp=sharing
For now the flax script ran just fine on the Colab GPU
But when I scale up the model size TPU would be nice I guess...
Can also try to switch to generic pytorch or tensorflow scripts for TPU training
But the whole T5 Preprocessing Pipeline is just perfect for what I'm doing
<|||||>@0syrys Hello. I'm running into the same issue you mention here: the "run_t5_mlm_flax.py" script can't find the Colab TPU. I also found that the link you provided is broken. Could you explain a bit more about how to solve this issue? I'd really appreciate it, thanks!<|||||>Found the issue. We need to call
```python
import jax.tools.colab_tpu
jax.tools.colab_tpu.setup_tpu()
```
in the script before importing anything JAX related. Calling `setup_tpu()` in the colab and then launching the script won't work because these are two different processes. So adding these two lines in the script before any JAX/Flax import should fix this issue.
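In other words, the very top of `run_t5_mlm_flax.py` should look roughly like this (a sketch; only the first two lines are the actual fix, the rest of the script stays unchanged):
```python
# Must run before any JAX/Flax import: the script is a separate process from the
# notebook, so it has to register the Colab TPU itself.
import jax.tools.colab_tpu
jax.tools.colab_tpu.setup_tpu()

import jax  # now safe to import JAX/Flax
print(jax.local_devices())  # should list the 8 TPU cores instead of falling back to CPU
```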
<|||||>@patil-suraj
Thanks that's it! |
transformers | 14,082 | closed | Fix convert for newer megatron-lm bert model | # What does this PR do?
Because both GPT2 and BERT share the same underlying issue of different tensor ordering, similar modifications in [convert_megatron_gpt2_checkpoint.py](https://github.com/huggingface/transformers/blob/master/src/transformers/models/megatron_gpt2/convert_megatron_gpt2_checkpoint.py) are needed to convert newer Megatron-LM BERT models.
I have verified that this fix is necessary to fine-tune Megatron-LM BERT correctly with the transformers API.
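To illustrate what "different tensor ordering" refers to, here is a rough sketch with made-up shapes (the real converter handles this per checkpoint version and per weight, so take the exact layouts as assumptions):
```python
import torch

# Newer Megatron-LM checkpoints store the fused QKV weight grouped per attention head,
# roughly [num_heads, 3, head_dim, hidden], while the HF layout expects the Q/K/V blocks
# to be contiguous, roughly [3, num_heads, head_dim, hidden].
num_heads, head_dim, hidden = 4, 8, 32
megatron_qkv = torch.randn(num_heads, 3, head_dim, hidden)

hf_qkv = megatron_qkv.transpose(0, 1).contiguous()         # -> [3, num_heads, head_dim, hidden]
hf_qkv = hf_qkv.reshape(3 * num_heads * head_dim, hidden)  # flatten back into a 2D weight
print(hf_qkv.shape)  # torch.Size([96, 32])
```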
## Who can review?
@LysandreJik @jdemouth | 10-20-2021 11:36:23 | 10-20-2021 11:36:23 | I believe @stas00 has worked on a conversion between Megatron and HF Transformers - Stas, can you confirm you have used it and if so, can you take a look at this PR?<|||||>Oh, I completely forgot I had a related PR waiting in Draft mode ;) Switched it from Draft https://github.com/huggingface/transformers/pull/13928
So, yes, did a lot of work on GPT2/Megatron recently.
No, haven't done any work on Bert/Megatron, so I'm not aware of the nuances to qualify as a reviewer for this one.
@yoquankara, from a quick look I'd only suggested to add a proper way of saving config, which should be:
```
config.save_pretrained(path)
```
instead of the manual saving, which misses some important bits.
and may be the tokenizer file addition code as well?
For reference of the 2 changes see the very end of my PR https://github.com/huggingface/transformers/pull/13928/files
Perhaps setting `config.tokenizer_class` as well.
and you need to run `make fixup` and push again to fix the style.<|||||>@stas00 Thank you for your review and the pointer to a proper way of saving config!
Regarding the tokenizer, Nvidia's Megatron BERT models use their own tokenizer models, while their GPT2 checkpoint uses the default `gpt2` tokenizer. So I didn't add a similar `tokenizer_model_name` here.
https://huggingface.co/nvidia/megatron-bert-cased-345m
https://huggingface.co/nvidia/megatron-bert-uncased-345m
I've also run `make fixup` but nothing was wrong...
```make: Nothing to be done for `src/transformers/models/megatron_bert/convert_megatron_bert_checkpoint.py'.```
I will investigate more about `ci/circleci: check_code_quality`.<|||||>Code style has been fixed.<|||||>@LysandreJik @stas00
What else should I do to make progress on this PR?<|||||>I suppose we just want to make sure that updated script works with:
1. the old official nvidia bert checkpoint release
2. a model trained on the modern Meg-LM code-base
so to test for both:
1. convert to HF
2. load it in HF and test that it works; "works" could mean checking that it gives the same loss, or that it generates the same output on the Meg and HF sides (a minimal sketch of such a check is shown right after this list)
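A minimal version of such a check could look something like this (a sketch only; the checkpoint path is a placeholder and the vocab choice is an assumption):
```python
import torch
from transformers import BertTokenizer, MegatronBertForMaskedLM

tokenizer = BertTokenizer.from_pretrained("bert-large-uncased")  # placeholder vocab
model = MegatronBertForMaskedLM.from_pretrained("path/to/converted-checkpoint")
model.eval()

inputs = tokenizer("The capital of France is [MASK].", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]
print(tokenizer.decode(int(logits[0, mask_pos].argmax())))  # expect something like "paris"
```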
At least that's the validation process I did before proposing the GPT2 PR.<|||||>Thank you, totally makes sense. I'll find time to also test the old official model and post the validation result when finished.<|||||>@LysandreJik - should we just merge this? As even if there are issue here it's an improvement over the original version.<|||||>@jdemouth, could you comment on this if you have a bit of bandwidth?
Otherwise, let's go ahead and merge this next week.<|||||>ping<|||||>@yoquankara were you able to run tests to validate that it does not break the code for the older models? <|||||>I have a feeling @yoquankara has either given up on our process or is busy with work/life.
@jdemouth, Should we merge it and deal with any potential problems if they arise?
<|||||>@stas00 - I agree. I think we should merge and we'll fix things if something breaks.<|||||>@stas00 @jdemouth
I apologize for not being able to proceed. This has always been on my mind, but I have been quite occupied with other things.
Thank you for your understanding and merging decision. I will try my best to follow up when things go better.<|||||>Thank you for taking the time to share your needs, @yoquankara
That is totally understandable - you had no obligation to do anything - we just wanted to know whether you wanted to continue being involved. But otherwise, we hope to see you in another PR in the future.<|||||>_io.UnsupportedOperation: seek. You can only torch.load from a file that is seekable. Please pre-load the data into a buffer like io.BytesIO and try to load from it instead._
Getting this error while converting the BERT checkpoint<|||||>You probably don't realize it but your comment is not actionable, @kaushalshetty, since we have no idea what you did.
Please file a proper bug report so that we could reproduce the problem and then it'd be possible to act on it and help you with your need. Make sure to include the full traceback in your report.
Thank you!<|||||>I am so sorry. I understand that. My bad !
So here's what I have :
- `transformers` version: 4.17.0
- Platform:
- Python version: 3.6
- PyTorch version (GPU?): 1.10.1+cu102
- Tensorflow version (GPU?): 2.6
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- ALBERT, BERT, XLM, DeBERTa, DeBERTa-v2, ELECTRA, MobileBert, SqueezeBert: @LysandreJik
- T5, Pegasus, EncoderDecoder: @patrickvonplaten
- Blenderbot, MBART, BART, Marian, Pegasus: @patil-suraj
- Longformer, Reformer, TransfoXL, XLNet, FNet, BigBird: @patrickvonplaten
- FSMT: @stas00
- Funnel: @sgugger
- GPT-2, GPT: @patil-suraj, @patrickvonplaten, @LysandreJik
- RAG, DPR: @patrickvonplaten, @lhoestq
- TensorFlow: @Rocketknight1
- JAX/Flax: @patil-suraj
- TAPAS, LayoutLM, LayoutLMv2, LUKE, ViT, BEiT, DEiT, DETR, CANINE: @NielsRogge
- GPT-Neo, GPT-J, CLIP: @patil-suraj
- Wav2Vec2, HuBERT, SpeechEncoderDecoder, UniSpeech, UniSpeechSAT, SEW, SEW-D, Speech2Text: @patrickvonplaten, @anton-l
If the model isn't in the list, ping @LysandreJik who will redirect you to the correct contributor.
Library:
- Benchmarks: @patrickvonplaten
- Deepspeed: @stas00
- Ray/raytune: @richardliaw, @amogkam
- Text generation: @patrickvonplaten @narsil
- Tokenizers: @SaulLu
- Trainer: @sgugger
- Pipelines: @Narsil
- Speech: @patrickvonplaten, @anton-l
- Vision: @NielsRogge, @sgugger
Documentation: @sgugger
Model hub:
- for issues with a model, report at https://discuss.huggingface.co/ and tag the model's creator.
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
For research projetcs, please ping the contributor directly. For example, on the following projects:
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
@stas00 @LysandreJik
## Information
Model I am using Megatron-BERT(megatron-bert-uncased-345m):
The problem arises when using:
* trying to install megatron through https://huggingface.co/nvidia/megatron-bert-uncased-345m .
The tasks I am working on is:
* to get megatron embeddings
## To reproduce
Steps to reproduce the behavior:
1. export MYDIR=$HOME
2. git clone https://github.com/huggingface/transformers.git $MYDIR/transformers
3. mkdir -p $MYDIR/nvidia/megatron-bert-uncased-345m
4. wget --content-disposition https://api.ngc.nvidia.com/v2/models/nvidia/megatron_bert_345m/versions/v0.1_uncased/zip -O $MYDIR/nvidia/megatron-bert-uncased-345m/checkpoint.zip
5. python3 $MYDIR/transformers/src/transformers/models/megatron_bert/convert_megatron_bert_checkpoint.py $MYDIR/nvidia/megatron-bert-uncased-345m/checkpoint.zip. This gives me the below error.
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
```
Traceback (most recent call last):
File "/opt/omniai/software/Miniconda/lib/python3.6/site-packages/torch/serialization.py", line 308, in _check_seekable
f.seek(f.tell())
io.UnsupportedOperation: seek
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/omniai-jupyter/transformers/src/transformers/models/megatron_bert/convert_megatron_bert_checkpoint.py", line 327, in <module>
main()
File "/home/omniai-jupyter/transformers/src/transformers/models/megatron_bert/convert_megatron_bert_checkpoint.py", line 296, in main
input_state_dict = torch.load(pytorch_dict, map_location="cpu")
File "/opt/omniai/software/Miniconda/lib/python3.6/site-packages/torch/serialization.py", line 594, in load
with _open_file_like(f, 'rb') as opened_file:
File "/opt/omniai/software/Miniconda/lib/python3.6/site-packages/torch/serialization.py", line 235, in _open_file_like
return _open_buffer_reader(name_or_buffer)
File "/opt/omniai/software/Miniconda/lib/python3.6/site-packages/torch/serialization.py", line 220, in __init__
_check_seekable(buffer)
File "/opt/omniai/software/Miniconda/lib/python3.6/site-packages/torch/serialization.py", line 311, in _check_seekable
raise_err_msg(["seek", "tell"], e)
File "/opt/omniai/software/Miniconda/lib/python3.6/site-packages/torch/serialization.py", line 304, in raise_err_msg
raise type(e)(msg)
io.UnsupportedOperation: seek. You can only torch.load from a file that is seekable. Please pre-load the data into a buffer like io.BytesIO and try to load from it instead
```
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
Expect megatron checkpoint gets converted to huggingface format.<|||||>That's excellent, but in the future please open a new Issue. Once a PR is merged or an Issue is closed it's very difficult to track those.
I tested your use case and it worked for me with python 3.8:
```
Extracting PyTorch state dictionary from "megatron-bert-uncased-345m/checkpoint.zip"
Converting
Saving config
Saving checkpoint to "megatron-bert-uncased-345m/pytorch_model.bin"
```
and it indeed fails with python-3.6:
```
Traceback (most recent call last):
File "../../transformers-master/src/transformers/models/megatron_bert/convert_megatron_bert_checkpoint.py", line 327, in <module>
main()
File "../../transformers-master/src/transformers/models/megatron_bert/convert_megatron_bert_checkpoint.py", line 296, in main
input_state_dict = torch.load(pytorch_dict, map_location="cpu")
File "/home/stas/anaconda3/envs/py36-pt18/lib/python3.6/site-packages/torch/serialization.py", line 579, in load
with _open_file_like(f, 'rb') as opened_file:
File "/home/stas/anaconda3/envs/py36-pt18/lib/python3.6/site-packages/torch/serialization.py", line 235, in _open_file_like
return _open_buffer_reader(name_or_buffer)
File "/home/stas/anaconda3/envs/py36-pt18/lib/python3.6/site-packages/torch/serialization.py", line 220, in __init__
_check_seekable(buffer)
File "/home/stas/anaconda3/envs/py36-pt18/lib/python3.6/site-packages/torch/serialization.py", line 311, in _check_seekable
raise_err_msg(["seek", "tell"], e)
File "/home/stas/anaconda3/envs/py36-pt18/lib/python3.6/site-packages/torch/serialization.py", line 304, in raise_err_msg
raise type(e)(msg)
io.UnsupportedOperation: seek. You can only torch.load from a file that is seekable. Please pre-load the data into a buffer like io.BytesIO and try to load from it instead.
```
So `torch.load` w/ python-3.6 doesn't like the zip file handle. So here is a quick workaround for you:
```
$ unzip megatron-bert-uncased-345m/checkpoint.zip
Archive: megatron-bert-uncased-345m/checkpoint.zip
inflating: config.json
inflating: latest_checkpointed_iteration.txt
inflating: release/mp_rank_00/model_optim_rng.pt
$ python ../../transformers-master/src/transformers/models/megatron_bert/convert_megatron_bert_checkpoint.py release/mp_rank_00/model_optim_rng.pt
Extracting PyTorch state dictionary from "release/mp_rank_00/model_optim_rng.pt"
Converting
Saving config
Saving checkpoint to "release/mp_rank_00/pytorch_model.bin"
```
I'm passing the actual checkpoint file instead of the zip file in case it wasn't clear from the command line.
or you can upgrade to a higher python version, 3.6 is very old.
------------------------
@LysandreJik, @sgugger - what do we want to do here as a long term fix?
I propose we catch that it's Python 3.6 and refuse to deal with the zipped checkpoint, raising an error that asks the user to unzip it first?
the same will need to be done for `megatron_gpt`
There is no problem with with py-3.7 and higher.
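Something along the lines of the proposed guard (an untested sketch; the function name is made up):
```python
import sys
import zipfile

def refuse_zip_on_py36(checkpoint_path):
    # torch.load on Python 3.6 chokes on the non-seekable handle returned by zipfile
    # (see the traceback above), so ask the user to unzip the checkpoint first.
    if zipfile.is_zipfile(checkpoint_path) and sys.version_info < (3, 7):
        raise ValueError(
            "Reading the checkpoint directly from the .zip requires Python >= 3.7; "
            "please unzip it and pass release/mp_rank_00/model_optim_rng.pt instead."
        )
```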
<|||||>We might also start saying Transformers requires Python 3.7 or above since Python 3.6 is at the end of its life cycle.<|||||>oh, cool! thanks, @sgugger
so should we just change
https://github.com/huggingface/transformers/blob/8481ecefbd7e701bc061b321cb1695d16eac95a9/setup.py#L135
to 3.7.0? then this Issue will get auto-resolved.
@LysandreJik - is now a good time? |
transformers | 14,081 | closed | [Feature Contribution] Disjunctive Positive Constraint Decoding (adding `force_tokens` to `model.generate()`) | # π Feature request
<!-- A clear and concise description of the feature proposal.
Please provide a link to the paper and code in case they exist. -->
**_"Disjunctive Positive Constraint Decoding"_** Algorithm proposed by a recent paper: [_Guided Generation of Cause and Effect_](https://www.ijcai.org/proceedings/2020/0502.pdf)
Currently the `model.generate()` method limits the user to just the highest-probability outputs. Here, the user is able to force diverse outputs by forcing the model to include diverse tokens across multiple generations.
This method is called "Disjunctive Positive Constraint Decoding", and it forces the `model.generate()` process to generate sequences with the highest probabilities under the constraint of needing to include a set of provided tokens.
This **"disjunctive"** method is powerful in that it can handle **lemmatized forms of the forced tokens.** For instance, when asking the model to autoregressively generate a completion for "Babies cry because" and wanting to force the generation to include the word "lonely", it can induce the model to generate sequences like "Babies cry because they are lonely", as well as "Babies cry because of their loneliness".
I think that this could be implemented as:
```
model.generate(force_tokens=[["lonely", "loneliness", ...], ["happy", "happiness", ...]])
```
where the input to `force_tokens` is a 2D array, where each 1D array is a list of different forms of the desired token.
Otherwise it could be:
```
model.generate(force_tokens=["lonely", "happy"])
```
but in this case the transformers library would need to have a built-in lemmatization engine, which I think might be something better left for the practitioner to figure out (i.e. go with the first method instead).
## Motivation
Diversity in the outputs of LM sequence generation has always been an active problem. Though diversity-inducing methods have usually involved some dual implementation with a VAE or a modified training scheme, this feature would allow the practitioner to induce diversity in a very _controllable_ way.
A large pretrained language model probably has the capacity to generate all kinds of different expressions given an input, but usually the generation gets limited to the most probable outputs. Clearly one solution is to use sampling instead, but this brings back the problem of controllability. This level of control is extremely useful in model implementations that aim to learn syntactic transformations that need to preserve certain entities, or in QA verbalizers where we have pre-existing knowledge of what the answer should be.
Instead of making the model generate many sequences and filtering them for the desired ones, this would allow us to force it to generate an output that we want, which a large LM probably can do well; even if it can't figure out a way, we can just filter out the low-probability outputs based on a threshold.
<!-- Please outline the motivation for the proposal. Is your feature request
related to a problem? e.g., I'm always frustrated when [...]. If this is related
to another GitHub issue, please link here too. -->
## Your contribution
I'm happy to submit a PR for the full implementation if there aren't any objections to this feature.
But I do believe I should get some feedback on this idea before proceeding with an implementation, since it's not exactly clear what the best way to introduce this functionality to the library is.
<!-- Is there any way that you could help, e.g. by submitting a PR?
Make sure to read the CONTRIBUTING.MD readme:
https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md -->
| 10-20-2021 10:54:57 | 10-20-2021 10:54:57 | cc @patrickvonplaten @Narsil <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>This went under the radar sorry about this.
I'll let patrick discuss actual inclusion within transformers, but FYI, we're enabling generic `logits_processor` which should enable you to arbitrarily reassign logits during generation. https://github.com/huggingface/transformers/pull/12219
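For reference, a bare-bones skeleton of such a processor (a rough sketch of the interface only; the class name and the naive boosting rule are made up, not part of the library):
```python
import torch
from transformers import LogitsProcessor

class ForceTokensLogitsProcessor(LogitsProcessor):
    """Toy example: add a fixed bonus to the scores of a whitelist of token ids."""

    def __init__(self, force_token_ids, bonus=5.0):
        self.force_token_ids = force_token_ids
        self.bonus = bonus

    def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor) -> torch.FloatTensor:
        scores[:, self.force_token_ids] += self.bonus
        return scores
```
An instance like this can then be passed to `generate(...)` via the `logits_processor` argument (wrapped in a `LogitsProcessorList`), which is what the linked PR enables.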
If you could create your implementation framed as a `LogitsProcessor` type of objects, that would make inclusion super simple (and also even if it does not get merged, usage should be quite smooth).<|||||>Very much agree with @Narsil - it would be great if you could add a simple `LogitsProcessor` class for you decoding method. This way we can surely merge it to `master`<|||||>Thanks for reviewing this thread @patrickvonplaten @Narsil
Though it'd be ideal if it can be solved with a simple custom `LogitsProcessor`, it seems like this problem requires at least a dedicated beam search function (`LexicallyConstrainedBeamSearch` / `DisjunctivlyConstrainedBeamSearch`).
Upon further research I realized that similar features already exist in Fairseq and Sockeye.
**Fairseq** implementation is introduced by this [readme](https://github.com/pytorch/fairseq/tree/main/examples/constrained_decoding) and the implementation is here ([LexicallyConstrainedBeamSearch](https://github.com/pytorch/fairseq/blob/main/fairseq/search.py#L210))
**Sockeye's** implementation is [here](https://github.com/awslabs/sockeye/blob/2bc83bd1580e8fa49acd329072c3b7ea9e400276/sockeye/lexical_constraints.py).
These implementations are based on mainly the following papers:
[Fast Lexically Constrained Decoding with Dynamic Beam Allocation for
Neural Machine Translation](https://arxiv.org/abs/1804.06609)
[Improved Lexically Constrained Decoding for Translation and Monolingual Rewriting](https://aclanthology.org/N19-1090/)
I feel like the fact that implementations exist in other major text generation libraries hints at the importance of such a function and similar ideas about "whitelisting" certain tokens (as opposed to blacklisting with bad_token_ids) have been discussed before in huggingface forums.
I think this is even more worthy of a cause since what I had proposed with this **"Disjunctive Positive Constraint Decoding"** approach is one step above all the above implementations in that it can handle lemmatized constraints.
For example, users using **Sockeye** or **Fairseq** can only force the model generation to include the word "rain" and will end up prevent it against generating the word "raining". On the other hand, this **disjunctive** approach is able to instead say "generate one of {rain, raining}" for better intuitive use of the function.
As one can imagine, implementing even just the simple lexically constraint approach found in fairseq and Sockeye requires a dedicated beam search function and it's much more complex than boosting / reducing logits at each step with a custom `LogitsProcessor`.
I'm wondering if such an approach makes the scale of this implementation too large and complex for merging to master. I'm personally more than willing to write the full implementation, with the boosted confidence since half the work is done with other libraries having similar implementations already.
<|||||>Hey @cwkeam,
I'm taking a look into this now (finally have some free time). Having looked at the papers, I think it actually makes sense to incorporate this generation method. @LysandreJik @sgugger @patil-suraj @yjernite @thomwolf what do you think? <|||||>@cwkeam,
One quick question. There seems to be a blog post explaining how to do constrained decoding by just using a `LogitsProcessor` class ([code](https://colab.research.google.com/drive/1ezT24sogpVyr2HJLOvXHzjv61JZJ1gMT?usp=sharing), [blog post](https://towardsdatascience.com/the-power-of-constrained-language-models-cf63b65a035d)). Could you explain why this is not sufficient for this case and what improvements we would see by implementing a whole new generation method? <|||||>Hi @patrickvonplaten, thanks for getting back!
From what I've looked into thus far, the following are the reasons I can think of right now. It mainly comes down to the fact that the scope of the `LogitsProcessor` is just one step in the generation process and outputs scores across the vocabulary for just the next step, while this constraint decoding method requires ranking, grouping, and filtering of stateful beams (tracking constraints satisfied by each beam).
## 1. Flexible Placement of Constraints
At least with just looking at the blog, the implementations provided are rather very simple in nature, which don't show the capacity needed for this decoding method. The code in the blog post is along the lines of:
```
among the next probable tokens, choose only those that have even characters.
if current_token.startsWith("a"): next token has to start with "b"
if current_token.startsWith("b"): next token has to start with "c"
else: next token has to start with "a"
```
The immediate problem is that this can't control for where in the sequence the generation satisfies the constraint.
In a case where you want a certain word to be included in the output, given that the scope of a `LogitsProcessor` is just **one step** of the generation process, I don't see how one could force that word to appear at an appropriate position in some future step.
## 2. Supporting Multiple Constraints
Not only should this method be able to force those tokens at the most appropriate spot in the sequence (which is unknown at an arbitrary step in the process), it should also handle the most appropriate constraint at a time (equally unknown at an arbitrary step).
If there are `n` constraints (`{c1, c2, ..., cn}`) and `T` tokens `{t1, t2, ..., tT}` in a gold output, we're essentially finding the most probable position `i` that satisfies constraint token `j`, and the most appropriate tokens for the rest of the positions. I personally can't see how this can be solved by only changing the scores over the vocabulary for step `t+1` at step `t` of the generation (the scope of `LogitsProcessor`).
Essentially, we need a more global view of the sequence at each step of the generation.
## 3. Solution With Constraint States and Ranking & Filtering Beams Throughout The Generation
From [Improved Lexically Constrained Decoding for Translation and Monolingual Rewriting](https://aclanthology.org/N19-1090.pdf)
"The implementation of positively constrained decoding comprises two key pieces: tracking which of the supplied constraints each hypothesis has already generated, and ensuring progress through the constraints by dynamically allocating the beam to hypotheses that have generated different numbers of them.
...
The use of tries for recording constraint state, and thereby offsetting certain corner cases.
Each time a constraint is completed, the number is decremented, and nodes of the trie can be trimmed when they lead only to paths ending in zero counts.
...
In summary, we represent all the constraints as a compact trie. Each hypothesis in the decoder beam has its version of the trie. The set of active states in each hypothesis' trie tracks all suffixes of the target words that match against the constraint trie. When a constraint is generated, its counter is decremented and zero paths are pruned."
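To make that concrete, here is a toy sketch of a per-hypothesis constraint state (my own simplification, not the trie-based implementation the paper describes, and the token ids are made up):
```python
class PhraseConstraintState:
    """Tracks how far one beam hypothesis has progressed through a single forced phrase."""

    def __init__(self, token_ids):
        self.token_ids = token_ids  # the forced phrase, as token ids
        self.position = 0           # how many of its tokens were just emitted in a row

    @property
    def is_complete(self):
        return self.position == len(self.token_ids)

    def advance(self, next_token_id):
        if self.is_complete:
            return True
        if next_token_id == self.token_ids[self.position]:
            self.position += 1  # progress on the constraint
        else:
            # a mismatch resets partial progress (restart if the token matches the start)
            self.position = 1 if next_token_id == self.token_ids[0] else 0
        return self.is_complete


state = PhraseConstraintState(token_ids=[504, 88])
state.advance(504)
print(state.advance(88))  # True: the phrase has been completed by this hypothesis
```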
From the existing [fairseq implementation](https://github.com/pytorch/fairseq/blob/main/fairseq/search.py#L416) (which doesn't have the additional feature of the "disjunctive" constraint that I'm proposing)
```
# STEP 3: Compute the "bank" for each candidate. This is the
# number of constraints it's generated. We need this so that
# we can do round-robin allocation of the beam across these
# banks. If C is the number of constraints, we select the best
# item in bank C, then the best in bank C-1, etc, followed by
# the 2nd-best in bank C, the 2nd-best in bank C-1, etc, and so
# on, until the maximum beam size. We accomplish this by
# creating a sort key and striping across the banks.
# STEP 5: Remove duplicates. The topk calls (overall and
# per-row) plus the per-row generation of constraints will
# produce duplicates. Here we remove them.
# STEP 6: Assign IDs round-robin across banks, sort, and
# truncate. Now that the candidates are sorted by (bank,
# score) and uniqed, we dynamically allocate the {beam_size}
# beam by striping across the candidates. These stripes will
# be used as sort keys to do round-robin selection. This is
# accomplished in a single pass with offsets. Sorting by
# highest-banks (furthest-along hypotheses) first ensures
# progress through the constraints.
#
# e.g., BANKS: 3 3 3 2 2 2 2 1 1 1 0 0
# OLD STRIPES: 0 1 2 0 1 2 3 0 1 2 0 1
# NEW STRIPES: 0 1+4 2+8 0+1 1+5 2+9 3+11 0+2 1+6 2+10 0+3 1+7
# = 0 5 10 1 6 11 13 2 7 12 3 8
#
# Sorting by this then gives the following banks:
#
# 3 2 1 0 3 2 1 0 3 2 1 2
#
# We'll take the top {beam_size} of these.
```
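The "striping" arithmetic in steps 3 and 6 is easier to see with a tiny standalone example (my own re-implementation of the idea in the comment above, not the fairseq code; my offset formula differs slightly from theirs but produces the same round-robin ordering):

```python
# Toy illustration of round-robin allocation across "banks" (number of constraints met).
# Candidates are assumed to be grouped by bank and sorted by score within each bank.
banks = [3, 3, 3, 2, 2, 2, 2, 1, 1, 1, 0, 0]
num_banks = max(banks) + 1

# Position of each candidate within its own bank ("old stripes").
seen = {}
old_stripes = []
for b in banks:
    old_stripes.append(seen.get(b, 0))
    seen[b] = seen.get(b, 0) + 1

# Offset each stripe so that stripe 0 of every bank comes before stripe 1 of any bank,
# and within a stripe, higher banks (more constraints met) come first.
new_stripes = [stripe * num_banks + (num_banks - 1 - bank) for stripe, bank in zip(old_stripes, banks)]
order = sorted(range(len(banks)), key=lambda i: new_stripes[i])
print([banks[i] for i in order])   # -> [3, 2, 1, 0, 3, 2, 1, 0, 3, 2, 1, 2]
```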
Let me know what you think!
I'm still very much open to a solution, if possible, using `LogitsProcessor` or a modification of it if needed, since clearly that'd be a much easier way of integrating this functionality into the package.
<|||||>Hey @cwkeam,
Thanks a lot for your very detailed answer! We've discussed this internally and actually think it makes sense to add a new generation method for this.
Ideally this generation method could cover the generation methods proposed in the following papers:
- [Improved Lexically Constrained Decoding for Translation and Monolingual Rewriting](https://aclanthology.org/N19-1090.pdf)
- [Fast Lexically Constrained Decoding with Dynamic Beam Allocation for Neural Machine Translation](https://arxiv.org/abs/1804.06609)
- [Guided Generation of Cause and Effect ](https://www.ijcai.org/proceedings/2020/0502.pdf)
From an implementation point of view I think we should add a new "sub"-generation method here: https://github.com/huggingface/transformers/blob/927f654427833dbf1da03f0cc036eed66f1d2533/src/transformers/generation_utils.py#L2679
It would join the 5 existing "sub"-generation methods then which are:
- `greedy_search`
- `sample`
- `beam_sample`
- `beam_search`
- `group_beam_search`
We might also have to add a new BeamSearcher class here: https://github.com/huggingface/transformers/blob/master/src/transformers/generation_beam_search.py
Think we could call it either `guided_beam_search` or `constrained_beam_search` and add a couple of appropriate input arguments to `generate(...)` to trigger the "sub"-generation function.<|||||>Would you be interested in giving this PR a try? I'm more than happy to help you whenever you're stuck!
Otherwise I would probably be able to find some time to give it a try in 1-2 months myself.<|||||>Hi @patrickvonplaten
Thanks for the consideration!
I'd be more than happy to make the full implementation. I'll try to push an initial PR within the next week or so.
<|||||>This sounds great! Please ping me whenever you need any help<|||||>@patrickvonplaten Hi, I'm almost complete with a full, robust implementation with full testing. I had retracted the previous PR because I felt that I submitted it a bit prematurely. Will you reopen this issue? I'm working on the documentation now.<|||||>If I want to constrain my generated output to a certain set of characters, how should I do that?
For example, [A, B, C]. |
transformers | 14,080 | closed | Add SEW mappings to AutoTokenizer and AutoFeatureExtractor | # What does this PR do?
This is to support SEW and SEW-D in the speech recognition scripts. | 10-20-2021 10:54:40 | 10-20-2021 10:54:40 | Resolved by https://github.com/huggingface/transformers/pull/14079 |
transformers | 14,079 | closed | [ASR] Make speech recognition example more general to load any tokenizer | # What does this PR do?
Many Wav2Vec2-like models don't implement their own tokenizer. For such models, we need to rely on `tokenizer_class` in the config to load the correct tokenizer.
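A rough sketch of the loading logic this implies (simplified relative to the actual example script; the helper function name is my own):

```python
# Sketch: fall back to the tokenizer class named in the config when the model
# architecture has no dedicated tokenizer of its own.
import transformers
from transformers import AutoConfig, AutoTokenizer


def load_tokenizer(model_name_or_path):
    config = AutoConfig.from_pretrained(model_name_or_path)
    tokenizer_class = getattr(config, "tokenizer_class", None)
    if tokenizer_class is not None:
        # e.g. a SEW checkpoint pointing at Wav2Vec2CTCTokenizer
        return getattr(transformers, tokenizer_class).from_pretrained(model_name_or_path)
    return AutoTokenizer.from_pretrained(model_name_or_path)
```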
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 10-20-2021 10:53:43 | 10-20-2021 10:53:43 | |
transformers | 14,078 | closed | gramformer installation error. | I am using VS Code for **gramformer** with Python 3.7 and pip version **pip==20.1.1**, and used the link below.
**pip install git+https://github.com/PrithivirajDamodaran/Gramformer.git**
I am getting this error. **How do I resolve it?**
ERROR: lm-scorer 0.4.2 has requirement transformers<3.0.0,>=2.9.0, but you'll have transformers 4.11.3 which is incompatible. | 10-20-2021 10:28:31 | 10-20-2021 10:28:31 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 14,077 | closed | Fix assert in src/transformers/data/datasets/language_modeling.py | # What does this PR do?
Replace assertion in src/transformers/data/datasets/language_modeling.py with ValueError
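For illustration, the general shape of such a change looks like the following (a generic example, not the exact lines touched by this PR):

```python
import os

file_path = "train.txt"  # example input

# Before: the check disappears under `python -O` and raises a bare AssertionError
assert os.path.isfile(file_path), f"Input file path {file_path} not found"

# After: always checked, and raises a more descriptive exception type
if not os.path.isfile(file_path):
    raise ValueError(f"Input file path {file_path} not found")
```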
<!-- Remove if not applicable -->
Related to (#12789)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger | 10-20-2021 09:53:21 | 10-20-2021 09:53:21 | |
transformers | 14,076 | closed | Fix test_configuration_tie in FlaxEncoderDecoderModelTest | # What does this PR do?
Fix the `test_configuration_tie` in `FlaxEncoderDecoderModelTest`.
The `test_configuration_tie` in `FlaxEncoderDecoderModelTest` would fail, as shown in [run_tests_flax](https://app.circleci.com/pipelines/github/huggingface/transformers/29230/workflows/106da32f-9f2c-4cb0-99fc-dad7316701ac/jobs/292440). See below.
`FlaxEncoderDecoderModel` didn't have `encoder` and `decoder` - they are inside `FlaxEncoderDecoderModule`, and need an indirect way to access them.
However, it might be too much to add these just for a test without other interesting use case.
If we don't want to add this to `FlaxEncoderDecoderModel`, we could probably move the changes to be inside `test_configuration_tie`.
BTW, this test is `@slow`, so won't be run by CircleCI. I saw it is mentioned `CircleCI does not run the slow tests, but github actions does every night!` in [How to contribute to transformers?](https://huggingface.co/transformers/contributing.html).
So `test_configuration_tie` failed on github actions, and I am wondering when a slow test fails, what are the actions? Is it only for Hugging Face internal?
(I finally found the report under GitHub actions tab)
## CircleCI test results
```
def _check_configuration_tie(self, model):
> assert id(model.decoder.config) == id(model.config.decoder)
E AttributeError: 'FlaxEncoderDecoderModel' object has no attribute 'decoder'
``` | 10-20-2021 08:03:01 | 10-20-2021 08:03:01 | Thanks a lot for fixing the test! The implemented approach is clean and makes total sense. However, I'm not really in favor of exposing the encoder **module** this way. The reason is the following. In PyTorch callling `model.encoder` (of `model = EncoderDecoderModel(...)`) gives one the encoder class with all the weights and `PreTrainedModel` functions attached. However this would not be the case in Flax. `model.encoder.params` would not work since `model.encoder` is a Flax **module** not a model. It's not really possible to retrieve the Flax **model** actually in the Flax encoder decoder design IMO, so I would prefer to just not add such properties and rather change the config test.
@patil-suraj - could you take a look here as well?<|||||>I agree with @patrickvonplaten , especially the exposure here is only for the testing purpose.
By `change the config test`, do you mean we still want to keep the testing logic of `test_configuration_tie`, but getting the encoder/decoder modules inside it rather than exposing them in `FlaxEncoderDecoderModel`? (I am OK with this option.)
P.S. Currently, `FlaxVisionEncoderDecoderModelTest` doesn't have this test after our discussion in another thread.
https://github.com/huggingface/transformers/pull/13359#discussion_r735464105<|||||>Agree with @patrickvonplaten .
@ydshieh I think we could still keep the test. For this use case, we could use the `bind` method, which makes the flax module stateful so we could directly get the encoder/decoder.
```python3
module = model.module.bind(model.params)
enc_config = module.encoder.config
dec_config = module.decoder.config
```
WDYT @patrickvonplaten?<|||||>>
>
> Agree with @patrickvonplaten .
>
> @ydshieh I think we could still keep the test. For this use case, we could use the `bind` method, which makes the flax module stateful so we could directly get the encoder/decoder.
>
> ```python
> module = model.module.bind(model.params)
> enc_config = module.encoder.config
> dec_config = module.decoder.config
> ```
>
It works, thank you @patil-suraj .
https://app.circleci.com/pipelines/github/huggingface/transformers/29684/workflows/1392381e-9fbe-488d-9164-0bd4add6b393/jobs/299243
<|||||>I let you merge if you're happy with the PR @patil-suraj :-) |
transformers | 14,075 | closed | Add missing autocast() in Trainer.prediction_step() | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
On prediction_step(), where `has_labels == True` and `self.use_amp == True`, it does not use `autocast()` when calling `self.compute_loss`, which triggers `RuntimeError: Input type (torch.cuda.FloatTensor) and weight type (torch.cuda.HalfTensor) should be the same`. This commit fixes the issue.
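A minimal sketch of the change being described (a paraphrase of the relevant branch, not the literal diff):

```python
# Paraphrase of the branch in question: the loss computation must also run under autocast.
from torch.cuda.amp import autocast


def prediction_loss(trainer, model, inputs, has_labels: bool, use_amp: bool):
    if not has_labels:
        return None
    if use_amp:
        with autocast():
            loss, outputs = trainer.compute_loss(model, inputs, return_outputs=True)
    else:
        loss, outputs = trainer.compute_loss(model, inputs, return_outputs=True)
    return loss.mean().detach()
```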
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 10-20-2021 07:05:20 | 10-20-2021 07:05:20 | |
transformers | 14,074 | closed | Replace assertions in src/transformers/data/datasets/language_modeling.py | # What does this PR do?
<!--
-->
The PR replaces the assertions in src/transformers/data/datasets/language_modeling.py with ValueError exceptions.
<!-- Remove if not applicable -->
Fixes # (#12789)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger | 10-20-2021 06:40:47 | 10-20-2021 06:40:47 | |
transformers | 14,073 | closed | ignore_mismatched_sizes do not work propoerly | - `transformers` version: 4.11.3
- Platform: Linux-4.19.117.bsk.5-amd64-x86_64-with-debian-10.6
- Python version: 3.7.3
- PyTorch version (GPU?): 1.8.1+cu111 (True)
- Tensorflow version (GPU?): 2.3.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
I'm trying to load a pretrained pytorch Bert model with a different type_vocab_size with the following code:
```python
from transformers import AutoConfig, AutoModel
name = 'bert-base-uncased'
config = AutoConfig.from_pretrained(name)
config.type_vocab_size = 5
model = AutoModel.from_pretrained(name, config = config, ignore_mismatched_sizes = True)
```
and got Runtime Error:
```
Traceback (most recent call last):
File "a.py", line 7, in <module>
model = AutoModel.from_pretrained(name, config = config, ignore_mismatched_sizes = True)
File "/usr/local/lib/python3.7/dist-packages/transformers/models/auto/auto_factory.py", line 419, in from_pretrained
return model_class.from_pretrained(pretrained_model_name_or_path, *model_args, config=config, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/transformers/modeling_utils.py", line 1429, in from_pretrained
_fast_init=_fast_init,
File "/usr/local/lib/python3.7/dist-packages/transformers/modeling_utils.py", line 1576, in _load_state_dict_into_model
raise RuntimeError(f"Error(s) in loading state_dict for {model.__class__.__name__}:\n\t{error_msg}")
RuntimeError: Error(s) in loading state_dict for BertModel:
size mismatch for bert.embeddings.token_type_embeddings.weight: copying a param with shape torch.Size([2, 768]) from checkpoint, the shape in current model is torch.Size([5, 768]).
```
It seems `ignore_mismatched_sizes` does not work properly. When debugging the code, I found that in the class function `_load_state_dict_into_model`, the `model_key` is not generated correctly (e.g. it should be `embeddings.word_embeddings.weight` but is `bert.bert.embeddings.word_embeddings.weight` instead).
https://github.com/huggingface/transformers/blob/3fefa292c1c419f0c4c3e2697cdd94cafaeb4b66/src/transformers/modeling_utils.py#L1516
https://github.com/huggingface/transformers/blob/3fefa292c1c419f0c4c3e2697cdd94cafaeb4b66/src/transformers/modeling_utils.py#L1518
I tried swapping the above two lines and the code works fine. Is this a bug, and is swapping them the proper way to fix it?
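In the meantime, one workaround sketch that avoids `ignore_mismatched_sizes` entirely is to load the checkpoint as-is and swap in a larger token-type embedding afterwards (illustrative only; how you initialize the extra rows is up to you):

```python
import torch.nn as nn
from transformers import AutoModel

model = AutoModel.from_pretrained("bert-base-uncased")
old = model.embeddings.token_type_embeddings          # shape (2, hidden_size)
new = nn.Embedding(5, old.embedding_dim)
new.weight.data[:2] = old.weight.data                  # keep the pretrained rows
model.embeddings.token_type_embeddings = new
model.config.type_vocab_size = 5
```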
| 10-20-2021 05:36:23 | 10-20-2021 05:36:23 | As seen on the PR linked above, this may fix the issue for a model with no task-specific head, but it will fail for others, so the fix is a bit more convoluted, probably.
**Edit:** I dived into the code a bit more (that function is so confusing!) and it is indeed the right order.<|||||>Thank you for your reply. For now I just build the original model and replace `model.embeddings.token_type_embeddings` with a new `nn.Embedding`, and it seems to work in my case. I'm looking forward to having this bug fixed.<|||||>This does not work for Blip2ForConditionalGeneration
I changed the size of the vocab in the configuration file from 50272 to 21128 and set ignore_mismatched_sizes=True, but it raises an error as follows:
@sgugger
I have no idea. Please help!<|||||>It seems that there are two model parameter files, which are loaded in batches, whereas model_state_dict loads them all, so a KeyError occurs. This is a bug. I hope we have time to fix it @sgugger<|||||>cc @younesbelkada <|||||>Will definitely look into it
@lixinliu1995 can you provide a reproducible script? Thanks!<|||||>I am running into the same problem.
transformers | 14,072 | closed | replace assert with exception in src/transformers/utils/model_pararallel_utils.py | # What does this PR do?
<!--
-->
The PR replaces the assertions in src/transformers/utils/model_parallel_utils.py with ValueError exceptions. I kept the error messages from the original code. @sgugger
<!-- Remove if not applicable -->
Fixes # (#12789)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
| 10-20-2021 03:41:56 | 10-20-2021 03:41:56 | Thanks a lot! |
transformers | 14,071 | closed | Trainer._load_rng_state() path fix (#14069) | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #14069
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 10-20-2021 01:12:15 | 10-20-2021 01:12:15 | |
transformers | 14,070 | closed | AttributeError: 'BertTokenizer' object has no attribute 'encode_plus' | Hello,
I installed and uninstalled the pytorch_pretrained_bert package a couple of times via pip install and then via .whl files, but it always gives me this error.
Thanks | 10-19-2021 23:18:35 | 10-19-2021 23:18:35 | Hello! You should install `transformers`, not `pytorch_pretrained_bert` |
transformers | 14,069 | closed | Trainer._load_rng_state() misbehavior | Resuming from checkpoints has been giving me a warning I did not expect.
```
Didn't find an RNG file, if you are resuming a training that was launched in a distributed fashion, reproducibility is not guaranteed.
```
even though I do see a `rng_state.pth` file in the checkpoint directory. A little digging turned up
https://github.com/huggingface/transformers/blob/3892d09f4f55607399ac6b9df14dbe9b3e92a3f7/src/transformers/trainer.py#L1513-L1514
which seems to prepend the checkpoint path to the filename twice for the `isfile()` check. I think this was a typo.
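In other words, the check looks roughly like this (an illustrative paraphrase, not the literal Trainer source):

```python
import os

checkpoint = "output/checkpoint-500"   # example checkpoint directory

rng_file = os.path.join(checkpoint, "rng_state.pth")
# Suspect check: the checkpoint path gets joined a second time, so the file is never found
buggy_missing = not os.path.isfile(os.path.join(checkpoint, rng_file))
# Check that was presumably intended
fixed_missing = not os.path.isfile(rng_file)
```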
| 10-19-2021 21:16:12 | 10-19-2021 21:16:12 | Oh indeed! Do you want to make a PR to fix it, since you found the problem?<|||||>If that PR doesn't look right, then feel free to post another and I can close mine out. Thanks! |
transformers | 14,068 | closed | Which is better ? | I want to train a GPT-J-6B model on an Arabic dataset, but I am facing an important question.
Should I train the model from scratch or fine-tune it on the new language? Which is better? | 10-19-2021 20:34:35 | 10-19-2021 20:34:35 | You should use the [forums](https://discuss.huggingface.co/) to discuss this, as we keep issues for bugs and feature requests only.<|||||>@MohamedAliRashad did you ask it on the forum?
If yes, can you provide the link?
thanks.<|||||>@srulikbd
I found similar questions that were either unanswered or recommended training from scratch, but I know that is impossible for me because it requires a huge amount of computation.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 14,067 | closed | Add ASR colabs | # What does this PR do?
Adds ASR colabs to the READMEs.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 10-19-2021 18:24:17 | 10-19-2021 18:24:17 | Wait until those PRs are merged:
- https://github.com/huggingface/notebooks/pull/98
- https://github.com/huggingface/notebooks/pull/97<|||||>cc @anton-l - feel free to go into the PR to add AudioClassification right away |
transformers | 14,066 | closed | Add QDQBert model and quantization examples of SQUAD task | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
This PR includes:
1. Add support of Q/DQ BERT model based on HF BERT model.
(**src/transformers/models/qdqbert/**)
QDQBERT model add fake quantization operations (pair of QuantizeLinear/DequantizeLinear ops) to:
- linear layer inputs and weights
- matmul inputs
- residual add inputs
in BERT model.
QDQBERT model will be able to load from any checkpoint of HF BERT model, and perform Quantization Aware Training/Post Training Quantization with the support from [PyTorch-Quantization toolkit](https://github.com/NVIDIA/TensorRT/tree/master/tools/pytorch-quantization).
2. Add an example of SQUAD tasks finetuned by the QDQBERT model and inferenced by TensorRT
(**transformers/examples/research_projects/quantization-qdqbert/**)
In the example, we use qdqbert model to do Quantization Aware Training from pretrained HF BERT model on SQUAD task. Then [TensorRT](https://github.com/NVIDIA/TensorRT) can run the inference of the generated ONNX model for optimal INT8 performance out-of-the-box.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
A related discussion on this topic [Issue 10639](https://github.com/huggingface/transformers/issues/10639)
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 10-19-2021 17:15:45 | 10-19-2021 17:15:45 | @LysandreJik @sgugger Thanks!<|||||>Some CIs failed since QDQBERT model needs the dependency of Pytorch Quantization Toolkit (https://github.com/NVIDIA/TensorRT/tree/master/tools/pytorch-quantization). This dependency is good to go with simple one-line installation as:
`pip install pytorch-quantization --extra-index-url https://pypi.ngc.nvidia.com
`
I'm thinking of either adding the one line installation change to CI or adding quantization toolkit installation to transformers installation (or any other suggestions which are smooth and neat for the HF community) if we want to upstream the model. @LysandreJik @sgugger
Thanks!<|||||>As for the CI failure of check_code_quality, `import pycuda.autoinit` is needed, even if not used, so to initialize CUDA environment. Any suggestions to resolve this?
For the other two check failures, I'm not super sure about what is the root cause. Glad to get insights about how to fix that.<|||||>Thanks for working on this! It seems the code quality is not yet passing, could you run the quality scripts? You can do so with the following, from the root of your clone:
```
pip install -e ".[quality]"
make fixup
```<|||||>> Thanks for working on this! It seems the code quality is not yet passing, could you run the quality scripts? You can do so with the following, from the root of your clone:
>
> ```
> pip install -e ".[quality]"
> make fixup
> ```
Thanks for the comments! This is actually somewhere I want to check.
The code quality failure is from the TensorRT inference script `import pycuda.autoinit`. This line of code is needed, but not used, to initialize CUDA environment. Is there a way that I can keep this line of code in the script and pass the code quality test? @patrickvonplaten @LysandreJik @sgugger <|||||>Think there is just one small clean-up left to do:
```
examples/research_projects/quantization-qdqbert/evaluate-hf-trt-qa.py:29:1: F401 'pycuda.autoinit' imported but unused
```<|||||>Think there is another line to clean up :-) `examples/research_projects/quantization-qdqbert/evaluate-hf-trt-qa.py:29:1: F401 'pycuda.autoinit' imported but unused`<|||||>> Think there is another line to clean up :-) `examples/research_projects/quantization-qdqbert/evaluate-hf-trt-qa.py:29:1: F401 'pycuda.autoinit' imported but unused`
@patrickvonplaten Is there a workaround for it? The `pycuda.autoinit` is imported for cuda environment setup so it is needed in the script. Thanks!<|||||>You can add a a comment at the end of the import line `# noqa: F401` to have it be ignored by our styler. To check locally if the test will pass or not, just run `make quality`.
Note that with the merge of #14431, you will need to rebase your PR on master and replace the lines
```
self.init_weights()
```
by
```
# Initialize weights and apply final processing
self.post_init()
```
Let us know if you need any help!<|||||>Rebase the PR but not sure why there is the model templates runner CI failure now.<|||||>Thanks again for all your work on this! |
transformers | 14,065 | closed | Unexpected generated token probabilities derived from scores | ## Environment info
- `transformers` version: 4.11.3
- Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.12
- PyTorch version (GPU?): 1.9.0+cu111 (False)
- Tensorflow version (GPU?): 2.6.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@patrickvonplaten
## Information
I'm generating tokens from the GPT2 model. I'm trying to get the log probability of each generated token conditional on the previously generated tokens, i.e., the same numbers generated by [logprobs](https://beta.openai.com/docs/api-reference/completions/create#completions/create-logprobs) in the OpenAI API for the sampled tokens.
Here is the code I'm using to get the "scores" for the generation:
```python
from transformers import AutoModelForCausalLM
from transformers import AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("gpt2", return_dict_in_generate=True)
tokenizer = AutoTokenizer.from_pretrained("gpt2")
input_ids = tokenizer("Today is a nice day", return_tensors="pt").input_ids
generated_outputs = model.generate(input_ids, num_beams=2, max_length=8, output_scores=True)
```
The documentation for scores for BeamSearchDecoderOnlyOutput is the following:
"scores (tuple(torch.FloatTensor) optional, returned when output_scores=True is passed or when config.output_scores=True) β Processed beam scores for each vocabulary token at each generation step. Beam scores consisting of log softmax scores for each vocabulary token and sum of log softmax of previously generated tokens in this beam . (max_length-input_ids.shape[-1],)-shaped tuple of torch.FloatTensor with each tensor of shape (batch_size * num_beams * num_return_sequences, config.vocab_size))." (https://huggingface.co/transformers/internal/generation_utils.html#beamsearchoutput)
And there is a discussion post that says the following about scores:
"So in the case of beam_search, the scores correspond to the log probability of all words + the log probability all previous scores in your beam. So regarding the image this means that scores[0][0] will correspond to the log probabilities of all possible words in the vocabulary, so assuming your vocab would only consist of dog, nice, car and the probs are the same as in the diagram, the values would correspond to log(0.4), log(0.5), log(0.1). Scores[1][0] then corresponds to the chosen word of time step one (e.g. dog) and all possible values again, se: log(0.4) + log(0.05), log(0.4) + log(0.05), log(0.4) + log(0.9) using the diagram above again." (https://discuss.huggingface.co/t/showing-individual-token-and-corresponding-score-during-beam-search/3735/3)
Using that documentation, I try to get the probabilities, but end up with probabilities outside of [0, 1]:
```python
import numpy as np
gen_ids = generated_outputs["sequences"][0, input_ids.shape[-1]:]
logprob_beam0_gen_token0 = generated_outputs["scores"][0][0, gen_ids[0]].tolist()
logprob_beam0_gen_token1 = generated_outputs["scores"][1][0, gen_ids[1]].tolist() - logprob_beam0_gen_token0
logprob_beam0_gen_token2 = generated_outputs["scores"][2][0, gen_ids[2]].tolist() - logprob_beam0_gen_token0 - logprob_beam0_gen_token1
probs = np.exp([logprob_beam0_gen_token0, logprob_beam0_gen_token1, logprob_beam0_gen_token2])
# (0.15271044542780696, 2.6683572850297056e-05, 824.6099669421758)
probs[0], probs[1], probs[2]
```
## To reproduce
See this colab:
https://colab.research.google.com/drive/11rRAFuNycLLDiDDwU02mBgXjpBOXCe4P#scrollTo=WwIzz1CQYfgU
## Expected behavior
I would expect the probabilities to be between 0 and 1. | 10-19-2021 17:05:34 | 10-19-2021 17:05:34 | I use a code snippet to explain this, feel free to tell me if I'm wrong:
```
import torch
from transformers import AutoModelForCausalLM
from transformers import AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("gpt2", return_dict_in_generate=True)
tokenizer = AutoTokenizer.from_pretrained("gpt2")
input_ids = tokenizer("Today is a nice day", return_tensors="pt").input_ids
generated_outputs = model.generate(input_ids, num_beams=2, max_length=8, output_scores=True)
gen_ids = generated_outputs["sequences"][0, input_ids.shape[-1]:]
vocab_size = generated_outputs["scores"][0].shape[-1]
print(gen_ids) # tensor([ 11, 475, 314])
# Here we find out at each time-step, which beam each generated id belongs to.
values, indices = torch.topk(generated_outputs["scores"][0].view(-1), k=2)
print(values, (indices % vocab_size), (indices / vocab_size).long()) # tensor([-1.5148, -1.8792]) tensor([329, 11]) tensor([0, 0])
values, indices = torch.topk(generated_outputs["scores"][1].view(-1), k=2)
print(values, (indices % vocab_size), (indices / vocab_size).long()) # tensor([-3.6481, -3.9508]) tensor([475, 262]) tensor([1, 0])
values, indices = torch.topk(generated_outputs["scores"][2].view(-1), k=2)
print(values, (indices % vocab_size), (indices / vocab_size).long()) # tensor([-5.6957, -5.8748]) tensor([314, 340]) tensor([0, 0])
# So we know at time-step 1, the token id `475` belongs to beam 1, others belong to beam 0.
logprob_beam0_gen_token0 = generated_outputs["scores"][0][0, gen_ids[0]]
logprob_beam0_gen_token1 = generated_outputs["scores"][1][1, gen_ids[1]] - logprob_beam0_gen_token0
logprob_beam0_gen_token2 = generated_outputs["scores"][2][0, gen_ids[2]] - logprob_beam0_gen_token0 - logprob_beam0_gen_token1
print(logprob_beam0_gen_token0.exp(), logprob_beam0_gen_token1.exp(), logprob_beam0_gen_token2.exp())
# Outputs:
# tensor(0.1527) tensor(0.1705) tensor(0.1290)
```<|||||>Got it. Thanks!<|||||>See: https://discuss.huggingface.co/t/generation-probabilities-how-to-compute-probabilities-of-output-scores-for-gpt2/3175/15?u=patrickvonplaten
I'll improve the behavior for beam search<|||||>https://github.com/huggingface/transformers/pull/14654 |
transformers | 14,064 | closed | update to_py_obj to support np.number | # What does this PR do?
Update `to_py_obj` function to make it support np.number type.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # See an example below:
>>> tokenizer.decode(110)
'no'
>>> tokenizer.decode(np.array([110])[0])
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/prettymeng/anaconda3/envs/ptuning/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 3179, in decode
return self._decode(
File "/Users/prettymeng/anaconda3/envs/ptuning/lib/python3.8/site-packages/transformers/tokenization_utils_fast.py", line 527, in _decode
text = self._tokenizer.decode(token_ids, skip_special_tokens=skip_special_tokens)
TypeError: Can't convert 110 to Sequence
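For reference, a simplified sketch of the kind of handling this PR adds (the real `to_py_obj` also handles framework tensors and more):

```python
# Simplified sketch of to_py_obj with np.number support (not the exact library code).
import numpy as np


def to_py_obj(obj):
    if isinstance(obj, dict):
        return {k: to_py_obj(v) for k, v in obj.items()}
    if isinstance(obj, (list, tuple)):
        return [to_py_obj(o) for o in obj]
    if isinstance(obj, np.ndarray):
        return obj.tolist()
    if isinstance(obj, np.number):      # the new case: NumPy scalars such as np.int64
        return obj.item()
    return obj
```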
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 10-19-2021 16:34:42 | 10-19-2021 16:34:42 | @sgugger Hi! This is a reopened PR for #14041, could you please check whether the failed tests are related to my PR?<|||||>All good, thanks a lot for your contribution! |
transformers | 14,063 | closed | [WIP] Tail free sampling implementation | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Implements tail free sampling as requested by feature request issue #13784. This pull request currently only contains a PyTorch implementation of tail free sampling because I only know PyTorch and not TensorFlow or Flax and I would not like to waste my time learning those if people do not like this pull request. I can create those implementations on demand.
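For readers unfamiliar with the method, here is a rough standalone sketch of the tail free sampling filter itself (my own illustration of the algorithm with simplified index bookkeeping, not the code in this pull request):

```python
# Rough sketch of tail free sampling as a logits filter (illustration only).
import torch


def tail_free_filter(logits, z=0.95, filter_value=-float("inf")):
    sorted_logits, sorted_indices = torch.sort(logits, descending=True, dim=-1)
    probs = torch.softmax(sorted_logits, dim=-1)
    # Second finite difference of the sorted probability curve, normalized to sum to 1.
    d2 = probs.diff(dim=-1).diff(dim=-1).abs()
    d2 = d2 / d2.sum(dim=-1, keepdim=True)
    cumulative = d2.cumsum(dim=-1)
    # Mark the "tail": positions after the cumulative curvature exceeds z.
    remove_sorted = cumulative > z
    # diff() shortened the axis by 2: always keep the first token, always drop the last.
    remove_sorted = torch.cat(
        [torch.zeros_like(remove_sorted[..., :1]), remove_sorted, torch.ones_like(remove_sorted[..., :1])],
        dim=-1,
    )
    # Map the mask from sorted order back to vocabulary order (same trick as top-p filtering).
    indices_to_remove = remove_sorted.scatter(-1, sorted_indices, remove_sorted)
    return logits.masked_fill(indices_to_remove, filter_value)


# Example: filter a random batch of logits, then sample from the surviving tokens.
logits = torch.randn(2, 50257)
probs = torch.softmax(tail_free_filter(logits, z=0.95), dim=-1)
next_tokens = torch.multinomial(probs, num_samples=1)
```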
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@patrickvonplaten
| 10-19-2021 14:52:16 | 10-19-2021 14:52:16 | Thanks for your PR @vfbd!
Could you maybe give a couple of examples using "GPT2" where tail free sampling works better than `top_k` and `top_p`? :-)<|||||>I do not have a concrete text example at the moment, but I would like to weigh in on the matter.
This request comes from the KoboldAI community, where it was implemented by community request using finetune's fork as the basis. Users have reported to us that it works better for them on longer-running stories. While I no longer have the output, I also had an instance where switching to TFS was beneficial to story quality when a GPT model was drawing in undesirable elements, such as training data unrelated to the story. Having the ability to switch over can nudge the AI toward a more preferable interpretation, so you can continue a story that would otherwise have gotten stuck.
Because it's an existing community currently relying on forks for all the features they have gotten accustomed to and enjoy, it would help us a lot if we could give them the experience and flexibility they desire without having to rely on downstream forks and their maintainers to keep compatibility.
For now our users either run on Finetuneanon's fork, or on the fork related to this pull request so that their features work as expected. We'd like to migrate to the upstream versions and the lack of TFS is currently the main blocker.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 14,062 | closed | Put `load_image` function in `image_utils.py` & fix image rotation issue | # What does this PR do?
1. Put `load_image` function in `image_utils.py`
2. Encountered and fixed the `PIL.Image.open is rotating jpeg images` issue (https://github.com/python-pillow/Pillow/issues/4703); a sketch of the fix follows this list.
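A minimal sketch of what such a `load_image` helper could look like (my own illustration based on the description above, not the exact code in this PR):

```python
import os

import requests
from PIL import Image, ImageOps


def load_image(image):
    """Load a PIL image from a path, URL, or PIL.Image, honoring the EXIF Orientation tag."""
    if isinstance(image, str):
        if image.startswith("http://") or image.startswith("https://"):
            image = Image.open(requests.get(image, stream=True).raw)
        elif os.path.isfile(image):
            image = Image.open(image)
        else:
            raise ValueError(f"Incorrect path or URL: {image}")
    elif not isinstance(image, Image.Image):
        raise ValueError("Expected a path, a URL or a PIL.Image")
    # Apply the EXIF Orientation explicitly so results don't depend on the Pillow version.
    image = ImageOps.exif_transpose(image)
    return image.convert("RGB")
```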
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 10-19-2021 12:46:31 | 10-19-2021 12:46:31 | Which version of Pillow are you using ? transformers doesn't specify a version.
> Pillow 8.0.0 is automatically rotating images upon .open based on EXIF Orientation tag.
Also probably a good time to remove code duplication here.<|||||>```
>>> PIL.__version__
'8.3.1'
```
also, I get rotated mask results from Inference API, which is how I encountered this issue<|||||>I think we need to add some form of test to expose the necessity for this as it's not trivial and might be fixed upstream (as it claims to be).
So we need an image file (can be in `hf-internal-testing` with said exif data) and show that the loading is improper (this test can be entirely separated from the pipeline, here we're testing the `load_image` method, so we can have a separate class of test for this that focuses on specifically this feature.<|||||>@narsil Should I use this opportunity to address ([src](https://github.com/huggingface/transformers/pull/13828#discussion_r721202947)):
> It might be time to create a utils file and put it in there maybe so we can just reuse this code all the time and test it separately. (it should be another PR, just mentioning it here)
If so, should I just make `load_image` a method of [ImageFeatureExtractionMixin](https://github.com/huggingface/transformers/blob/83e5a10603ca902c266e40fc98a01dd8a9b04ac4/src/transformers/image_utils.py#L33-L36) & make Vision pipelines inherit `ImageFeatureExtractionMixin`?<|||||>Just put the function outside the class, it's a simple function it doesn't deserve a mixin.
In another branch I added `src/transformers/audio_utils.py` to host `ffmpeg_read` for instance.
(It's the first thing that popped in mind in terms of location, maybe we can find a better location ultimately, here goal is just to reduce repetition)<|||||>@Narsil please feel free to re-review this PR. I've added the changes as suggested.
Specifically, this test case
https://github.com/huggingface/transformers/blob/dd6d891d90bf789b18f45e901d52d163d7f822a3/tests/test_image_utils.py#L436-L442<|||||>Before merging, there seems to be a few tests failing linked to your PR!<|||||>@LysandreJik thanks for notifying about the failing tests. Solved! There was an issue with the `hf-internal-testing/fixtures_image_utils` I've created |
transformers | 14,061 | closed | Replace assertions with ValueError exceptions | # What does this PR do?
Replaces the assertions in generation_utils.py with ValueError exceptions. I added an error message for the first one that did not exist before, feel free to change it.
Contributes towards fixing issue #12789
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
| 10-19-2021 12:08:52 | 10-19-2021 12:08:52 | |
transformers | 14,060 | closed | [WIP] Verify PT <-> Flax equivalence tests tolerance value | # What does this PR do?
Find why we need 4e-2 for some equivalence tests and see what could be done. | 10-19-2021 11:20:53 | 10-19-2021 11:20:53 | |
transformers | 14,059 | closed | Add Camembert to models exportable with ONNX | # What does this PR do?
I added lines to make Camembert models available for ONNX conversion.
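A quick way to sanity-check the exported model once this is in (a sketch; it assumes the checkpoint was exported with `python -m transformers.onnx --model=camembert-base onnx/` and that `onnxruntime` is installed):
```python
import onnxruntime as ort
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("camembert-base")
session = ort.InferenceSession("onnx/model.onnx")

inputs = tokenizer("J'aime le camembert !", return_tensors="np")
outputs = session.run(None, dict(inputs))
print(outputs[0].shape)  # (batch_size, sequence_length, hidden_size) for the base model
```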
# Issue
This is linked to this conversation #13952
@LysandreJik
| 10-19-2021 08:01:45 | 10-19-2021 08:01:45 | I don't understand why `run_tests_torch` are failing because it was ok before. It seems that it doesn't come from my PR. |
transformers | 14,058 | closed | with op Tile must be a compile-time constant. | TPU tensorflow2.6.0
TFLongformerModel
```python
model.fit(
    train_data, tf.gather(train_label, train_idx),
    validation_data=(val_data, tf.gather(train_label, val_idx)),
    epochs=10,
    batch_size=BATCH_SIZE,
    callbacks=[checkpoint_callback],
)
```
---------------------------------------------------------------------------
InvalidArgumentError Traceback (most recent call last)
<ipython-input-51-69b9068b95d6> in <module>()
21 epochs=10,
22 batch_size=BATCH_SIZE,
---> 23 callbacks=[checkpoint_callback],
24 )
25 # model.load_weights('/content/ckpt_'+str(fold)+'_model.h5')
13 frames
/usr/local/lib/python3.7/dist-packages/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_batch_size, validation_freq, max_queue_size, workers, use_multiprocessing)
1187 logs = tmp_logs # No error, now safe to assign to logs.
1188 end_step = step + data_handler.step_increment
-> 1189 callbacks.on_train_batch_end(end_step, logs)
1190 if self.stop_training:
1191 break
/usr/local/lib/python3.7/dist-packages/keras/callbacks.py in on_train_batch_end(self, batch, logs)
433 """
434 if self._should_call_train_batch_hooks:
--> 435 self._call_batch_hook(ModeKeys.TRAIN, 'end', batch, logs=logs)
436
437 def on_test_batch_begin(self, batch, logs=None):
/usr/local/lib/python3.7/dist-packages/keras/callbacks.py in _call_batch_hook(self, mode, hook, batch, logs)
293 self._call_batch_begin_hook(mode, batch, logs)
294 elif hook == 'end':
--> 295 self._call_batch_end_hook(mode, batch, logs)
296 else:
297 raise ValueError('Unrecognized hook: {}'.format(hook))
/usr/local/lib/python3.7/dist-packages/keras/callbacks.py in _call_batch_end_hook(self, mode, batch, logs)
313 self._batch_times.append(batch_time)
314
--> 315 self._call_batch_hook_helper(hook_name, batch, logs)
316
317 if len(self._batch_times) >= self._num_batches_for_timing_check:
/usr/local/lib/python3.7/dist-packages/keras/callbacks.py in _call_batch_hook_helper(self, hook_name, batch, logs)
351 for callback in self.callbacks:
352 hook = getattr(callback, hook_name)
--> 353 hook(batch, logs)
354
355 if self._check_timing:
/usr/local/lib/python3.7/dist-packages/keras/callbacks.py in on_train_batch_end(self, batch, logs)
1026
1027 def on_train_batch_end(self, batch, logs=None):
-> 1028 self._batch_update_progbar(batch, logs)
1029
1030 def on_test_batch_end(self, batch, logs=None):
/usr/local/lib/python3.7/dist-packages/keras/callbacks.py in _batch_update_progbar(self, batch, logs)
1098 if self.verbose == 1:
1099 # Only block async when verbose = 1.
-> 1100 logs = tf_utils.sync_to_numpy_or_python_type(logs)
1101 self.progbar.update(self.seen, list(logs.items()), finalize=False)
1102
/usr/local/lib/python3.7/dist-packages/keras/utils/tf_utils.py in sync_to_numpy_or_python_type(tensors)
514 return t # Don't turn ragged or sparse tensors to NumPy.
515
--> 516 return tf.nest.map_structure(_to_single_numpy_or_python_type, tensors)
517
518
/usr/local/lib/python3.7/dist-packages/tensorflow/python/util/nest.py in map_structure(func, *structure, **kwargs)
867
868 return pack_sequence_as(
--> 869 structure[0], [func(*x) for x in entries],
870 expand_composites=expand_composites)
871
/usr/local/lib/python3.7/dist-packages/tensorflow/python/util/nest.py in <listcomp>(.0)
867
868 return pack_sequence_as(
--> 869 structure[0], [func(*x) for x in entries],
870 expand_composites=expand_composites)
871
/usr/local/lib/python3.7/dist-packages/keras/utils/tf_utils.py in _to_single_numpy_or_python_type(t)
510 def _to_single_numpy_or_python_type(t):
511 if isinstance(t, tf.Tensor):
--> 512 x = t.numpy()
513 return x.item() if np.ndim(x) == 0 else x
514 return t # Don't turn ragged or sparse tensors to NumPy.
/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/ops.py in numpy(self)
1092 """
1093 # TODO(slebedev): Consider avoiding a copy for non-CPU or remote tensors.
-> 1094 maybe_arr = self._numpy() # pylint: disable=protected-access
1095 return maybe_arr.copy() if isinstance(maybe_arr, np.ndarray) else maybe_arr
1096
/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/ops.py in _numpy(self)
1060 return self._numpy_internal()
1061 except core._NotOkStatusException as e: # pylint: disable=protected-access
-> 1062 six.raise_from(core._status_to_exception(e.code, e.message), None) # pylint: disable=protected-access
1063
1064 @property
/usr/local/lib/python3.7/dist-packages/six.py in raise_from(value, from_value)
InvalidArgumentError: 9 root error(s) found.
(0) Invalid argument: {{function_node __inference_train_function_367359}} Input 1 to node `model/tf_longformer_model/longformer/encoder/layer_._0/attention/self/cond_1/Tile` with op Tile must be a compile-time constant.
XLA compilation requires that operator arguments that represent shapes or dimensions be evaluated to concrete values at compile time. This error means that a shape or dimension argument could not be evaluated at compile time, usually because the value of the argument depends on a parameter to the computation, on a variable, or on a stateful operation such as a random number generator.
[[{{node model/tf_longformer_model/longformer/encoder/layer_._0/attention/self/cond_1/Tile}}]]
[[model/tf_longformer_model/longformer/encoder/layer_._0/attention/self/cond_1]]
[[TPUReplicate/_compile/_13050048924464197292/_9]]
[[TPUReplicate/_compile/_13050048924464197292/_9/_438]]
(1) Invalid argument: {{function_node __inference_train_function_367359}} Input 1 to node `model/tf_longformer_model/longformer/encoder/layer_._0/attention/self/cond_1/Tile` with op Tile must be a compile-time constant.
XLA compilation requires that operator arguments that represent shapes or dimensions be evaluated to concrete values at compile time. This error means that a shape or dimension argument could not be evaluated at compile time, usually because the value of the argument depends on a parameter to the computation, on a variable, or on a stateful operation such as a random number generator.
[[{{node model/tf_longformer_model/longformer/encoder/layer_._0/attention/self/cond_1/Tile}}]]
[[model/tf_longformer_model/longformer/encoder/layer_._0/attention/self/cond_1]]
[[TPUReplicate/_compile/_13050048924464197292/_9]]
[[tpu_compile_succeeded_assert/_1014831673988967065/_10/_397]]
(2) Invalid argument: {{function_node __inference_train_function_367359}} Input 1 to node `model/tf_longformer_model/longformer/encoder/layer_._0/attention/self/cond_1/Tile` with op Tile must be a compile-time constant.
XLA compilation requires that operator arguments that represent shapes or dimensions be evaluated to concrete values at compile time. This error means that a shape or dimension argument could not be evaluated at compile time, usually because the value of the argument depends on a parameter to the computation, on a variable, or on a stateful operation such as a random number generator.
[[{{node model/tf_longformer_model/longformer/encoder/layer_._0/attention/self/cond_1/Tile}}]]
[[model/tf_longformer_model/longformer/encoder/layer_._0/attention/self/cond_1]]
[[TPUReplicate/_compile/_13050048924464197292/_9]]
[[TPUReplicate/_compile/_13050048924464197292/_9/_416]]
(3) Invalid argument: {{function_node __inference_train_function_367359}} Input 1 to node `model/tf_longformer_model/longformer/encoder/layer_._0/attention/self/cond_1/Tile` with op Tile must be a compile-time constant.
XLA compilation requires that operator arguments that represent shapes or dimensions be evaluated to concre ... [truncated]
| 10-19-2021 02:54:22 | 10-19-2021 02:54:22 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>I have the same issue. I am guessing it's because of random attention mechanism of Longformer
<|||||>I have the same issue<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi there! Possibly late for this issue, but an issue with the same error message (on a different model) was fixed [here](https://github.com/huggingface/transformers/issues/18476).
The issue was fixed by setting fixed shapes in the input. |
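To make the "fixed shapes" advice concrete, a sketch (names such as `tokenizer`, `texts`, `labels` and `BATCH_SIZE` stand in for objects from the issue's setup):
```python
import tensorflow as tf

# Pad every example to the same constant length so XLA can compile a static graph
enc = tokenizer(
    texts,
    padding="max_length",
    truncation=True,
    max_length=4096,
    return_tensors="np",
)

train_data = (
    tf.data.Dataset.from_tensor_slices((dict(enc), labels))
    .batch(BATCH_SIZE, drop_remainder=True)  # drop_remainder keeps the batch dimension static too
)
```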
transformers | 14,057 | closed | Add QDQBert model and QAT example of SQUAD task | # What does this PR do?
<!-- Remove if not applicable -->
This PR includes:
1. Add support of Q/DQ BERT model based on HF BERT model.
(**src/transformers/models/qdqbert/**)
QDQBERT model adds fake quantization operations (pairs of QuantizeLinear/DequantizeLinear ops) to:
- linear layer inputs and weights
- matmul inputs
- residual add inputs
in BERT model.
QDQBERT model will be able to load from any checkpoint of HF BERT model, and perform Quantization Aware Training/Post Training Quantization with the support from [PyTorch-Quantization toolkit](https://github.com/NVIDIA/TensorRT/tree/master/tools/pytorch-quantization).
2. Add an example of the SQuAD task fine-tuned with the QDQBERT model and run with TensorRT for inference
(**examples/pytorch/question-answering/QAT-qdqbert/**)
In the example, we use qdqbert model to do Quantization Aware Training from pretrained HF BERT model on SQUAD task. Then [TensorRT](https://github.com/NVIDIA/TensorRT) can run the inference of the generated ONNX model for optimal INT8 performance out-of-the-box.
Also added a module in (**examples/pytorch/question-answering/run_qa.py, trainer_qa.py**) for saving the SQUAD task specific BERT model as ONNX files, for a consistency check with QAT-qdqbert example.
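For context, roughly how the PyTorch-Quantization toolkit inserts those Q/DQ pairs around a linear layer (a sketch with illustrative sizes, not code from this PR):
```python
import torch
from pytorch_quantization import nn as quant_nn
from pytorch_quantization.tensor_quant import QuantDescriptor

input_desc = QuantDescriptor(num_bits=8, calib_method="histogram")  # per-tensor input quantization
weight_desc = QuantDescriptor(num_bits=8, axis=(0,))                # per-channel weight quantization

# QuantLinear behaves like nn.Linear but fake-quantizes its inputs and weights
quant_linear = quant_nn.QuantLinear(
    768, 768, quant_desc_input=input_desc, quant_desc_weight=weight_desc
)

x = torch.randn(1, 128, 768)
y = quant_linear(x)  # the forward pass goes through QuantizeLinear/DequantizeLinear pairs
```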
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
A related discussion on this topic [Issue 10639](https://github.com/huggingface/transformers/issues/10639)
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 10-18-2021 22:05:20 | 10-18-2021 22:05:20 | Hi, this PR includes both the support of QDQBert model and the QAT example of using QDQBert model for SQUAD task.
I'm not sure whether it is the right place to leave the QAT example at examples/pytorch/question-answering/QAT-qdqbert/, since this QAT example will be nicer to compare with regular BERT SQUAD task at examples/pytorch/question-answering/.
Comments are welcome for the QAT example and other parts as well. : ) @LysandreJik<|||||>
> Thanks for your PR. Note that it's hard to review because it include changes from other commits on master (bad rebase?) so it would be better if you could re-open a clean PR from your branch.
>
> Concerning the examples:
>
> 1. I don't think the QAT example should go in the examples maintained by the team, given it introduces a lot of new code no one on the team wrote and will be able to maintain properly. It should go in a research project.
> 2. The classic QA example should not be touched by this PR. In general any new functionality should be added to all examples at the same time, which could be done in a separate PR. It's also my understanding that the ONNX conversion won't work for many of the models, but maybe I'm wrong on this.
Thanks for the comments! I'm opening up a new PR here: https://github.com/huggingface/transformers/pull/14066
based on the latest master branch.
The QAT example now goes into transformers/examples/research_projects/qat-qdqbert/, and the classic QA examples are untouched. |
transformers | 14,056 | closed | Fix typo | # What does this PR do?
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 10-18-2021 15:34:46 | 10-18-2021 15:34:46 | |
transformers | 14,055 | closed | Fix label attribution in token classification examples | # What does this PR do?
Currently, when a word is split into tokens in the token classification examples, all tokens get the same label as the word. But if that label is a B-Xxx, only the first token should get it, the following ones should have an I-Xxx label. This PR addresses that.
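A sketch of the adjusted alignment (simplified; `label_list` is the list of label names used by the dataset and `-100` is the ignored index):
```python
# Map each B-XXX label id to the id of the matching I-XXX label
b_to_i_label = [
    label_list.index(label.replace("B-", "I-")) if label.startswith("B-") else idx
    for idx, label in enumerate(label_list)
]


def align_labels(word_labels, word_ids):
    labels, previous_word_idx = [], None
    for word_idx in word_ids:
        if word_idx is None:                  # special tokens are ignored by the loss
            labels.append(-100)
        elif word_idx != previous_word_idx:   # first sub-token keeps the word's label
            labels.append(word_labels[word_idx])
        else:                                 # later sub-tokens: B-Xxx becomes I-Xxx
            labels.append(b_to_i_label[word_labels[word_idx]])
        previous_word_idx = word_idx
    return labels
```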
Fixes #14043 | 10-18-2021 14:09:41 | 10-18-2021 14:09:41 | |
transformers | 14,054 | closed | Fix save when load_best_model_at_end=True | # What does this PR do?
As pointed out on the [forums](https://discuss.huggingface.co/t/why-save-steps-should-be-a-round-multiple-of-eval-steps-when-load-best-model-at-end-true/10841/3), the `Trainer` keeps saving every `eval_steps` when `load_best_model_at_end=True` even though we now enforce that `save_steps` can be independent as long as it's a round multiple of `eval_steps`.
This PR fixes that. | 10-18-2021 13:46:10 | 10-18-2021 13:46:10 | |
transformers | 14,053 | closed | [torch.fx] Abstract dynamic tracer | # What does this PR do?
We hope to abstract the dynamic features of the tracer from the `HFTracer` class. The idea is to leverage the new feature for non-HF modules, for example having a torch-like tracer that supports the dynamic allocation mechanisms.
If this PR is approved, we can move inheritance so that HFTracer implements DynamicTracer.
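A minimal sketch of what such an abstraction could look like on top of `torch.fx` (illustrative only; the real `HFTracer` does considerably more, e.g. allocating dummy inputs dynamically):
```python
from torch import nn
from torch.fx import GraphModule, Tracer


class DynamicTracer(Tracer):
    """Hypothetical base tracer that would host the dynamic features shared with HFTracer."""

    def is_leaf_module(self, m: nn.Module, module_qualified_name: str) -> bool:
        # Customization point: e.g. keep embeddings as opaque leaf calls
        return isinstance(m, nn.Embedding) or super().is_leaf_module(m, module_qualified_name)


def symbolic_trace(model: nn.Module) -> GraphModule:
    tracer = DynamicTracer()
    graph = tracer.trace(model)
    return GraphModule(model, graph)
```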
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@michaelbenayoun
| 10-18-2021 12:50:47 | 10-18-2021 12:50:47 | Closing as @michaelbenayoun is making substantial changes in fx. Will create the abstraction again after his PR is merged. |
transformers | 14,052 | closed | [Examples] Use Audio feature in speech classification | # What does this PR do?
This PR refactors the audio classification examples to use the new `Audio` feature of `datasets`
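As a rough illustration of what the refactor relies on (a sketch; dataset and column names are placeholders):
```python
from datasets import Audio, load_dataset
from transformers import AutoFeatureExtractor

ds = load_dataset("superb", "ks", split="validation")
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))  # decode + resample on access

feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/wav2vec2-base")
sample = ds[0]["audio"]
inputs = feature_extractor(sample["array"], sampling_rate=sample["sampling_rate"], return_tensors="pt")
```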
Same idea as the speech recognition examples refactoring: https://github.com/huggingface/transformers/pull/14027 | 10-18-2021 12:49:52 | 10-18-2021 12:49:52 | |
transformers | 14,051 | closed | In `Trainer`, `evaluation_strategy` defaults to `no`, but `save_strategy` defaults to `steps`. Why? | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.9.2
- Platform: Windows-10-10.0.19041-SP0
- Python version: 3.7.4
- PyTorch version (GPU?): 1.9.0+cu111 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help
@sgugger
## Information
I'm using `BERTForMaskedLM`, but it doesn't matter, because it's a general `Trainer` issue.
Perhaps `save_strategy` and `evaluation_strategy` should both have the same default value?
## To reproduce
Steps to reproduce the behavior:
1. In `Trainer`, set `load_best_model_at_end` to `True` without assigning any values to `save_strategy` or `evaluation_strategy`
2. Try to run training
Here's the error trace:
```
ValueError: --load_best_model_at_end requires the save and eval strategy to match, but found
- Evaluation strategy: IntervalStrategy.NO
- Save strategy: IntervalStrategy.STEPS
```
## Expected behavior
Since `load_best_model=True` requires `evaluation_strategy == save_strategy`, it seems reasonable to me that both `evaluation_strategy` and `save_strategy` should be set to `"no"` by default.
If I am wrong and the current behavior is the intended one, I'd like to understand why that's the case. :)
| 10-18-2021 12:49:40 | 10-18-2021 12:49:40 | The value of those defaults is mainly historical, and we can't change it as it would be a breaking change. So even if it does not necessarily make sense when using `--load_best_model_at_end` (which is exactly why that error message was added, to make sure you are not surprised after spending a lot of compute on training), we are a bit stuck with it.<|||||>I encountered this issue when my libraries weren't in the "right" versions.
I found that this issue is gone when the libraries are:
pytorch - 1.4
Adapter transformers - 2.0.0
(note that transformers and adapter-transformers share the same package name, so make sure that you actually use adapter-transformers)
Another solution is to add the training parameters
evaluation_strategy = "no",
save_strategy = "no"
where the values can be - ['no', 'steps', 'epoch']
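Put differently, the usual fix is simply to make the two strategies match explicitly (a sketch):
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="out",
    evaluation_strategy="steps",   # or "epoch" / "no"
    save_strategy="steps",         # must match evaluation_strategy ...
    eval_steps=500,
    save_steps=500,                # ... and save_steps must be a round multiple of eval_steps
    load_best_model_at_end=True,
)
```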
transformers | 14,050 | closed | TypeError: Inputs to a layer should be tensors. Got: last_hidden_state | ```
# Imports implied by the snippet, added for completeness
import tensorflow as tf
from tensorflow.keras.layers import Input, Conv1D, GlobalAveragePooling1D, Dropout, Dense, concatenate
from tensorflow.keras.models import Model
from transformers import TFLongformerModel


def build_model():
    # PRE_TRAINED_MODEL_NAME, max_len and AOA (a custom attention-over-attention layer)
    # are defined elsewhere in the notebook.
    auto_layer = TFLongformerModel.from_pretrained(PRE_TRAINED_MODEL_NAME, return_dict=False)

    # Context branch
    context_input_ids = Input(shape=(max_len,), dtype=tf.int32, name="context_input_ids")
    context_attention_mask = Input(shape=(max_len,), dtype=tf.int32, name="context_attention_mask")
    context_position_ids = Input(shape=(max_len,), dtype=tf.int32, name="context_position_ids")
    context_sequence_output, context_clf_output = auto_layer(
        context_input_ids,
        attention_mask=context_attention_mask,
        token_type_ids=context_position_ids,
    )

    # Response branch (shares the same Longformer weights)
    reponse_input_ids = Input(shape=(max_len,), dtype=tf.int32, name="reponse_input_ids")
    reponse_attention_mask = Input(shape=(max_len,), dtype=tf.int32, name="reponse_attention_mask")
    reponse_position_ids = Input(shape=(max_len,), dtype=tf.int32, name="reponse_position_ids")
    reponse_sequence_output, reponse_clf_output = auto_layer(
        reponse_input_ids,
        attention_mask=reponse_attention_mask,
        token_type_ids=reponse_position_ids,
    )

    # Convolutional encoders over the two sequence outputs ('same' padding keeps the sequence length)
    cnn_layer_response = Conv1D(filters=256, kernel_size=3, activation='relu', padding='same')
    cnn_layer_context = Conv1D(filters=256, kernel_size=3, activation='relu', padding='same')
    query_seq_encoding = cnn_layer_response(reponse_sequence_output)
    value_seq_encoding = cnn_layer_context(context_sequence_output)
    query_encoding = GlobalAveragePooling1D()(query_seq_encoding)
    value_encoding = GlobalAveragePooling1D()(value_seq_encoding)

    aoa_kv_clf_output = AOA()([query_seq_encoding, value_seq_encoding])  # AOA Attention
    conc = concatenate([query_encoding, value_encoding, aoa_kv_clf_output])
    conc = Dropout(0.2)(conc)
    out = Dense(1, activation='sigmoid')(conc)
    model = Model(
        inputs=[context_input_ids, context_attention_mask, context_position_ids,
                reponse_input_ids, reponse_attention_mask, reponse_position_ids],
        outputs=out,
    )
    return model
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-42-fd334eefe081> in <module>()
1 with strategy.scope():
----> 2 model = build_model()
3
4 METRICS = [
5 tf.keras.metrics.BinaryAccuracy(),
2 frames
<ipython-input-41-c3d20d15e326> in build_model()
50
51 # Query encoding of shape [batch_size, Tq, filters].
---> 52 query_seq_encoding = cnn_layer_response(reponse_sequence_output)
53 # Value encoding of shape [batch_size, Tv, filters].
54 value_seq_encoding = cnn_layer_context(context_sequence_output)
/usr/local/lib/python3.7/dist-packages/keras/engine/base_layer.py in __call__(self, *args, **kwargs)
1018 training=training_mode):
1019
-> 1020 input_spec.assert_input_compatibility(self.input_spec, inputs, self.name)
1021 if eager:
1022 call_fn = self.call
/usr/local/lib/python3.7/dist-packages/keras/engine/input_spec.py in assert_input_compatibility(input_spec, inputs, layer_name)
194 # have a `shape` attribute.
195 if not hasattr(x, 'shape'):
--> 196 raise TypeError('Inputs to a layer should be tensors. Got: %s' % (x,))
197
198 if len(inputs) != len(input_spec):
TypeError: Inputs to a layer should be tensors. Got: last_hidden_state | 10-18-2021 12:26:21 | 10-18-2021 12:26:21 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 14,049 | closed | fix typo | # What does this PR do?
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 10-18-2021 11:26:26 | 10-18-2021 11:26:26 | |
transformers | 14,048 | closed | Update SEW integration test tolerance | The tolerance was too low to consistently reproduce the results on different GPUs | 10-18-2021 10:58:44 | 10-18-2021 10:58:44 | |
transformers | 14,047 | closed | Add TF<>PT and Flax<>PT everywhere | # What does this PR do?
<!-- Remove if not applicable -->
This PR adds the PT<>TF and PT<>Flax equivalence tests to the common PyTorch tests so that the equivalence tests are also fetched when just the PyTorch files are changed. The PR also uncovered a couple of smaller bugs in TFHuBERT and FlaxAlbert.
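The core of such an equivalence check is small (a sketch of the general pattern, not the exact test added here):
```python
import numpy as np
import torch


def check_pt_tf_equivalence(pt_model, tf_model, pt_inputs, tf_inputs, tol=4e-2):
    pt_model.eval()
    with torch.no_grad():
        pt_out = pt_model(**pt_inputs).last_hidden_state.numpy()
    tf_out = tf_model(tf_inputs, training=False).last_hidden_state.numpy()

    max_diff = np.max(np.abs(pt_out - tf_out))
    assert max_diff < tol, f"PT and TF outputs differ by {max_diff}"
```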
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 10-18-2021 10:21:05 | 10-18-2021 10:21:05 | > Thanks for adding those. The only downside is that they will be run twice when we do the full suite, but I don't think it's a big issue.
Yes, but I think it's much more important to be sure that if one changes PyTorch's CLIP that both PT<>Flax and PT<>TF is run with the test fetcher.
Also, the tests are actually slightly different in the sense that the ones that I added:
- Use the dummy model config of the PyTorch test suite
- Use the model inputs created in the PyTorch tests
whereas the previous tests used Flax's or TF's config and inputs. So I think having those tests also forces us to be consistent with the testing configs and inputs across frameworks
transformers | 14,046 | closed | [Flax] Clip fix test | # What does this PR do?
<!-- Remove if not applicable -->
Corrects failing tests on master
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 10-18-2021 10:14:32 | 10-18-2021 10:14:32 | #14047 should prevent tests failing to check those in the future<|||||>Merging as it's currently failing on master and affecting lots of PRs <|||||>Thanks a lot for fixing this @patrickvonplaten! |
transformers | 14,045 | closed | [Speech] Move all examples to new audio feature | # What does this PR do?
<!-- Remove if not applicable -->
This PR makes sure that audio loading uses the new audio feature in all tests. At the same time the dataset `"hf-internal-testing/librispeech_asr_dummy"` is updated to make use of the new audio feature.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 10-18-2021 09:46:41 | 10-18-2021 09:46:41 | |
transformers | 14,044 | closed | Fixes typo in `modeling_speech_to_text` | # What does this PR do?
@sgugger @patrickvonplaten please let me know if the changed line is a bug or a valid documentation π
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 10-18-2021 07:57:49 | 10-18-2021 07:57:49 | |
transformers | 14,043 | closed | Running `run_ner_no_trainer.py` with `--label_all_tokens` falsifies seqeval results | ## Environment info
- `transformers` version: 4.11.3
- Platform: Linux-5.4.0-80-generic-x86_64-with-glibc2.17
- Python version: 3.8.11
- PyTorch version (GPU?): 1.9.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: True
- Using distributed or parallel set-up in script?: False
## Who can help
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
## Description
Hi π€ team, not really an bug report, as the code is doing what it should, but not strictly a feature request either.
When running `run_ner_no_trainer.py` with `--label_all_tokens`, evaluating with `seqeval` (as default in the script) completely fudges the results. This is due to the fact that `seqeval` is entity based, which means that each sub-token in a "B-" labelled word will be considered as a single entity.
Example :
Before tokenisation:
```
O B-PERS I-PERS I-PERS
The Australian Prime minister
```
-> Number of entities: 1
After tokenisation with `label_all_tokens`:
```
O B-PERS B-PERS I-PERS I-PERS
The Austral ##ian Prime minister
```
-> Number of entities: 2
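The inflation is easy to reproduce with `seqeval` directly (a sketch comparing the correctly expanded labels against the `--label_all_tokens` expansion):
```python
from seqeval.metrics import classification_report

correct = [["O", "B-PERS", "I-PERS", "I-PERS", "I-PERS"]]     # B- only on the first sub-token
all_tokens = [["O", "B-PERS", "B-PERS", "I-PERS", "I-PERS"]]  # B- propagated to every sub-token

print(classification_report(correct, all_tokens))
# the reference side contains 1 PERS entity, the propagated side is counted as 2
```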
This can yield pretty important differences which may easily be overlooked. I think it is worth adding a warning when using the script with this configuration. | 10-18-2021 07:42:40 | 10-18-2021 07:42:40 | Ah yes, we should convert the B- to an I-, which we do in other examples.
transformers | 14,042 | closed | Bug in the Flaubert tokenizer_config.json do_lowercase option | ## Environment info
- `transformers` version: 4.3.2
- Platform: Linux-5.4.0-88-generic-x86_64-with-glibc2.10
- Python version: 3.8.8
- PyTorch version (GPU?): 1.9.0a0+df837d0 (True)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
@LysandreJik @thomwolf @patrickvonplaten
## Information
Model I am using: FlauBERT
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [ X ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ X ] my own task or dataset: (give details below)
## Report
Hi,
This is a bug report for the Flaubert tokenizer. (It has originally been posted [on the huggingface forum](https://discuss.huggingface.co/t/bug-in-the-flaubert-tokenizer-config-json-do-lowercase-option/10084), but I guess, it was a wrong place)
The `tokenizer_config.json` of all models of the Flaubert model repo, for example: [here](https://huggingface.co/flaubert/flaubert_base_uncased/blob/main/tokenizer_config.json) has the wrong option name:
```
{
"do_lower_case": true
}
```
while it should read `do_lowercase`, as expected by the [FlaubertTokenizer.](https://huggingface.co/transformers/model_doc/flaubert.html?highlight=flauberttokenizer#transformers.FlaubertTokenizer) This results in all Flaubert models having case-sensitive tokenizers.
In my project (`flaubert-base-uncased` model) the bug first manifested itself in transformers v.4.4.0. Previous versions of transformers somehow didnβt download this file, and after the version upgrade, I noticed that my network behaves differently from before. It may be related to Pull Request #10624, but Iβm not at all sure here and it probably doesnβt really matter.
Thanks for correcting the bug and many thanks for the great library.
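Until the hub files are corrected, the option can also be passed explicitly (a sketch; `do_lowercase` is the parameter documented by `FlaubertTokenizer`):
```python
from transformers import FlaubertTokenizer

tokenizer = FlaubertTokenizer.from_pretrained(
    "flaubert/flaubert_base_uncased", do_lowercase=True
)
print(tokenizer.tokenize("Bonjour Madame"))  # should now be lowercased before BPE
```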
## To reproduce
1. Download the default `flaubert-base-uncased` model from the official repo, fix the seed, start the training and print out something (train loss, some weights, etc).
2. Then in the same downloaded model, modify the `"do_lower_case": false` and restart the training. Make sure that the printed value is exactly the same.
3. Then still in the same downloaded model, modify the option to `"do_lowercase": false`, restart the training and once again make sure that the printed value is the same.
Basically, those three trainings are the same because in the first two cases the network uses the default value of `"do_lowercase": false`, while in the third one we explicitly select the option.
4. Finally, in the very same model, set `"do_lowercase": true`, launch the training and check that now the training is different, and that this is indeed the correct option name, which controls the upper/lower case of the model.
## Expected behavior
The above tests would prove that the correct option, that controls the tokenizer is `do_lowercase`.
| 10-18-2021 07:40:03 | 10-18-2021 07:40:03 | Hello, I can indeed reproduce. Thank you for opening an issue, I've updated the files on the hub. Let me know if your issue is fixed!<|||||>Hello,
Thank you very much for the quick fix!
Yes, the issue on my side is fixed, and I believe, everything should work correctly now.
I would still suggest applying the same fix to the other 3 cased Flaubert models for consistency. And also removing the incorrect `do_lower_case` option entirely to avoid confusion. But I admit that this is your design choice as those two fixes won't change the behaviour directly.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 14,041 | closed | update to_py_obj to support np.int type | # What does this PR do?
Update `to_py_obj` function to make it support np.int type.
<!-- Remove if not applicable -->
Fixes # See an example below:
>>> tokenizer.decode(110)
'no'
>>> tokenizer.decode(np.array([110])[0])
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/prettymeng/anaconda3/envs/ptuning/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 3179, in decode
return self._decode(
File "/Users/prettymeng/anaconda3/envs/ptuning/lib/python3.8/site-packages/transformers/tokenization_utils_fast.py", line 527, in _decode
text = self._tokenizer.decode(token_ids, skip_special_tokens=skip_special_tokens)
TypeError: Can't convert 110 to Sequence
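A minimal sketch of the kind of check being added (the framework-tensor branches are omitted and the exact implementation in the PR may differ):
```python
import numpy as np


def to_py_obj(obj):
    """Convert numpy containers and numpy scalars to plain Python objects."""
    if isinstance(obj, dict):
        return {k: to_py_obj(v) for k, v in obj.items()}
    if isinstance(obj, (list, tuple)):
        return [to_py_obj(o) for o in obj]
    if isinstance(obj, np.ndarray):
        return obj.tolist()
    if isinstance(obj, np.number):  # np.int64, np.int32, np.float32, ... scalars
        return obj.item()
    return obj


to_py_obj(np.array([110])[0])  # -> 110, a plain Python int
```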
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 10-18-2021 05:39:48 | 10-18-2021 05:39:48 | > Thanks for your PR! Your check can be improved to use all numpy numeric types. Also, a comment would be welcome as I though `tolist()` would fail on those 0d arrays when it actually succeeds (I though we would need an `item()` for those).
Thanks for the review! Any idea about why this still fails in the test?
Also, I don't quite understand "a comment would be welcome as I though `tolist()` would fail on those 0d arrays when it actually succeeds (I though we would need an `item()` for those)".
Do you mean `tolist()` will fail on 0d arrays?
On my own machine, it should work:
>>> np.array(1).tolist()
1<|||||>No it does work, but it is surprising as I would not have expected it (since a 0d array is not a "listy" thing). Hence the comment so that when someone else stumbles on this, they are not surprised.
The first failure comes from the code style, run `make style` on your branch to solve it.
The second failure is unrelated to this PR, you can ignore it.<|||||>Mmm, it looks like the `make style` command did not properly format. Make sure you do
```
pip install -e .[quality]
```
to have the same versions of all the libraries we use.<|||||>> Mmm, it looks like the `make style` command did not properly format. Make sure you do
>
> ```
> pip install -e .[quality]
> ```
>
> to have the same versions of all the libraries we use.
I see. Thanks! It should work now.<|||||>Mmm, it looks like the PR is now changing 53 files. It may due to a rebase on master, so could you close this PR and open a fresh one from your branch? This way GitHub should only show the real diff. |
transformers | 14,040 | closed | [Speech] Refactor Examples | # What does this PR do?
<!-- Remove if not applicable -->
This PR adapts all Wav2Vec2-like models to use the same examples and makes sure that `...ForCTC` and `...ForSequenceClassification` have exactly the same structure. HuBERT, SEW, SEW-D and soon UniSpeech and others that are based on Wav2Vec2 usually don't have any specific heads, but rather should just follow the Wav2Vec2 head design of CTC and the Superb head design for SequenceClassification (SpeakerVerification, ....). Therefore we should make it as easy as possible to add such heads to new Wav2Vec2 versions.
This PR makes sure that a simple `#Copied from ...` command can be used for such heads which should allow us to work faster when adding new speech models while making sure the design is unified and correct.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 10-17-2021 21:01:44 | 10-17-2021 21:01:44 | Wait until https://github.com/huggingface/transformers/pull/14026#issuecomment-945201433 is fixed |
transformers | 14,039 | closed | Enabling Discussion on github | # 🚀 Feature request
Enabling Discussion on github
## Motivation
Reserve issues for feature requests and bugs only (anything that would be an addition to the transformers library), while discussions would be for guidance and questions.
I know that this is the purpose of [discuss huggingface](https://discuss.huggingface.co/c/transformers/9). However, a large portion of users interact only through GitHub, so it makes sense to keep issues free of general questions and guidance (**false issues**), and as transformers maintainers you will relate to the amount of notifications these generate.
| 10-17-2021 10:47:30 | 10-17-2021 10:47:30 | Good idea!!<|||||>Hello @sadakmed, thank you for your proposal! As you have mentioned, we have a forum that is already active. Rather than opening a new communication channel that would split interaction between the forum and the GitHub discussions, we'd rather stay focused on having only the forum.
Does that make sense?
cc @sgugger <|||||>Yes, we do not want to duplicate the communication channels and I think it would just be too confusing for everyone to have the same discussions in two different places. The forums via Discourse were chosen as they are ultimately better integrated with the rest of the Hugging Face website and model Hub (same login, easy going from one to the other) whereas the Discussions on GitHub did not offer that feature.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 14,038 | closed | Add cross attentions to TFGPT2Model | # What does this PR do?
Add cross attention to TFGPT2.
This was previously done in #13222, but we decided to move this to a new PR.
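Below is a rough, hedged sketch (not the code or checkpoints from the new test) of what the added cross attentions enable: using GPT-2 as the decoder of a TF encoder-decoder model.

```python
# Illustrative only: build a bert2gpt2 model in TensorFlow. The helper marks the
# GPT-2 decoder with is_decoder=True and add_cross_attention=True, which is what
# the cross attentions added in this PR make usable.
from transformers import TFEncoderDecoderModel

model = TFEncoderDecoderModel.from_encoder_decoder_pretrained("bert-base-cased", "gpt2")
print(model.decoder.config.add_cross_attention)  # True
```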
I also added `TFGPT2EncoderDecoderModelTest` with `test_bert2gpt2_summarization`. | 10-17-2021 10:14:38 | 10-17-2021 10:14:38 | Just correct the line about
```
"""Not working, because pt checkpoint has `encoder.encoder.layer...` while tf model has `encoder.bert.layer...`.
```
For tf here, it has `encoder.bert.encoder.layer...` instead.<|||||>Sorry, I tried fixing some merge conflicts, but I think this introduced a new error. @ydshieh could you maybe quickly go into the PR again and fix those last tests? :-) The PR looks good for me otherwise!<|||||>>
>
> Sorry, I tried fixing some merge conflicts, but I think this introduced a new error. @ydshieh could you maybe quickly go into the PR again and fix those last tests? :-) The PR looks good for me otherwise!
It's OK now :) |
transformers | 14,037 | closed | id2label is list but we need dict to load models properly from tf.saved_model | id2label is a list and is consistent throughout PyTorch and transformers; the config.json file created through TFmodel.save_pretrained keeps it as a list as well. The code above expected id2label to be a dict object but got a list instead, causing errors when loading saved models in TensorFlow.
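For illustration, a minimal, self-contained sketch of the kind of normalization at stake (the variable names are hypothetical, not the library's internals):

```python
# Hypothetical example: id2label may be serialized either as a dict or as a list.
serialized_id2label = ["NEGATIVE", "POSITIVE"]  # list form, as written in some config.json files

if isinstance(serialized_id2label, dict):
    id2label = {int(k): v for k, v in serialized_id2label.items()}
else:
    # list form: the position in the list is the label id, so calling .items() here would fail
    id2label = dict(enumerate(serialized_id2label))

print(id2label)  # {0: 'NEGATIVE', 1: 'POSITIVE'}
```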
# What does this PR do?
<!--
prevented properly loading models from tensorflow
-->
<!-- Remove if not applicable -->
Fixes # (issue)
# id2label AttributeError : 'list' object has no attribute 'items' when loading saved model in TF
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
@LysandreJik
-->
| 10-17-2021 08:08:04 | 10-17-2021 08:08:04 | Hi @Abhishek-krg, I'm sorry for the slow reply. There are a couple of issues here:
1. The PR has a trailing ), so the code is invalid. Can you fix that issue so we can run tests?
2. The attached code is missing. Can you show some sample code that's failing without this PR?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 14,036 | closed | [Gradient checkpoining] Update Wav2Vec scripts | # What does this PR do?
This PR makes the Wav2Vec scripts compatible with the changes introduced in #13657 regarding the `gradient_checkpointing` feature/argument.
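For context, a hedged sketch of the API change in question (the checkpoint name is just an example):

```python
from transformers import Wav2Vec2ForCTC

# deprecated pattern: passing the flag through the config / from_pretrained kwargs
# model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base", gradient_checkpointing=True)

# pattern recommended after #13657: enable it on the instantiated model
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base")
model.gradient_checkpointing_enable()
```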
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@stas00, @LysandreJik, @patrickvonplaten | 10-16-2021 19:42:30 | 10-16-2021 19:42:30 | FYI, this is a duplicate of https://github.com/huggingface/transformers/pull/13964, but I think yours is better since mine doesn't change the flax example.<|||||>Hi and sorry for the duplicate. I checked for similar issues but forgot to search for PRs.
Besides the missed flax example, #13964 possibly runs into this warning: [`Passing gradient_checkpointing to a config initialization is deprecated and will be removed in v5 Transformers.`](https://github.com/huggingface/transformers/blob/cde0c750af2fae6848ed3ee8be381b4f1230ecd0/src/transformers/configuration_utils.py#L339)
The present PR follows the recommendation from [Performance#Gradient Checkpointing](https://github.com/huggingface/transformers/blob/cde0c750af2fae6848ed3ee8be381b4f1230ecd0/docs/source/performance.md#gradient-checkpointing). <|||||>No need to be sorry, I was just pointing to maintainers that there are 2 of the kind so it's easy to deal with them at once.
Further, https://github.com/huggingface/transformers/pull/13877 moved wav2vec2 to supported examples, but for some reason these examples didn't get ported.
<|||||>Well noted, the addition of `examples/pytorch/speech-pretraining` got me confused.
As I understood, [run_wav2vec2_pretraining_no_trainer.py](https://github.com/huggingface/transformers/blob/cde0c750af2fae6848ed3ee8be381b4f1230ecd0/examples/pytorch/speech-pretraining/run_wav2vec2_pretraining_no_trainer.py) is equivalent to `examples/research_projects/wav2vec2/run_pretrain.py`, but uses `accelerate` instead of the `Trainer` API.
Nonetheless, there seems to be some duplicated work in, e.g., argument parsing, dataset setup, and model instantiation. It is also not clear whether the notes in `examples/pytorch/speech-pretraining` also apply to `examples/research_projects/wav2vec2/` (they probably do, so it would be nice to have them together).<|||||>@patrickvonplaten, we had 2 similar PRs. https://github.com/huggingface/transformers/pull/13964 got merged
and this one has one more file covered that mine didn't.
I rebased it to incorporate the changes from the other PR.<|||||>Thanks for updating the scripts! |
transformers | 14,035 | closed | How to untie input and output word embeddings (make embeddings independent) for Bart? | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.5.1
- Platform: Linux-5.8.0-40-generic-x86_64-with-glibc2.17
- Python version: 3.8.10
- PyTorch version (GPU?): 1.8.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
@patrickvonplaten, @patil-suraj
## Information
Model I am using (Bert, XLNet ...): BartForConditionalGeneration
I want to make "shared embeddings" non-trainable but "lm-head" trainable, but couldn't because of the way "lm-head" is initialised.
In bart's modeling_utils.py there is the following piece of code. Now what this does is that BartForConditionalGeneration's lm_head is tied to the token embeddings used by Bart's enc/dec. This is wrong, right? We just want these two parameters to be initialised the same, but not to have the same value throughout.
Use of embeddings matrix -> to convert sparse tokens at the enc/dec input to dense embeddings
Use of lm_head -> to convert dense embeddings at the decoder output side to a logits vector across tokens in the vocabulary.
Is there a reason they should be tied even when fine-tuning bart? Why then is this not the case when torchscript mode is on (as evident from the line ```output_embeddings.weight = nn.Parameter(input_embeddings.weight.clone())``` )?
```
def _tie_or_clone_weights(self, output_embeddings, input_embeddings):
    """Tie or clone module weights depending of whether we are using TorchScript or not"""
    if self.config.torchscript:
        output_embeddings.weight = nn.Parameter(input_embeddings.weight.clone())
    else:
        ################################
        ### NOTE THE FOLLOWING LINE ###
        ################################
        output_embeddings.weight = input_embeddings.weight

    if getattr(output_embeddings, "bias", None) is not None:
        output_embeddings.bias.data = torch.nn.functional.pad(
            output_embeddings.bias.data,
            (
                0,
                output_embeddings.weight.shape[0] - output_embeddings.bias.shape[0],
            ),
            "constant",
            0,
        )
    if hasattr(output_embeddings, "out_features") and hasattr(input_embeddings, "num_embeddings"):
        output_embeddings.out_features = input_embeddings.num_embeddings
```
## Expected behavior
The torchscript-mode behaviour should be kept as the default, because only the initialised weights of lm-head and the embeddings should be the same, not the weights throughout training. | 10-16-2021 19:01:22 | 10-16-2021 19:01:22 | @patil-suraj any comment?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Hey @baekg,
Sorry for replying so late here. By default BART's input and output word embeddings are tied (like it is the case for almost all models: BERT, RoBERTa, T5, ...). This means that the input and output word embeddings weights are identical and will stay identical during training. Each backward pass therefore does two gradient updates for one weight matrix. You can verify that easily by doing the following:
```python
from transformers import BartForConditionalGeneration
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large")
# change value in input word embedding
model.model.shared.weight.data[0, 0] = -10.0
# see how this has also changed the value in `lm_head`
model.lm_head.weight.data[0, 0]
```
Now by passing `tie_word_embeddings=False` to the model at initialization, you can "untie" the input word embeddings and `lm_head`. See:
```python
from transformers import BartForConditionalGeneration
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large", tie_word_embeddings=False)
# change value in input word embedding
model.model.shared.weight.data[0, 0] = -10.0
# see that this time the value in `lm_head` has NOT changed (the embeddings are untied)
model.lm_head.weight.data[0, 0]
```
This way you can make the input embeddings non-trainable (set `.requires_grad=False`) without the `lm_head` being affected.
For `torchscript` the word embeddings are untied by default since the compilation doesn't work otherwise.
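As a small follow-up sketch of that freezing pattern (illustrative, along the lines of the snippets above):

```python
from transformers import BartForConditionalGeneration

# untie first, then freeze only the input embeddings
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large", tie_word_embeddings=False)
model.model.shared.weight.requires_grad = False

print(model.model.shared.weight.requires_grad)  # False -> input embeddings frozen
print(model.lm_head.weight.requires_grad)       # True  -> untied lm_head stays trainable
```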
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |