repo | number | state | title | body | created_at | closed_at | comments
---|---|---|---|---|---|---|---|
transformers | 14,942 | closed | when I use "convert_pytorch_checkpoint_to_tf", I meet some problems | I found that these args are not really useful; in fact, I cannot run the code when using the arg "model_name".
The original code is like this, and I could not get it to work:
```python
model = BertModel.from_pretrained(
    pretrained_model_name_or_path=args.model_name,
    state_dict=torch.load(args.pytorch_model_path),
    cache_dir=args.cache_dir,
)
```
When I change the code as follows, I can run it successfully:
```python
model = BertModel.from_pretrained(
    pretrained_model_name_or_path=args.cache_dir
)
```
| 12-27-2021 10:53:38 | 12-27-2021 10:53:38 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 14,941 | closed | Enabling `tokenizers` upgrade. | # What does this PR do?
`tokenizers==0.11` is now available in all distribution locations (conda included)
and will enable truncating left on tokenizers that require it.
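For context, here is a minimal sketch of what left truncation looks like from the user side once it is supported. The `truncation_side` attribute used below is an assumption for illustration and is not part of this PR:
```python
from transformers import AutoTokenizer

# Load a fast tokenizer backed by the `tokenizers` library.
tokenizer = AutoTokenizer.from_pretrained("gpt2")

# Keep the end of over-long inputs instead of the start; this is what
# "truncating left" refers to above (assumes the tokenizer exposes `truncation_side`).
tokenizer.truncation_side = "left"

encoded = tokenizer(
    "a very long prompt that does not fit into the maximum length",
    truncation=True,
    max_length=8,
)
print(tokenizer.decode(encoded["input_ids"]))
```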
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@SaulLu @LysandreJik
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
--> | 12-27-2021 10:04:57 | 12-27-2021 10:04:57 | |
transformers | 14,940 | closed | Fix duplicate call to save_checkpoint when using deepspeed | # What does this PR do?
Drop the duplicate call to `deepspeed.save_checkpoint()`; the `trainer.save_model()` function already handles that case.
Following this change: https://github.com/huggingface/transformers/pull/14652/files#diff-ed55888e6665791fe92cc8fc0c499da54f4ace6738551cd9a2591881cda076deR1986
The call to save_checkpoint() was duplicated.
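To make the intent concrete, here is a schematic of the deduplicated flow; this is illustrative only and is not the actual `Trainer` code:
```python
# Illustrative pseudo-structure -- the real logic lives in transformers' Trainer.
def _save_checkpoint(trainer, output_dir):
    # save_model() already calls deepspeed's save_checkpoint() internally,
    trainer.save_model(output_dir)
    # so a second call along the lines of
    #     trainer.deepspeed.save_checkpoint(output_dir)
    # at this point is redundant and was removed.
```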
I found this issue after seeing the following logs (note the last 4 lines):
```
[INFO|trainer.py:2033] 2021-12-26 19:42:00,421 >> Saving model checkpoint to finetuned-ro-en-dev/checkpoint-2
[INFO|configuration_utils.py:425] 2021-12-26 19:42:00,423 >> Configuration saved in finetuned-ro-en-dev/checkpoint-2/config.json
[INFO|modeling_utils.py:1070] 2021-12-26 19:44:09,064 >> Model weights saved in finetuned-ro-en-dev/checkpoint-2/pytorch_model.bin
[INFO|tokenization_utils_base.py:2043] 2021-12-26 19:44:09,110 >> tokenizer config file saved in finetuned-ro-en-dev/checkpoint-2/tokenizer_config.json
[INFO|tokenization_utils_base.py:2049] 2021-12-26 19:44:09,112 >> Special tokens file saved in finetuned-ro-en-dev/checkpoint-2/special_tokens_map.json
[2021-12-26 19:44:09,596] [INFO] [logging.py:69:log_dist] [Rank 0] Saving model checkpoint: finetuned-ro-en-dev/checkpoint-2/global_step2/mp_rank_00_model_states.pt
[2021-12-26 19:59:09,484] [INFO] [engine.py:2964:_save_zero_checkpoint] zero checkpoint saved finetuned-ro-en-dev/checkpoint-2/global_step2/zero_pp_rank_0_mp_rank_00_optim_states.pt
[2021-12-26 19:59:09,575] [INFO] [logging.py:69:log_dist] [Rank 0] Saving model checkpoint: finetuned-ro-en-dev/checkpoint-2/global_step2/mp_rank_00_model_states.pt
[2021-12-26 20:16:17,005] [INFO] [engine.py:2964:_save_zero_checkpoint] zero checkpoint saved finetuned-ro-en-dev/checkpoint-2/global_step2/zero_pp_rank_0_mp_rank_00_optim_states.pt
```
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
- @stas00 @LysandreJik
| 12-27-2021 08:57:56 | 12-27-2021 08:57:56 | Closed PR as it was created from the wrong branch<|||||>continued in https://github.com/huggingface/transformers/pull/14946 |
transformers | 14,939 | closed | Cannot Convert Megatron GPT checkpoint | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.11.2
- Platform: Linux-5.4.0-91-generic-x86_64-with-glibc2.10
- Python version: 3.8.10
- PyTorch version (GPU?): 1.10.0a0+3fd9dcf (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.-->
@stas00 @LysandreJik @jdemouth-nvidia
## Information
I am trying to convert a trained Megatron GPT-2 checkpoint to a Hugging Face GPT model using the provided [script](https://github.com/huggingface/transformers/blob/master/src/transformers/models/megatron_gpt2/convert_megatron_gpt2_checkpoint.py), and I have followed the steps exactly as described in the provided [model card](https://huggingface.co/nvidia/megatron-gpt2-345m).
But I get this error
```
root@blr-dgxa100-1:/workspace/home/sean/kannada-gpt# python3 $MYDIR/transformers/src/transformers/models/megatron_gpt2/convert_megatron_gpt2_checkpoint.py $MYDIR/nvidia/megatron-gpt2-345m/model_optim_rng.pt
Extracting PyTorch state dictionary from /root/nvidia/megatron-gpt2-345m/model_optim_rng.pt
Traceback (most recent call last):
File "/root/transformers/src/transformers/models/megatron_gpt2/convert_megatron_gpt2_checkpoint.py", line 345, in <module>
main()
File "/root/transformers/src/transformers/models/megatron_gpt2/convert_megatron_gpt2_checkpoint.py", line 256, in main
input_state_dict = torch.load(args.path_to_checkpoint, map_location="cpu")
File "/opt/conda/lib/python3.8/site-packages/torch/serialization.py", line 607, in load
return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
File "/opt/conda/lib/python3.8/site-packages/torch/serialization.py", line 882, in _load
result = unpickler.load()
File "/opt/conda/lib/python3.8/site-packages/torch/serialization.py", line 875, in find_class
return super().find_class(mod_name, name)
ModuleNotFoundError: No module named 'megatron.model.enums'
root@blr-dgxa100-1:/workspace/home/sean/kannada-gpt# cp kannada_gpt_checkpoint/iter_0210000/mp_rank_00/model_optim_rng.pt $MYDIR/nvidia/megatron-gpt2-345m
root@blr-dgxa100-1:/workspace/home/sean/kannada-gpt# python3 $MYDIR/transformers/src/transformers/models/megatron_gpt2/convert_megatron_gpt2_checkpoint.py $MYDIR/nvidia/megatron-gpt2-345m/model_optim_rng.pt
Extracting PyTorch state dictionary from /root/nvidia/megatron-gpt2-345m/model_optim_rng.pt
Traceback (most recent call last):
File "/root/transformers/src/transformers/models/megatron_gpt2/convert_megatron_gpt2_checkpoint.py", line 345, in <module>
main()
File "/root/transformers/src/transformers/models/megatron_gpt2/convert_megatron_gpt2_checkpoint.py", line 256, in main
input_state_dict = torch.load(args.path_to_checkpoint, map_location="cpu")
File "/opt/conda/lib/python3.8/site-packages/torch/serialization.py", line 607, in load
return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
File "/opt/conda/lib/python3.8/site-packages/torch/serialization.py", line 882, in _load
result = unpickler.load()
File "/opt/conda/lib/python3.8/site-packages/torch/serialization.py", line 875, in find_class
return super().find_class(mod_name, name)
ModuleNotFoundError: No module named 'megatron.model.enums'
```
## To reproduce
Steps to reproduce the behavior:
Unfortunately, this happens only on my trained checkpoint; I am able to convert the existing open-sourced checkpoints.
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
Should have converted the megatron checkpoint
<!-- A clear and concise description of what you would expect to happen. -->
| 12-27-2021 08:46:35 | 12-27-2021 08:46:35 | It's because your custom checkpoint includes a module namespace visible during training (and embedded in the checkpoint) but not at the conversion time.
The official checkpoint converts w/o errors:
```
git clone https://github.com/huggingface/transformers/
cd transformers
mkdir megatron_lm_345m
wget --content-disposition https://api.ngc.nvidia.com/v2/models/nvidia/megatron_lm_345m/versions/v0.0/zip -O megatron_lm_345m/checkpoint.zip
python src/transformers/models/megatron_gpt2/convert_megatron_gpt2_checkpoint.py megatron_lm_345m/checkpoint.zip
Extracting PyTorch state dictionary from megatron_lm_345m/checkpoint.zip
Converting
Saving config
Adding GPT2TokenizerFast tokenizer files
Saving checkpoint to "megatron_lm_345m/pytorch_model.bin"
```
so it was probably created before Meg-LM added 'megatron.model.enums' and thus it works.
One way to solve your problem is to add `Megatron-LM` cloned path to your `PYTHONPATH`, e.g. using your setup:
```
git clone https://github.com/NVIDIA/Megatron-LM
git clone https://github.com/huggingface/transformers/
cd transformers
PYTHONPATH=../Megatron-LM python src/transformers/models/megatron_gpt2/convert_megatron_gpt2_checkpoint.py /root/nvidia/megatron-gpt2-345m/model_optim_rng.pt
```
but basically **python is looking for `megatron/model/enums.py` and can't find it, so you need to tell python where to find it.**
If the relative path approach doesn't work, use the full path instead.
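For completeness, the same effect can also be achieved from inside Python instead of via the environment variable. A minimal sketch, where `/path/to/Megatron-LM` is a placeholder for the clone that produced the checkpoint:
```python
import sys

# Make the Megatron-LM clone importable so that pickled references to
# `megatron.model.enums` can be resolved while unpickling the checkpoint.
sys.path.insert(0, "/path/to/Megatron-LM")

import torch

state_dict = torch.load("model_optim_rng.pt", map_location="cpu")
```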
Not sure where best to document this, though, as it impacts only newer Megatron-LM code and the conversion script can't magically find the Megatron-LM clone, i.e. we can't fix it on our side other than by documenting this issue.
And since one can't install Megatron-LM as a package we can't make the script require it.
<|||||>Thank you @stas00, adding the clone to PYTHONPATH has helped. Maybe editing the [model card](https://huggingface.co/nvidia/megatron-gpt2-345m) would be the right thing; let me know if I should close this issue.<|||||>Great, thank you for validating my suggestion.
We need to update 3 model cards:
- https://huggingface.co/nvidia/megatron-gpt2-345m/blob/main/README.md
- https://huggingface.co/nvidia/megatron-bert-uncased-345m/blob/main/README.md
- https://huggingface.co/nvidia/megatron-bert-cased-345m/blob/main/README.md
I need to figure out how to get perms to do so. So let's keep this open until then.
and I will also add a note to both megatron conversion scripts since all of them are impacted.
If I understand correctly, you recommend adding `PYTHONPATH=../Megatron-LM` right before the call to the `python` interpreter. That would go on line 118 of https://huggingface.co/nvidia/megatron-gpt2-345m/blob/main/README.md and similarly in the two other files. Is that right? <|||||>Hi @jdemouth,
I was planning to add the same content as I proposed in this PR: https://github.com/huggingface/transformers/pull/14956
Note that this is not always required, but it is needed in some cases. For example, if the checkpoint was produced with another Megatron-LM clone, like https://github.com/microsoft/Megatron-DeepSpeed/, it's that clone that will be needed instead, since its `enums` are different again. So basically the clone of the repo that the checkpoint was created with is needed. I should probably add this additional info to the PR as well.
I see you have access to the nvidia org on the hub and perhaps you can do the honours of editing the cards?
Thanks.<|||||>Sorry for the late reply. I have just reviewed the PR #14956. Let me update the cards. <|||||>Thank you for editing the cards, @jdemouth! |
transformers | 14,938 | closed | Question: Object of type EncoderDecoderConfig is not JSON serializable | Hi.
An error occurred when I used Trainer to train and save EncoderDecoderModel.
```python
File "/home/jwli/ljw/study/hotpotqa/roberta_seq2seq/roberta_for_seq2seq.py", line 482, in <module>
run(model_args, data_args, training_args)
File "/home/jwli/ljw/study/hotpotqa/roberta_seq2seq/roberta_for_seq2seq.py", line 465, in run
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/home/jwli/anaconda3/envs/study/lib/python3.7/site-packages/transformers/trainer.py", line 1391, in train
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)
File "/home/jwli/anaconda3/envs/study/lib/python3.7/site-packages/transformers/trainer.py", line 1495, in _maybe_log_save_evaluate
self._save_checkpoint(model, trial, metrics=metrics)
File "/home/jwli/anaconda3/envs/study/lib/python3.7/site-packages/transformers/trainer.py", line 1557, in _save_checkpoint
self.save_model(output_dir)
File "/home/jwli/anaconda3/envs/study/lib/python3.7/site-packages/transformers/trainer.py", line 1961, in save_model
self._save(output_dir)
File "/home/jwli/anaconda3/envs/study/lib/python3.7/site-packages/transformers/trainer.py", line 2009, in _save
self.model.save_pretrained(output_dir, state_dict=state_dict)
File "/home/jwli/anaconda3/envs/study/lib/python3.7/site-packages/transformers/modeling_utils.py", line 1053, in save_pretrained
model_to_save.config.save_pretrained(save_directory)
File "/home/jwli/anaconda3/envs/study/lib/python3.7/site-packages/transformers/configuration_utils.py", line 416, in save_pretrained
self.to_json_file(output_config_file, use_diff=True)
File "/home/jwli/anaconda3/envs/study/lib/python3.7/site-packages/transformers/configuration_utils.py", line 739, in to_json_file
writer.write(self.to_json_string(use_diff=use_diff))
File "/home/jwli/anaconda3/envs/study/lib/python3.7/site-packages/transformers/configuration_utils.py", line 725, in to_json_string
return json.dumps(config_dict, indent=2, sort_keys=True) + "\n"
File "/home/jwli/anaconda3/envs/study/lib/python3.7/json/__init__.py", line 238, in dumps
**kw).encode(obj)
File "/home/jwli/anaconda3/envs/study/lib/python3.7/json/encoder.py", line 201, in encode
chunks = list(chunks)
File "/home/jwli/anaconda3/envs/study/lib/python3.7/json/encoder.py", line 431, in _iterencode
yield from _iterencode_dict(o, _current_indent_level)
File "/home/jwli/anaconda3/envs/study/lib/python3.7/json/encoder.py", line 405, in _iterencode_dict
yield from chunks
File "/home/jwli/anaconda3/envs/study/lib/python3.7/json/encoder.py", line 438, in _iterencode
o = _default(o)
File "/home/jwli/anaconda3/envs/study/lib/python3.7/json/encoder.py", line 179, in default
raise TypeError(f'Object of type {o.__class__.__name__} '
TypeError: Object of type EncoderDecoderConfig is not JSON serializable
```
My model and config are defined by the following code.
```python
tokenizer = RobertaTokenizerFast.from_pretrained(model_args.tokenizer_name)
encoder_config = RobertaConfig.from_pretrained(model_args.encoder_model_name_or_path)
decoder_config = RobertaConfig.from_pretrained(model_args.decoder_model_name_or_path)
encoder_decoder_config = EncoderDecoderConfig.from_encoder_decoder_configs(encoder_config, decoder_config)
model = RobertaForSeq2Seq.from_encoder_decoder_pretrained(model_args.encoder_model_name_or_path,
model_args.decoder_model_name_or_path,
config=encoder_decoder_config, tie_encoder_decoder=True)
model.config.decoder_start_token_id = tokenizer.bos_token_id
model.config.eos_token_id = tokenizer.eos_token_id
model.config.max_length = 64
model.config.early_stopping = True
model.config.no_repeat_ngram_size = 3
model.config.length_penalty = 2.0
model.config.num_beams = 4
model.config.pad_token_id = tokenizer.pad_token_id
```
This error occurred because `EncoderDecoderConfig` cannot be converted to JSON format, but I don't know how to fix it.
```python
ERROR OCCURRED:
if use_diff is True:
config_dict = self.to_diff_dict()
else:
config_dict = self.to_dict()
return json.dumps(config_dict, indent=2, sort_keys=True) + "\n"
```
I look forward to your help! Thanks!
@jplu @patrickvonplaten | 12-27-2021 05:56:19 | 12-27-2021 05:56:19 | Hey @Captainr22,
Thanks for reporting this issue here. I now know how you create the model. Could you also provide the code to run this model with the Trainer? A bash command is enough in case you are using one of the official examples :-)<|||||>Hi @patrickvonplaten ,
Here is my code to run my model. It looks very ordinary.
```bash
CUDA_VISIBLE_DEVICES=1,2 python roberta_for_seq2seq.py --output_dir=./seq2seq_output --train_dataset_name=../datasets/seq2seq_hotpotqa_train.csv --eval_dataset_name=../datasets/seq2seq_hotpotqa_eval.csv --do_train --do_eval --per_device_train_batch_size=4 --per_device_eval_batch_size=4 --evaluation_strategy=epoch --save_strategy=no --overwrite_output_dir --num_train_epochs=10 --gradient_accumulation_steps=2 --warmup_steps=5000
```
And here is my Trainer code.
```python
trainer = Seq2SeqHotpotQuestionAnsweringTrainer(
model=model,
args=training_args,
train_dataset=train_dataset if training_args.do_train else None,
eval_dataset=eval_dataset if training_args.do_eval else None,
eval_examples=eval_examples if training_args.do_eval else None,
tokenizer=tokenizer,
data_collator=data_collator,
post_process_function=post_processing_function,
compute_metrics=None,
)
class Seq2SeqHotpotQuestionAnsweringTrainer(Seq2SeqTrainer):
def __init__(self, *args, eval_examples=None, post_process_function=None, **kwargs):
super().__init__(*args, **kwargs)
self.eval_examples = eval_examples
self.post_process_function = post_process_function
def evaluate(self, eval_dataset=None, eval_examples=None, ignore_keys=None, metric_key_prefix: str = 'eval'):
pass
```
Besides that, I found that if I don't add the config when defining the model, the problem doesn't occur. Like this:
```python
model = RobertaForSeq2Seq.from_encoder_decoder_pretrained(model_args.encoder_model_name_or_path,
model_args.decoder_model_name_or_path,
tie_encoder_decoder=True)
```
I think this may be a bug in transformers.
The problem with this issue is probably the same as mine. #5459
<|||||>Hey @Captainr22,
I don't know what `roberta_for_seq2seq.py` is. Could you please provide all the code that is needed to reproduce the error.<|||||>Hi @patrickvonplaten ,
I will provide you with another script, which uses my cnn_dm dataset to train the EncoderDecoderModel. This code also triggers the above error. The roberta-decoder model is a RobertaForCausalLM with 6 Transformer layers. Thank you for your attention!
```python
import nltk
import numpy as np
import os
import logging
import datasets
import transformers
import sys
from typing import Optional
from dataclasses import dataclass, field
from datasets import load_dataset, load_metric
from transformers import HfArgumentParser,Seq2SeqTrainingArguments, set_seed, BartTokenizer, BartForConditionalGeneration,\
Seq2SeqTrainer, DataCollatorForSeq2Seq, RobertaTokenizerFast, EncoderDecoderModel, EncoderDecoderConfig, RobertaConfig
from transformers.trainer_utils import get_last_checkpoint
logger = logging.getLogger(__name__)
@dataclass
class ModelArguments:
model_name_or_path: str = field(
default="/home/jwli/models/facebook/bart-base",
metadata={"help": "Path to pretrained model or model identifier from huggingface.co/models"}
)
tokenizer_name: Optional[str] = field(
default="/home/jwli/models/facebook/bart-base",
metadata={"help": "Pretrained tokenizer name or path if not the same as model_name"}
)
@dataclass
class DataTrainingArguments:
dataset_name: Optional[str] = field(
default="xsum", metadata={"help": "The name of the dataset to use (via the datasets library)."}
)
max_source_length: Optional[int] = field(
default=1024,
metadata={
"help": "The maximum total input sequence length after tokenization. Sequences longer "
"than this will be truncated, sequences shorter will be padded."
},
)
max_target_length: Optional[int] = field(
default=256,
metadata={
"help": "The maximum total sequence length for target text after tokenization. Sequences longer "
"than this will be truncated, sequences shorter will be padded."
},
)
pad_to_max_length: bool = field(
default=False,
metadata={
"help": "Whether to pad all samples to model maximum sentence length. "
"If False, will pad the samples dynamically when batching to the maximum length in the batch. More "
"efficient on GPU but very bad for TPU."
},
)
val_max_target_length: Optional[int] = field(
default=256,
metadata={
"help": "The maximum total sequence length for validation target text after tokenization. Sequences longer "
"than this will be truncated, sequences shorter will be padded. Will default to `max_target_length`."
"This argument is also used to override the ``max_length`` param of ``model.generate``, which is used "
"during ``evaluate`` and ``predict``."
},
)
max_train_samples: Optional[int] = field(
default=None,
metadata={
"help": "For debugging purposes or quicker training, truncate the number of training examples to this "
"value if set."
},
)
max_eval_samples: Optional[int] = field(
default=None,
metadata={
"help": "For debugging purposes or quicker training, truncate the number of evaluation examples to this "
"value if set."
},
)
max_predict_samples: Optional[int] = field(
default=None,
metadata={
"help": "For debugging purposes or quicker training, truncate the number of prediction examples to this "
"value if set."
},
)
num_beams: Optional[int] = field(
default=4,
metadata={
"help": "Number of beams to use for evaluation. This argument will be passed to ``model.generate``, "
"which is used during ``evaluate`` and ``predict``."
},
)
ignore_pad_token_for_loss: bool = field(
default=True,
metadata={
"help": "Whether to ignore the tokens corresponding to padded labels in the loss computation or not."
},
)
if __name__ == '__main__':
parser = HfArgumentParser((ModelArguments, DataTrainingArguments, Seq2SeqTrainingArguments))
model_args, data_args, training_args = parser.parse_args_into_dataclasses()
last_checkpoint = None
if os.path.isdir(training_args.output_dir) and training_args.do_train and not training_args.overwrite_output_dir:
last_checkpoint = get_last_checkpoint(training_args.output_dir)
if last_checkpoint is None and len(os.listdir(training_args.output_dir)) > 0:
raise ValueError(
f"Output directory ({training_args.output_dir}) already exists and is not empty. "
"Use --overwrite_output_dir to overcome."
)
elif last_checkpoint is not None and training_args.resume_from_checkpoint is None:
logger.info(
f"Checkpoint detected, resuming training at {last_checkpoint}. To avoid this behavior, change "
"the `--output_dir` or add `--overwrite_output_dir` to train from scratch."
)
set_seed(training_args.seed)
dataset = load_dataset('csv', data_files={'train':"/home/jwli/ljw/study/use_trainer/xsum/cnn_data/train.csv",
'eval':"/home/jwli/ljw/study/use_trainer/xsum/cnn_data/valid.csv"})
# bart
# tokenizer = BartTokenizer.from_pretrained(model_args.tokenizer_name)
# model = BartForConditionalGeneration.from_pretrained(model_args.model_name_or_path)
# model.resize_token_embeddings(len(tokenizer))
# encoder decoder model
tokenizer = RobertaTokenizerFast.from_pretrained("/home/jwli/models/roberta-large")
encoder_config = RobertaConfig.from_pretrained("/home/jwli/models/roberta-large")
decoder_config = RobertaConfig.from_pretrained("/home/jwli/models/roberta-decoder")
encoder_decoder_config = EncoderDecoderConfig.from_encoder_decoder_configs(encoder_config, decoder_config)
model = EncoderDecoderModel.from_encoder_decoder_pretrained("/home/jwli/models/roberta-large",
"/home/jwli/models/roberta-decoder",
config=encoder_decoder_config,
tie_encoder_decoder=True)
model.config.decoder_start_token_id = tokenizer.bos_token_id
model.config.eos_token_id = tokenizer.eos_token_id
model.config.max_length = 64
model.config.early_stopping = True
model.config.no_repeat_ngram_size = 3
model.config.length_penalty = 2.0
model.config.num_beams = 4
model.config.pad_token_id = tokenizer.pad_token_id
padding = "max_length" if data_args.pad_to_max_length else False
def _preprocess_function(examples):
inputs = examples['document']
targets = examples['summary']
model_inputs = tokenizer(inputs, max_length=data_args.max_source_length, padding=padding, truncation=True)
model_targets = tokenizer(targets, max_length=data_args.max_target_length, padding=padding, truncation=True)
model_targets['input_ids'] = [
[(l if l != tokenizer.pad_token_id else -100) for l in model_target] for model_target in model_targets['input_ids']
]
model_inputs['labels'] = model_targets['input_ids']
return model_inputs
train_dataset, eval_dataset, predict_dataset = None, None, None
if training_args.do_train:
train_dataset = dataset['train']
if data_args.max_train_samples is not None:
train_dataset = train_dataset.select(range(data_args.max_train_samples))
with training_args.main_process_first(desc="train dataset map pre-processing"):
train_dataset = train_dataset.map(
_preprocess_function,  # pass in the preprocessing function
batched=True,
remove_columns=['document', 'summary'],  # after map, the original columns are no longer needed; keep only input_ids, attention_mask and labels
num_proc=16,
desc="Running Tokenizer on train dataset",
)
if training_args.do_eval:
eval_dataset = dataset['eval']
if data_args.max_eval_samples is not None:
eval_dataset = eval_dataset.select(range(data_args.max_eval_samples))
with training_args.main_process_first(desc="validation dataset map pre-processing"):
eval_dataset = eval_dataset.map(
_preprocess_function,
batched=True,
remove_columns=['document', 'summary'],
num_proc=16,
desc="Running Tokenizer on eval dataset",
)
if training_args.do_predict:
predict_dataset = dataset['test']
if data_args.max_predict_samples is not None:
predict_dataset = predict_dataset.select(range(data_args.max_predict_samples))
with training_args.main_process_first(desc="prediction dataset map pre-processing"):
predict_dataset = predict_dataset.map(
_preprocess_function,
batched=True,
remove_columns=['document', 'summary'],
num_proc=16,
desc="Running Tokenizer on predict dataset",
)
data_collator = DataCollatorForSeq2Seq(
tokenizer,
model=model,
label_pad_token_id=-100,
pad_to_multiple_of=8 if training_args.fp16 else None,
)
metric = load_metric("rouge")
def _postprocess_text(preds, labels):
preds = [pred.strip() for pred in preds]
labels = [label.strip() for label in labels]
# rougeLSum expects newline after each sentence
preds = ["\n".join(nltk.sent_tokenize(pred)) for pred in preds]
labels = ["\n".join(nltk.sent_tokenize(label)) for label in labels]
return preds, labels
def _computer_metric(eval_preds):
preds, labels = eval_preds
if isinstance(preds, tuple):
preds = preds[0]
decoded_preds = tokenizer.batch_decode(preds, skip_special_tokens=True)
if data_args.ignore_pad_token_for_loss:
# Replace -100 in the labels as we can't decode them.
labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)
# Some simple post-processing
decoded_preds, decoded_labels = _postprocess_text(decoded_preds, decoded_labels)
result = metric.compute(predictions=decoded_preds, references=decoded_labels, use_stemmer=True)
# Extract a few results from ROUGE
result = {key: value.mid.fmeasure * 100 for key, value in result.items()}
prediction_lens = [np.count_nonzero(pred != tokenizer.pad_token_id) for pred in preds]
result["gen_len"] = np.mean(prediction_lens)
result = {k: round(v, 4) for k, v in result.items()}
return result
trainer = Seq2SeqTrainer(
model=model,
args=training_args,
train_dataset=train_dataset if training_args.do_train else None,
eval_dataset=eval_dataset if training_args.do_eval else None,
tokenizer=tokenizer,
data_collator=data_collator,
compute_metrics=_computer_metric if training_args.predict_with_generate else None,
)
if training_args.do_train:
checkpoint = None
if training_args.resume_from_checkpoint is not None:
checkpoint = training_args.resume_from_checkpoint
elif last_checkpoint is not None:
checkpoint = last_checkpoint
train_result = trainer.train(resume_from_checkpoint=checkpoint)
metrics = train_result.metrics
max_train_samples = (
data_args.max_train_samples if data_args.max_train_samples is not None else len(train_dataset)
)
metrics["train_samples"] = min(max_train_samples, len(train_dataset))
trainer.log_metrics("train", metrics)
trainer.save_metrics("train", metrics)
trainer.save_state()
results = {}
max_length = (
training_args.generation_max_length
if training_args.generation_max_length is not None
else data_args.val_max_target_length
)
num_beams = data_args.num_beams if data_args.num_beams is not None else training_args.generation_num_beams
```<|||||>Ok I think I can finally reproduce the error :-)
This doesn't work:
```python
#!/usr/bin/env python3
from transformers import RobertaConfig, EncoderDecoderConfig, EncoderDecoderModel
model_id = "roberta-base"
encoder_config = RobertaConfig.from_pretrained(model_id)
decoder_config = RobertaConfig.from_pretrained(model_id)
encoder_decoder_config = EncoderDecoderConfig.from_encoder_decoder_configs(encoder_config, decoder_config)
model = EncoderDecoderModel.from_encoder_decoder_pretrained(model_id, model_id, config=encoder_decoder_config, tie_encoder_decoder=True)
model.config.decoder_start_token_id = 0
model.config.eos_token_id = 0
model.config.max_length = 64
model.config.early_stopping = True
model.config.no_repeat_ngram_size = 3
model.config.length_penalty = 2.0
model.config.num_beams = 4
model.config.pad_token_id = 0
model.save_pretrained("./")
```
since it fails with:
```bash
File "/usr/lib/python3.8/json/encoder.py", line 201, in encode
chunks = list(chunks)
File "/usr/lib/python3.8/json/encoder.py", line 431, in _iterencode
yield from _iterencode_dict(o, _current_indent_level)
File "/usr/lib/python3.8/json/encoder.py", line 405, in _iterencode_dict
yield from chunks
File "/usr/lib/python3.8/json/encoder.py", line 438, in _iterencode
o = _default(o)
File "/usr/lib/python3.8/json/encoder.py", line 179, in default
raise TypeError(f'Object of type {o.__class__.__name__} '
TypeError: Object of type EncoderDecoderConfig is not JSON serializable
```
no?<|||||>The problem in your code is that you pass an `EncoderDecoderConfig` into the `from_encoder_decoder_pretrained(...)` method, which is not allowed. You should instead pass an `encoder_config` and a `decoder_config` argument as follows:
```python
#!/usr/bin/env python3
from transformers import RobertaConfig, EncoderDecoderConfig, EncoderDecoderModel
model_id = "roberta-base"
encoder_config = RobertaConfig.from_pretrained(model_id)
decoder_config = RobertaConfig.from_pretrained(model_id)
model = EncoderDecoderModel.from_encoder_decoder_pretrained(model_id, model_id, encoder_config=encoder_config, decoder_config=decoder_config, tie_encoder_decoder=True)
model.config.decoder_start_token_id = 0
model.config.eos_token_id = 0
model.config.max_length = 64
model.config.early_stopping = True
model.config.no_repeat_ngram_size = 3
model.config.length_penalty = 2.0
model.config.num_beams = 4
model.config.pad_token_id = 0
model.save_pretrained("./")
```<|||||>👍 I will try your suggestion.
I am sorry to have taken you so long because of my problems. :-(
Thank you very much!<|||||>No worries! Hope it works now for your use case |
transformers | 14,937 | closed | Cannot instantiate model under dopamine | ## Environment info
- `transformers` version: 4.13.0.dev0
- Platform: Linux-5.11.0-43-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyTorch version (GPU?): not installed (NA)
- Tensorflow version (GPU?): 2.7.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: I don't specify?..
### Who can help
@patrickvonplaten , @Rocketknight1
## Information
Model I am using (Bert, XLNet ...): T5
The problem arises when using:
* [x] my own modified scripts: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Clone https://github.com/kovkev/dopamine
2. Setup dopamine dependencies
3. python3.9 mytest.py
If I instantiate the model at other places in the script, it's fine. However, if I instantiate the model at that specific location in dopamine/discrete_domains/run_experiment.py, I get an error:
```
/usr/lib/python3.9/site-packages/ale_py/roms/__init__.py:94: DeprecationWarning: Automatic importing of atari-py roms won't be supported in future releases of ale-py. Please migrate over to using `ale-import-roms` OR an ALE-supported ROM package. To make this warning disappear you can run `ale-import-roms --import-from-pkg atari_py.atari_roms`.For more information see: https://github.com/mgbellemare/Arcade-Learning-Environment#rom-management
_RESOLVED_ROMS = _resolve_roms()
2021-12-27 04:44:31.214754: W tensorflow/python/util/util.cc:368] Sets are not currently considered sequences, but this may change in the future, so consider avoiding using them.
All model checkpoint layers were used when initializing TFT5ForConditionalGeneration.
All the layers of TFT5ForConditionalGeneration were initialized from the model checkpoint at t5-small.
If your task is similar to the task the model of the checkpoint was trained on, you can already use TFT5ForConditionalGeneration for predictions without further training.
>>>done0
INFO:absl:Creating TrainRunner ...
WARNING:tensorflow:From /usr/lib/python3.9/site-packages/tensorflow/python/compat/v2_compat.py:111: disable_resource_variables (from tensorflow.python.ops.variable_scope) is deprecated and will be removed in a future version.
Instructions for updating:
non-resource variables are not supported in the long term
WARNING:tensorflow:From /usr/lib/python3.9/site-packages/tensorflow/python/compat/v2_compat.py:111: disable_resource_variables (from tensorflow.python.ops.variable_scope) is deprecated and will be removed in a future version.
Instructions for updating:
non-resource variables are not supported in the long term
A.L.E: Arcade Learning Environment (version +978d2ce)
[Powered by Stella]
Traceback (most recent call last):
File "/home/project/dopamine/mytest.py", line 55, in <module>
dqn_runner = run_experiment.create_runner(DQN_PATH, schedule='continuous_train')
File "/usr/lib/python3.9/site-packages/gin/config.py", line 1605, in gin_wrapper
utils.augment_exception_message_and_reraise(e, err_str)
File "/usr/lib/python3.9/site-packages/gin/utils.py", line 41, in augment_exception_message_and_reraise
raise proxy.with_traceback(exception.__traceback__) from None
File "/usr/lib/python3.9/site-packages/gin/config.py", line 1582, in gin_wrapper
return fn(*new_args, **new_kwargs)
File "/home/project/dopamine/dopamine/discrete_domains/run_experiment.py", line 145, in create_runner
return TrainRunner(base_dir, create_agent)
File "/usr/lib/python3.9/site-packages/gin/config.py", line 1605, in gin_wrapper
utils.augment_exception_message_and_reraise(e, err_str)
File "/usr/lib/python3.9/site-packages/gin/utils.py", line 41, in augment_exception_message_and_reraise
raise proxy.with_traceback(exception.__traceback__) from None
File "/usr/lib/python3.9/site-packages/gin/config.py", line 1582, in gin_wrapper
return fn(*new_args, **new_kwargs)
File "/home/project/dopamine/dopamine/discrete_domains/run_experiment.py", line 562, in __init__
super(TrainRunner, self).__init__(base_dir, create_agent_fn,
File "/usr/lib/python3.9/site-packages/gin/config.py", line 1605, in gin_wrapper
utils.augment_exception_message_and_reraise(e, err_str)
File "/usr/lib/python3.9/site-packages/gin/utils.py", line 41, in augment_exception_message_and_reraise
raise proxy.with_traceback(exception.__traceback__) from None
File "/usr/lib/python3.9/site-packages/gin/config.py", line 1582, in gin_wrapper
return fn(*new_args, **new_kwargs)
File "/home/project/dopamine/dopamine/discrete_domains/run_experiment.py", line 230, in __init__
self._agent = create_agent_fn(self._sess, self._environment,
File "/usr/lib/python3.9/site-packages/gin/config.py", line 1605, in gin_wrapper
utils.augment_exception_message_and_reraise(e, err_str)
File "/usr/lib/python3.9/site-packages/gin/utils.py", line 41, in augment_exception_message_and_reraise
raise proxy.with_traceback(exception.__traceback__) from None
File "/usr/lib/python3.9/site-packages/gin/config.py", line 1582, in gin_wrapper
return fn(*new_args, **new_kwargs)
File "/home/project/dopamine/dopamine/discrete_domains/run_experiment.py", line 117, in create_agent
another_model = TFAutoModelForSeq2SeqLM.from_pretrained("t5-small")
File "/usr/lib/python3.9/site-packages/transformers/models/auto/auto_factory.py", line 441, in from_pretrained
return model_class.from_pretrained(pretrained_model_name_or_path, *model_args, config=config, **kwargs)
File "/usr/lib/python3.9/site-packages/transformers/modeling_tf_utils.py", line 1595, in from_pretrained
model(model.dummy_inputs) # build the network with dummy inputs
File "/usr/lib/python3.9/site-packages/keras/engine/base_layer_v1.py", line 765, in __call__
outputs = call_fn(cast_inputs, *args, **kwargs)
File "/usr/lib/python3.9/site-packages/tensorflow/python/autograph/impl/api.py", line 699, in wrapper
raise e.ag_error_metadata.to_exception(e)
ValueError: in user code:
File "/usr/lib/python3.9/site-packages/transformers/models/t5/modeling_tf_t5.py", line 1422, in call *
inputs["encoder_outputs"] = self.encoder(
File "/usr/lib/python3.9/site-packages/transformers/models/t5/modeling_tf_t5.py", line 688, in call *
inputs["inputs_embeds"] = self.embed_tokens(inputs["input_ids"])
File "/usr/lib/python3.9/site-packages/transformers/modeling_tf_utils.py", line 2003, in __call__ *
return self._layer(inputs, mode)
File "/usr/lib/python3.9/site-packages/keras/engine/base_layer_v1.py", line 745, in __call__ **
self._maybe_build(inputs)
File "/usr/lib/python3.9/site-packages/keras/engine/base_layer_v1.py", line 2074, in _maybe_build
self.build(input_shapes)
File "/usr/lib/python3.9/site-packages/transformers/modeling_tf_utils.py", line 1760, in build
self.weight = self.add_weight(
File "/usr/lib/python3.9/site-packages/keras/engine/base_layer_v1.py", line 423, in add_weight
variable = self._add_variable_with_custom_getter(
File "/usr/lib/python3.9/site-packages/keras/engine/base_layer_utils.py", line 117, in make_variable
return tf.compat.v1.Variable(
File "/usr/lib/python3.9/site-packages/keras/initializers/initializers_v2.py", line 416, in __call__
dtype = _assert_float_dtype(_get_dtype(dtype))
File "/usr/lib/python3.9/site-packages/keras/initializers/initializers_v2.py", line 969, in _assert_float_dtype
raise ValueError(f'Expected floating point type, got {dtype}.')
ValueError: Expected floating point type, got <dtype: 'int32'>.
In call to configurable 'create_agent' (<function create_agent at 0x7fbe9c160430>)
In call to configurable 'Runner' (<class 'dopamine.discrete_domains.run_experiment.Runner'>)
In call to configurable 'TrainRunner' (<class 'dopamine.discrete_domains.run_experiment.TrainRunner'>)
In call to configurable 'create_runner' (<function create_runner at 0x7fbe9c160af0>)
```
## Expected behavior
The model loads | 12-27-2021 03:46:05 | 12-27-2021 03:46:05 | Hey @kovkev,
Could you please provide an easily reproducible code snippet? I don't really know what `mytest.py` is<|||||>Do clone github.com/kovkev/dopamine , where I patch dopamine with my code snippet<|||||>Hi @patrickvonplaten , quick Github question - do you get notified if I make a comment that does not @ you?<|||||>Hey @kovkev,
Could you please take a look at how to post issues here: https://github.com/huggingface/transformers/blob/master/ISSUES.md
Please note that we are getting hundreds of notifications every day and cannot spend a lot of time on issues where we don't know how to reproduce the error or that includes libraries that are not maintained by us.<|||||>@patrickvonplaten I have made it easier to understand the situation. I created a colab - https://github.com/kovkev/dopamine/blob/master/mynotebook.ipynb . Note that my instantiation of the transformer is at dopmaine/dopamine/discrete_domains/run_experiment.py line 115-117<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 14,936 | closed | A warning is raised when using DistributedDataParallel of PyTorch | When I train a huggingface-transformers model on multiple cards and multiple machines using DistributedDataParallel (DDP) in PyTorch, the following warning is output at every epoch:
```
huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...
To disable this warning, you can either:
- Avoid using `tokenizers` before the fork if possible
- Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)
```
Does this warning degrade the performance of parallel training? How can I fix it? Thanks | 12-27-2021 03:21:29 | 12-27-2021 03:21:29 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
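For readers hitting the same warning: a minimal sketch of the environment-variable option the warning itself suggests. Setting it to `false` disables the Rust tokenizer's internal parallelism, which is typically an acceptable trade-off when DataLoader workers already parallelize preprocessing:
```python
import os

# Set this before the tokenizer is used (e.g. at the very top of the training
# script), so it takes effect before any worker processes are forked.
os.environ["TOKENIZERS_PARALLELISM"] = "false"
```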
transformers | 14,935 | closed | [performance doc] Power and Cooling | This PR supplies performance doc additions:
- Power and Cooling
- Grad accumulation
- links to various benchmarks I recently created
- a direct link to the scalabilty doc
@LysandreJik, @sgugger
| 12-27-2021 03:00:05 | 12-27-2021 03:00:05 | |
transformers | 14,934 | closed | [benchmark tool] trainer-benchmark.py | This PR adds a benchmarking tool for HF Trainer args, e.g. to compare --fp16 vs. --bf16 performance. It can do that across multiple dimensions, and it prints tables suitable for pasting straight into Issues, including relative performance. For example:
| Variation | Train<br>samples<br>per<br>second | Diff<br>% | Train<br>loss |
|:---------------------------------|------------------------------------:|------------:|----------------:|
| --per_device_train_batch_size 1 | 7.77 | 0 | 1.90 |
| --per_device_train_batch_size 2 | 15.51 | 100 | 2.01 |
| --per_device_train_batch_size 4 | 29.66 | 282 | 2.09 |
| --per_device_train_batch_size 8 | 61.16 | 687 | 2.16 |
| --per_device_train_batch_size 16 | 115.84 | 1392 | 2.25 |
| --per_device_train_batch_size 32 | 224.96 | 2797 | 2.38 |
This is produced by:
```
CUDA_VISIBLE_DEVICES=0 python ./scripts/benchmark/trainer-benchmark.py \
--base-cmd \
' examples/pytorch/translation/run_translation.py --model_name_or_path t5-base \
--output_dir output_dir --do_train --label_smoothing 0.1 --logging_strategy no \
--save_strategy no --max_source_length 512 \
--max_target_length 512 --num_train_epochs 1 --overwrite_output_dir \
--source_lang en --target_lang ro --dataset_name wmt16 --dataset_config "ro-en" \
--source_prefix "translate English to Romanian: " --warmup_steps 50 \
--max_train_samples 5000 --dataloader_num_workers 2 --bf16' \
--target-metric-key train_samples_per_second --repeat-times 1 --variations \
'--per_device_train_batch_size 1|--per_device_train_batch_size 2|--per_device_train_batch_size 4|--per_device_train_batch_size 8|--per_device_train_batch_size 16|--per_device_train_batch_size 32' \
--report-metric-keys train_loss --repeat-times 1
```
To add more dimensions simply add another `--variations` arg, e.g.:
```
--variations '|--fp16|--bf16' '--tf32 0|--tf32 1'
```
will lead to a Cartesian product of each arg with an outcome of:
| Variation | Train<br>samples<br>per<br>second | Diff<br>% | Train<br>loss |
|:----------------|------------------------------------:|------------:|----------------:|
| --tf32 0 | 272.59 | 0 | 2.49 |
| --tf32 1 | 581.61 | 113 | 2.49 |
| --fp16 --tf32 0 | 643.07 | 136 | 2.49 |
| --fp16 --tf32 1 | 635.24 | 133 | 2.49 |
| --bf16 --tf32 0 | 616.23 | 126 | 2.50 |
| --bf16 --tf32 1 | 612.59 | 125 | 2.50 |
See the doc at the beginning of the script for details. It has lots of handy features, like automatically reporting the hardware/software setup, preformatting everything for copy-and-paste into Issues/docs, and an easy-to-read console-only second version at the end for when you debug things.
And more practical examples and reports for its use are here: https://github.com/huggingface/transformers/issues/15026 and https://github.com/huggingface/transformers/issues/14608
It should be relatively easy to adapt this tool for use with `accelerate` or with any other command-line tool, as long as there is a defined way to get at the results; each tool would need its own sub-class if we decide to extend it. I think @siddk suggested he might look into extending it.
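As a purely hypothetical sketch of what such a sub-class could look like (all names below are invented for illustration and do not come from `trainer-benchmark.py`):
```python
import json
import os


# Hypothetical extension point -- not the script's actual structure.
class CommandBenchmark:
    def build_cmd(self, base_cmd: str, variation: str) -> str:
        raise NotImplementedError

    def read_metric(self, output_dir: str, key: str) -> float:
        raise NotImplementedError


class AccelerateBenchmark(CommandBenchmark):
    def build_cmd(self, base_cmd, variation):
        # Launch through accelerate instead of plain python.
        return f"accelerate launch {base_cmd} {variation}"

    def read_metric(self, output_dir, key):
        # The HF example scripts write their metrics to all_results.json.
        with open(os.path.join(output_dir, "all_results.json")) as f:
            return float(json.load(f)[key])
```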
But let's first see whether you like it and whether I placed it in a good location. The intention is for it to be an internal tool, so I hope any `map` and similar code will be tolerated.
It's possible that we could use it for some basic regression testing. But it'd need to be further extended to support storing results and detecting regressions.
@LysandreJik, @patrickvonplaten, @sgugger, @patil-suraj | 12-27-2021 01:35:59 | 12-27-2021 01:35:59 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 14,933 | closed | Add parameters to make custom backbone for detr | # What does this PR do?
Added a few parameters to make it possible to create custom backbone models for different use cases (partially explained in the mentioned issue).
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Closes #14875
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
Maybe @LysandreJik can help with review? | 12-26-2021 14:54:53 | 12-26-2021 14:54:53 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 14,932 | closed | Flax wav2vec2 pretrain | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 12-26-2021 13:48:45 | 12-26-2021 13:48:45 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 14,931 | closed | AutoTokenizer hash value got change after datasets.map | ## Environment info
- `transformers` version: 4.15.0
- Platform: Linux-5.4.0-91-generic-x86_64-with-glibc2.27
- Python version: 3.9.7
- PyTorch version (GPU?): 1.10.1+cu113 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@LysandreJik @lhoestq
## Information
Model I am using (Bert, XLNet ...):
The problem arises when using:
* [x] my own modified scripts
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: mrpc
## To reproduce
Steps to reproduce the behavior:
1. trash huggingface datasets cache
2. run the following code:
```python
from transformers import AutoTokenizer, BertTokenizer
from datasets import load_dataset
from datasets.fingerprint import Hasher
tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased')
def tokenize_function(example):
return tokenizer(example["sentence1"], example["sentence2"], truncation=True)
raw_datasets = load_dataset("glue", "mrpc")
print(Hasher.hash(tokenize_function))
print(Hasher.hash(tokenizer))
tokenized_datasets = raw_datasets.map(tokenize_function, batched=True)
print(Hasher.hash(tokenize_function))
print(Hasher.hash(tokenizer))
```
got:
```
Reusing dataset glue (/home1/wts/.cache/huggingface/datasets/glue/mrpc/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad)
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 1112.35it/s]
f4976bb4694ebc51
3fca35a1fd4a1251
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 4/4 [00:00<00:00, 6.96ba/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 15.25ba/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 5.81ba/s]
d32837619b7d7d01
5fd925c82edd62b6
```
3. run `raw_datasets.map(tokenize_function, batched=True)` again and observe that some of the datasets are not using the cache.
## Expected behavior
`AutoTokenizer` should work like the specific tokenizer class (the hash value doesn't change after `map`):
```python
from transformers import AutoTokenizer, BertTokenizer
from datasets import load_dataset
from datasets.fingerprint import Hasher
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
def tokenize_function(example):
return tokenizer(example["sentence1"], example["sentence2"], truncation=True)
raw_datasets = load_dataset("glue", "mrpc")
print(Hasher.hash(tokenize_function))
print(Hasher.hash(tokenizer))
tokenized_datasets = raw_datasets.map(tokenize_function, batched=True)
print(Hasher.hash(tokenize_function))
print(Hasher.hash(tokenizer))
```
got:
```
Reusing dataset glue (/home1/wts/.cache/huggingface/datasets/glue/mrpc/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad)
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 1091.22it/s]
46d4b31f54153fc7
5b8771afd8d43888
Loading cached processed dataset at /home1/wts/.cache/huggingface/datasets/glue/mrpc/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad/cache-6b07ff82ae9d5c51.arrow
Loading cached processed dataset at /home1/wts/.cache/huggingface/datasets/glue/mrpc/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad/cache-af738a6d84f3864b.arrow
Loading cached processed dataset at /home1/wts/.cache/huggingface/datasets/glue/mrpc/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad/cache-531d2a603ba713c1.arrow
46d4b31f54153fc7
5b8771afd8d43888
```
| 12-26-2021 11:48:42 | 12-26-2021 11:48:42 | It seems like this issue also occur with other AutoClass like `AutoFeatureExtractor`.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>@lhoestq Hi, can you look at this issue? I don't know whether I should report it in datasets or transformers.<|||||>Hi ! This should be handled by `datasets` IMO - feel free to create an issue in the dataset github repository: https://github.com/huggingface/datasets
I tried running the code above but the hash didn't change, I wasn't able to reproduce the issue<|||||>@lhoestq Hi, I reported this on datasets https://github.com/huggingface/datasets/issues/3638<|||||>Hi @tshu-w
(For other readers, interesting comment is here: https://github.com/huggingface/datasets/issues/3638#issuecomment-1023280361 .)
It seems your example falls into this category
> We could try and set these 2 dicts at initialization time, but it wouldn't work if a user modified the tokenizer state later
The call is this:
> `tokenizer(example["sentence1"], example["sentence2"], truncation=True)`
This by definition, will modify the tokenizer underlying state since it has to modify the TruncationParams to set it to True.
The only way you can actually fix it is to call it once before calling the map function, like so:
```python
from transformers import AutoTokenizer, BertTokenizer
from datasets import load_dataset
from datasets.fingerprint import Hasher
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
# ADD THIS: one warm-up call with dummy strings (`example` is not defined at this point in the script)
tokenizer("dummy sentence 1", "dummy sentence 2", truncation=True)
def tokenize_function(example):
return tokenizer(example["sentence1"], example["sentence2"], truncation=True)
# ... rest of the script should work, and hashes be the same.
```
Sorry I missed that reading the first issue. I had thought that this was triggered by some default configuration of the tokenizer (that wasn't properly set at initialization time) but this isn't the case.
---------------------------------
@lhoestq tagging you too, since looking into this issue, I realized that `hash(tokenizer)` kept consistent, while `Hasher.hash(..)` wasn't. Maybe something can be done ? Or taking the hash function after 1 iteration ?
Actually the only state maintained by `transformers` FastTokenizer themselves are `padding_side` and `truncation_side`, no other arguments are ever kept within the class itself (meaning we cannot create the proper state before the call for them)
And a last option would be to make `tokenizers` Tokenizer themselves become stateless. Optimization-wise I don't know if the hit of passing the same arguments over and over will be significant or not (it probably needs to be checked though). But it's also a pretty big change I think.<|||||>Thanks for the ideas @Narsil :)
`hash` is not well-suited for caching unfortunately: it may not return the same hash in two different sessions. In the same session, two identical objects might not even have the same hash (try calling `hash(lambda x: x)` several times for example). This would lead to lots of cache misses.
Taking the hash after the first iteration can do the job, but the downside is that it would require users to wait for the first batch to be processed before checking the cache, which can be confusing IMO.
Which `TruncationParams` are you talking about exactly ? Would it make sense to make the `datasets` Hasher ignore this ?<|||||>Summarizing a discussion that happened orally:
The best course of action right now is to try and modify `__setstate__` , `__getstate__` of the FastTokenizers, to override the default pickle behavior.
- That's the standard way to override things in `datasets`.
- We can do that because `transformers` `FastTokenizer`s do not hold any state themselves (except `{padding|truncation}_side`), so the actual things that are going to cause a cache miss on `datasets` are going to be either:
- Modifying the map function code
- Modifying any of the associated values to that code
- The state of the `tokenizers` (specifically `_tokenizer.truncation` and `_tokenizer.padding`) will **NOT**.
This is a pretty big change, but seems currently like the best course of action.
Since we're touching something relatively core we need to be extra careful:
- Making sure we can always unpickle already pickled tokenizers, and make sure it stays that way in the future
- Making sure the cache hit/miss of `datasets` is kept working through time
- Making sure `_tokenizer` pickling/unpickling can still happen correctly (this one still has state that has to be maintained).
- Making sure both pickling behaviors don't interact in a nasty fashion.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Unstale.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Unstale. General Ping @Narsil. Is there anything I can do.<|||||># TL;DR
Calling the function once on a dummy example beforehand will fix it.
```python
tokenizer("Some", "test", truncation=True)
```
# Long answer
If I remember the last status, it's hard doing anything, since the call itself
```python
tokenizer(example["sentence1"], example["sentence2"], truncation=True)
```
will modify the tokenizer. It's the `truncation=True` that modifies the tokenizer to put it into truncation mode if you will.
Calling the tokenizer once with that argument would fix the cache.
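To make the change visible with the same `Hasher` used in the report above, a quick sketch (nothing here is specific to `map`, the first truncating call is enough):

```python
from transformers import AutoTokenizer
from datasets.fingerprint import Hasher

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
print(Hasher.hash(tok))                  # hash A
tok("a dummy", "pair", truncation=True)  # first call sets _tokenizer.truncation
print(Hasher.hash(tok))                  # hash B, different from A
tok("another", "pair", truncation=True)  # further identical calls don't change it again
print(Hasher.hash(tok))                  # still hash B
```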
Finding a fix that :
- Doesn't imply a huge chunk of work on `tokenizers` (with potential loss of performance, and breaking backward compatibility)
- Doesn't imply `datasets` running a first pass of the loop
- Doesn't imply `datasets` looking at the map function itself
- Uses a sound `hash` for this object in `datasets`.
is IIRC impossible for this use case.
I can explain a bit more why the first option is not desirable.
In order to "fix" this for tokenizers, we would need to make `tokenizer(..)` purely without side effects. This means that the "options" of tokenization (like `truncation` and `padding` at least) would have to be sent every single time to make the function "pure". But it also means that we would need to send every single time a bunch of options from Python to Rust, and that boundary is not zero-cost. The cost hopefully would be minimal, but it could prove to be high (Python GIL is a tricky beast).
The other thing, is that it would force `tokenizers` library to behave differently for a `datasets` specific use-case which is less than ideal.
For the datasets specific solution I am not 100% sure I can explain them properly.<|||||>@Narsil Thank you for your detailed explanation. @lhoestq Can you take a look if there is some specific solution on the Datasets side?<|||||>Yes I think we can have a workaround: you have to reload your tokenizer before your second `map` call.
Note that we still need to fix this issue in `datasets` first: https://github.com/huggingface/datasets/issues/3847<|||||>Good to know. @Narsil So I think this issue can close now. 😄<|||||>I'll follow the discussion over there then.<|||||>Btw @Narsil what's the attribute of the tokenizer we would need to ignore to have a hash of the tokenizer that doesn't depend on the state ? We could implement a custom pickling on the `datasets` side only<|||||>the python logic is there: https://github.com/huggingface/transformers/blob/main/src/transformers/tokenization_utils_fast.py#L354
`tokenizer._tokenizer.{truncation,padding}` it seems.
I don't think there are others. However this might affect `tokenizer._tokenizer` global hash too.
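For illustration, one rough way to compute a hash that ignores those two attributes (a sketch only, not the actual `datasets` implementation; it assumes the fast tokenizer can be deep-copied, which works since it is picklable):

```python
import copy
from datasets.fingerprint import Hasher

def stable_tokenizer_hash(tokenizer):
    # work on a copy so the caller's tokenizer keeps its current truncation/padding setup
    clone = copy.deepcopy(tokenizer)
    clone._tokenizer.no_truncation()  # clears _tokenizer.truncation
    clone._tokenizer.no_padding()     # clears _tokenizer.padding
    return Hasher.hash(clone)
```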
|
transformers | 14,930 | closed | fix to issue #14833 in data_collator - consider no labels | # What does this PR do?
Fixing `DataCollatorForSeq2Seq.__call__` so that it does not prepare `decoder_input_ids` when `labels` is None.
Fixes # (issue)
#14833
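For reference, a sketch of the shape of the guard (illustrative helper, not necessarily the exact merged code; `labels` stands for the value gathered from the incoming features, which is `None` when they carry no labels):

```python
from typing import Any, Optional

def maybe_add_decoder_input_ids(features: dict, labels: Optional[Any], model) -> dict:
    # only build decoder_input_ids when the batch actually carries labels
    if (
        labels is not None
        and model is not None
        and hasattr(model, "prepare_decoder_input_ids_from_labels")
    ):
        features["decoder_input_ids"] = model.prepare_decoder_input_ids_from_labels(
            labels=features["labels"]
        )
    return features
```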
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@patil-suraj
| 12-26-2021 10:44:42 | 12-26-2021 10:44:42 | |
transformers | 14,929 | closed | VQA model inferences | There are a bunch of models fine tuned for VQA and NLVR tasks based on LXMERT, ViLT and Visual BERT, CLIP vision BERT and so on. Is there a resource to see how to run inferences on them ? (and possibly benchmark them ) | 12-26-2021 02:25:39 | 12-26-2021 02:25:39 | I tried the LXMERT and VisualBERT demos from [here](https://github.com/huggingface/transformers/blob/master/examples/research_projects/lxmert/demo.ipynb) and [here](https://github.com/huggingface/transformers/blob/master/examples/research_projects/visual_bert/demo.ipynb)
There seem to be dependency issues and a mismatch with the tokenizer version in the requirements file. Tried fixing it with no luck.
Question: do we need FRCNN models for LXMERT and VisualBERT, as both use RoIs as a supervision signal along with the text encoder?
Does that mean ViLT-based VQA won't need FRCNN, since it doesn't use RoI-based supervision?
Please clarify <|||||>Hi,
I'm working on adding the ViLT model, see #14895. It will be much easier to use compared to LXMERT and VisualBERT. It indeed doesn't require an external model like Faster R-CNN. Instead, it just turns an image into a sequence of patches, which are fed to the model (similar to ViT). <|||||>Awesome thanks ! @patil-suraj was mentioning you are working on this. Closing this for now.
Looking forward to it ! |
transformers | 14,928 | closed | [WIP] Fast tokenizer for debertaV2 | # What does this PR do?
Implements a fast tokenizer for deberta v2. Loosely based on #11387
Fixes #11529
Fixes #14712
This is a draft as there are some failing tests (not super clear to me why atm, will have to investigate)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@LysandreJik
| 12-26-2021 00:56:42 | 12-26-2021 00:56:42 | I noticed that while I was working on my PR, another was submitted for the same purpose: #14923. <|||||>Hey @alcinos ,
thanks for adding it!
I'm currently running comparisons between slow and fast tokenizer. Here are some mismatches between fast and slow.
I just run tokenization tests on `README.md` and `README_zh-hans.md` from official Transformers library, using this script:
```python
import sys
from transformers import DebertaV2Tokenizer, DebertaV2TokenizerFast
model_name = "microsoft/deberta-v2-xlarge"
slow_tokenizer = DebertaV2Tokenizer.from_pretrained(model_name)
fast_tokenizer = DebertaV2TokenizerFast.from_pretrained(model_name)
filename = sys.argv[1]
with open(filename, "rt") as f_p:
for line in f_p:
line = line.rstrip()
if not line:
continue
slow_tokens = slow_tokenizer.tokenize(line)
fast_tokens = fast_tokenizer.tokenize(line)
if slow_tokens != fast_tokens:
print("Tokenization mismatch:", line)
print("Slow tokens:", slow_tokens)
print("Fast tokens:", fast_tokens)
```
Here are some mismatches:
Original input: `* 🖼️ Images, for tasks like image classification, object detection, and segmentation.`
Slow tokens: `['▁*', '▁', '[UNK]', '️', '▁Images', ',', '▁for', '▁tasks', '▁like', '▁image', '▁classification', ',', '▁object', '▁detection', ',', '▁and', '▁segmentation', '.']`
Fast tokens: `['▁*', '▁', '🖼', '️', '▁Images', ',', '▁for', '▁tasks', '▁like', '▁image', '▁classification', ',', '▁object', '▁detection', ',', '▁and', '▁segmentation', '.']`
Another example on `README_zh-hans.md`:
Original input: `- 对教学和实践友好且低门槛`
Slow tokens: `['▁-', '▁', '对', '教', '学', '和', '实', '践', '友', '好', '且', '低', '门', '[UNK]']`
Fast tokens: `['▁-', '▁', '对', '教', '学', '和', '实', '践', '友', '好', '且', '低', '门', '槛']`
The original DeBERTa tokenizer outputs the same tokens as the slow tokenizer.<|||||>@stefan-it Thanks for looking into this and providing the testcases.
It seems that the issues you are reporting are all related to unknown tokens? I don’t know the rust implementation well enough, is there any reason why the fast tokenizer would not respect the vocabulary?<|||||>Hey @alcinos I'm currently trying to figure it out :)<|||||>Good news: when using `encode`, there's no mismatch between slow and fast tokenizer.
For slow tokenizer, this is happening here:
https://github.com/huggingface/transformers/blob/501307b58bdc2db1b6a25271a3f60975130f1c6c/src/transformers/models/deberta_v2/tokenization_deberta_v2.py#L326-L330
Also when using normal T5 (Slow and fast) there are no UNKs when using the `tokenize` function (but `encode` shows that those subtokens are UNKs) so this is DeBERTa-specific.<|||||>@alcinos I think the issue has something regarding the tokenize function inherited from `PreTrainedTokenizerFast`
For this line: `* 🖼️ Images, for tasks like image classification, object detection, and segmentation.`
the `tokenize` function called`encode_plus` returns a Dict[List] converted from BatchEncoding. For both slow and fast tokenizers `encode_plus` return ```{'input_ids': [943, 250, 3, 28596, 7654, 6, 14, 2930, 72, 812, 8692, 6, 2328, 5563, 6, 7, 27235], 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]}```
https://github.com/huggingface/transformers/blob/501307b58bdc2db1b6a25271a3f60975130f1c6c/src/transformers/tokenization_utils_fast.py#L316-L317
but the .tokens() at the end of the line doesn't return the [UNK] token but rather the token itself which causes the discrepancy here.
May not be entirely correct - but I was able to fix the discrepancy in the testing script @stefan-it provided by overriding the tokenize function and adding this method in Debertav2TokenizerFast class
```
def tokenize(self, text: str, pair: Optional[str] = None, add_special_tokens: bool = False, **kwargs) -> List[str]:
enc = self.encode_plus(text=text, text_pair=pair, add_special_tokens=add_special_tokens, **kwargs)
return self.convert_ids_to_tokens(enc['input_ids'])
```
But the other tests are still failing and I'm not sure what's causing the issue and need to investigate.
<|||||>Thanks @stefan-it and @mingboiz for looking into the tokenization issue. If I summarize the findings so far:
- Slow tokenizer replaces unknown tokens with "[UNK]" while fast tokenizer doesn’t
- This behavior seems specific to Deberta, as T5’s tokenizers don’t replace with "[UNK]"
- After encoding, the results are the same for slow and fast, meaning that the issue is probably minor
- @mingboiz found a way to have the fast tokenizer spit out the "[UNK]".
I’m not sure what the expected behavior should be, nor whether we should be concerned about this in the first place. Input from someone from HF would be appreciated :) (ping @SaulLu )
Aside from that, I pushed some fixes, more tests are passing.
Some feedback for the HF team on the issue I ran into:
In one of the common tests, the code looks for a "do_lower_case" attribute variable:
https://github.com/huggingface/transformers/blob/10fd4fa1a6ca08b6bba5fed2db666a55239d717c/tests/test_tokenization_common.py#L626-L627
This is problematic in my opinion since:
- I didn’t see any mention that this attribute variable is required in the documentation I’ve come across (though I may have missed it)
- The code silently fails if the attribute is not present
- This argument itself is not used anywhere in the DebertaV2TokenizeFast class, hence I was not naturally inclined to add it as an attribute.
I would suggest one of the following change to make this more dev friendly:
- Make it a hard requirement that any tokenizer class must have this attribute, document it, and remove the silent fail in case it’s not found
- Additionally, IMHO this would be better suited as a an overridable getter method rather than a direct access to a private attribute
More tests are failing: one most likely has the same root cause as the issue raised by @stefan-it. The others seem to be failing because in some code paths the vocab_file is None, but it’s not clear to me why that happens, any help on that appreciated.<|||||>@alcinos I can't figure out a solution yet but the tests that are failing because of the missing vocab file which I think it's because in all of them legacy_format=False is being selected
https://github.com/huggingface/transformers/blob/f80775df2b418716acce76d56826ed59183869b6/tests/test_tokenization_common.py#L3513
which only saves using the Rust Tokenizer these files without the `spm.model` vocab file:
```
tokenizer_config.json
special_tokens_map.json
tokenizer.json
```
this code chunk will run instead without using the save_vocabulary function:
https://github.com/huggingface/transformers/blob/f80775df2b418716acce76d56826ed59183869b6/src/transformers/tokenization_utils_fast.py#L578-L583
Debertav2Tokenizer didn't have this issue because its backend SPMTokenizer class provided its own `save_pretrained` method to save the `spm.model`, but I can't figure out why the AlbertTokenizerFast tests work and pass when the same tests fail here - I think these are the only remaining failing tests:
- test_saving_tokenizer_trainer
- test_training_new_tokenizer_with_special_tokens_change
- test_training_new_tokenizer<|||||>I have noted the ping! I'll come back to you as soon as possible on this subject because the choice to be made here is not obvious: you have highlighted that the `tokenize` method of `DebertaV2Tokenizer` does not behave in the same way as all the tokenizer methods of the fast tokenizers. At the moment it would make more sense to me to modify the `DebertaV2Tokenizer`'s `tokenize` method, but in general we don't really like to introduce backward incompatibility, so I'll need to discuss it with the maintainers :slightly_smiling_face:.<|||||>@alcinos and @mingboiz , while investigating the `test_saving_tokenizer_trainer` test further, I noticed that the variable `VOCAB_FILES_NAMES` did not specify the `"tokenizer_file"` value (and we need it for the fast version of the tokenizer). Moreover, for the test to fully succeed, the following lines must also be removed from the `tokenization_deberta_v2_fast.py` file.
```
if not os.path.isfile(vocab_file):
raise ValueError(
f"Can't find a vocabulary file at path '{vocab_file}'. To load the vocabulary from a Google pretrained "
"model use `tokenizer = AutoTokenizer.from_pretrained(PRETRAINED_MODEL_NAME)`"
)
```
With these 2 changes, the test now pass :smile: !
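For reference, the first change boils down to something like this (file names are the ones mentioned in this thread; treat it as a sketch rather than the exact constant in the final PR):

```python
VOCAB_FILES_NAMES = {"vocab_file": "spm.model", "tokenizer_file": "tokenizer.json"}
```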
I have opened a [PR](https://github.com/alcinos/transformers/pull/1/files) here to show you the changes that should be made to solve these problems. Feel free to merge it if you agree with it. :slightly_smiling_face: <|||||>@alcinos could you please have a look at the https://github.com/alcinos/transformers/pull/1 PR - I think it is ready then :hugs: <|||||>Hi @alcinos, thank you very much for your work, the addition seems to be near the end! Please let me know if you need help with any of it!<|||||>Finished in PR #15529
Thanks all again for the contribution :hugs: |
transformers | 14,926 | closed | Facing Problems with RobertaForSequenceClassification.from_pretrained() | I've been trying to run the code present here on new data - https://github.com/machelreid/lewis
In one of the steps there are the following lines of code:
```
classifier = (
RobertaForSequenceClassification.from_pretrained(
f"{args.hf_dump}/pytorch_model.bin",
config=f"{args.hf_dump}/config.json",
output_attentions=True,
)
.half()
.cuda()
.eval()
)
```
They are present inside https://github.com/machelreid/lewis/blob/master/get_synthesized_data.py
I get this error -
```
Traceback (most recent call last):
File "get_synthesized_data.py", line 56, in <module>
output_attentions=True,
File "/home/aflah20082/yes/envs/lewis/lib/python3.7/site-packages/transformers/modeling_utils.py", line 1272, in from_pretrained
**kwargs,
File "/home/aflah20082/yes/envs/lewis/lib/python3.7/site-packages/transformers/configuration_utils.py", line 501, in from_pretrained
config_dict, kwargs = cls.get_config_dict(pretrained_model_name_or_path, **kwargs)
File "/home/aflah20082/yes/envs/lewis/lib/python3.7/site-packages/transformers/configuration_utils.py", line 554, in get_config_dict
local_files_only=local_files_only,
File "/home/aflah20082/yes/envs/lewis/lib/python3.7/site-packages/transformers/configuration_utils.py", line 842, in get_configuration_file
path_or_repo, revision=revision, use_auth_token=use_auth_token, local_files_only=local_files_only
File "/home/aflah20082/yes/envs/lewis/lib/python3.7/site-packages/transformers/file_utils.py", line 1952, in get_list_of_files
return list_repo_files(path_or_repo, revision=revision, token=token)
File "/home/aflah20082/yes/envs/lewis/lib/python3.7/site-packages/huggingface_hub/hf_api.py", line 603, in list_repo_files
repo_id, revision=revision, token=token, timeout=timeout
File "/home/aflah20082/yes/envs/lewis/lib/python3.7/site-packages/huggingface_hub/hf_api.py", line 586, in model_info
r.raise_for_status()
File "/home/aflah20082/yes/envs/lewis/lib/python3.7/site-packages/requests/models.py", line 953, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 404 Client Error: Not Found for url: https://huggingface.co/api/models/downloads/roberta-classifier/dataset-eval/config.json
```
I can't seem to figure out why it tries to go to that link, since the path was to a local file.
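For reference, a minimal sketch of what I would expect to work once the local files actually exist at those paths (same `args.hf_dump` layout as above; illustrative only):

```python
from transformers import RobertaConfig, RobertaForSequenceClassification

config = RobertaConfig.from_pretrained(f"{args.hf_dump}/config.json", output_attentions=True)
classifier = (
    RobertaForSequenceClassification.from_pretrained(
        f"{args.hf_dump}/pytorch_model.bin", config=config
    )
    .half()
    .cuda()
    .eval()
)
```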
| 12-25-2021 09:19:55 | 12-25-2021 09:19:55 | I figured out my mistake: the config file hadn't been generated.
Fixed that now |
transformers | 14,923 | closed | [WIP] DeBERTav2 Fast Tokenizer - fixes #14712 | # What does this PR do?
Added DeBERTav2Converter for tokenizer of
- DeBERTav2
- DeBERTav3
- mDeBERTav3
I wish to add a DebertaV2TokenizerFast class and tests but require guidance on this, thanks!
Fixes # (issue)
#14712
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
https://github.com/huggingface/transformers/issues/14712
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@SaulLu @LysandreJik
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
| 12-25-2021 04:44:27 | 12-25-2021 04:44:27 | Merry xmas! Currently, using the converter to instantiate the fast DebertaV2 tokenizer through `PreTrainedTokenizerFast` serves as a workaround for now, but I'm finding it difficult to write up the DebertaV2TokenizerFast class
for `tokenization_deberta_v2_fast.py` such that it wraps the SPMTokenizer class.
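A rough sketch of that converter-based workaround is below (the wrapper arguments are assumptions on my side, not the final API of the planned class):

```python
from transformers import DebertaV2Tokenizer, PreTrainedTokenizerFast
from transformers.convert_slow_tokenizer import convert_slow_tokenizer

slow = DebertaV2Tokenizer.from_pretrained("microsoft/deberta-v2-xlarge")
backend = convert_slow_tokenizer(slow)  # uses the DebertaV2 converter added in this PR
fast = PreTrainedTokenizerFast(
    tokenizer_object=backend,
    unk_token="[UNK]", sep_token="[SEP]", pad_token="[PAD]",
    cls_token="[CLS]", mask_token="[MASK]",
)
```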
An alternative approach I want to try is where it's based on PreTrainedtokenizerFast instead of SPMTokenizer but I'm not sure how to start putting this together. Would appreciate any guidance, thank you!<|||||>Closing this as @alcinos PR is more feature rich than mine, and I would wish to contribute to his efforts on his PR instead, thanks!<|||||>Hi @mingboiz , first of all thank you very much for offering to work on this issue! It is very much appreciated. :hugs:
I'll let you coordinate directly with @alcinos if you want to work together on it? :smile: |
transformers | 14,922 | closed | Using Huggingface Trainer in Colab -> Disk Full | https://discuss.huggingface.co/t/using-huggingface-trainer-in-colab-disk-full/5951/2
I faced the same issue mentioned in this discussion. Please give it a look.
## Environment info
- `transformers` version: 4.15.0
- Platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.12
- PyTorch version (GPU?): 1.10.0+cu111 (True)
- Tensorflow version (GPU?): 2.7.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help
- Trainer: @sgugger
## Information
This issue is model agnostic
I believe it's task agnostic too
## To reproduce
Steps to reproduce the behavior:
1. Run this notebook in colab
https://colab.research.google.com/github/huggingface/blog/blob/master/notebooks/01_how_to_train.ipynb#scrollTo=YpvnFFmZJD-N
## Expected behavior
1. The disk should not go out of space, because I used `save_total_limit=1` in the training arguments, so previous checkpoints should be deleted.
| 12-25-2021 01:18:14 | 12-25-2021 01:18:14 | I guess I found the reason,
on deleting the previous checkpoint, it goes to the Google Drive bin, and the bin does not delete it immediately (only after 30 days), so the space stays occupied.
One way to solve this would be overriding the `_rotate_checkpoints` method of the `Trainer` to also clean the Drive bin, but this would require authentication from the user every time to empty the bin.
Any idea around implementing the same?
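Something along these lines is what I have in mind (very rough sketch; `_rotate_checkpoints` is a private `Trainer` method and `empty_drive_trash` is a hypothetical placeholder that would still need Drive authentication):

```python
from transformers import Trainer

def empty_drive_trash():
    # hypothetical helper: would authenticate and permanently delete the trashed checkpoints
    raise NotImplementedError

class DriveAwareTrainer(Trainer):
    def _rotate_checkpoints(self, use_mtime=False, output_dir=None):
        super()._rotate_checkpoints(use_mtime=use_mtime, output_dir=output_dir)
        empty_drive_trash()
```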
<|||||>Looks like an issue on Colab that they don't let the user choose to delete things without going in that bin (or clear that bin when space is needed).
The only workaround I can think of is to use a very high value as a saving step, or to disable the saving altogether during training with `save_strategy="no"`.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>
> Looks like an issue on Colab that they don't let the user choose to delete things without going in that bin (or clear that bin when space is needed).
>
> The only workaround I can think of is to use a very high value as a saving step, or to disable the saving altogether during training with `save_strategy="no"`.
Still facing the same issue.
+Having save_strategy = 'no' means that we are not able to pick the best model based on our evaluation metric :(
<|||||>You can raise the issue on Colab. There is really nothing we can do on our side since the problem is that Colab does not let us delete older checkpoints. |
transformers | 14,921 | closed | Inference API: return token scores for text-generation models | # 🚀 Feature request
It would be useful if the Accelerated Inference API returned token scores assigned by Text-generation and Text2text-generation models, similar to what is returned by "Fill mask" models. E.g.:
```
output = {
"sequence": "the answer to the universe is no.",
"score": 0.1696,
"token": 2053,
"token_str": "no",
},
{
"sequence": "the answer to the universe is nothing.",
"score": 0.0734,
"token": 2498,
"token_str": "nothing",
},
...
``` | 12-24-2021 18:22:41 | 12-24-2021 18:22:41 | cc @Narsil <|||||>Hi @rodrigonogueira4 ,
This is an interesting feature. May I ask what the end goal would be ?
Is it to potentially discard results which are deemed too low ?
Is it to show some kind of confidence (color or otherwise) to your user ?
Any other usage ?
I think clarifying the end usage is really important in this case, since the concept of `score` in `text-generation` is not as straightforward as it may look. If we can pinpoint the core of the issue, it's going to be easier than adding a score without thinking about how it's going to be used.
Elements of reflection:
- Generation, applies between 1 and N (`max_new_tokens` for instance) times the model, meaning we have logits for each `token`, which is not necessarily a word. If the API started returning something like this it would mean we would have to return tokens individually, `[ {"token": "Some", "score": 0.9}, {"token": "##thing", score: 0.99}]` for instance. There are ways to make this work, but not all tokens are readable ([EOS] for instance doesn't print anything, it just means the sequence is ended). Making sure `"".join(tok["token"] for tok in tokens) == original_output` might not be trivial (would have to check, but I don't think we ever provide that guarantee on arbitrary tokenizer, which is why we usually output offsets instead)
- It would need to work, with more complex examples, like `beam_search`, where we return several outputs, each with their own tokens + scores.
The main caveat with that solution I can think of is that as a user, it forces to think about what a token is, and you probably don't want to cut predictions mid-sentence (if you're cutting based on score), so then we need to output words (which not all tokenizers provide). Also low score != bad prediction (The ocean is ["large", "blue"] are both equally valid, meaning probs = [0.5, 0.5] even if the confidence for the model is super high.)
Other option would be to output a single score per generation, but then which score do we return? Is the product of all probabilities the correct "score"? This is what is used during beam_search if I am not mistaken, but that number is vanishingly small, so using it as a user with a threshold is likely to cause issues as the reference gets smaller as tokens are added. Normalizing this score based on length doesn't seem desirable/feasible ?
Maybe a third option ?<|||||>Hi @Narsil,
> This is an interesting feature. May I ask what the end goal would be ?
> Is it to potentially discard results which are deemed too low ?
> Is it to show some kind of confidence (color or otherwise) to your user ?
> Any other usage ?
That's right, these are the use cases I had in mind. In my experience with the GPT-3's API, filtering out outputs using the model's likelihood improves overall results quite a bit.
Returning the probability of the token that was selected by the decoding algorithm (greedy or top-p) at each decoding step is enough to calculate a confidence score for the generated sentence.
Returning scores for beam search would be more complicated. Open AI solved this by returning the top N tokens and their probs at each decoding step.
> The main caveat with that solution I can think of is that as a user, it forces to think about what a token is, and you probably don't want to cut predictions mid-sentence (if you're cutting based on score), so then we need to output words (which not all tokenizers provide). Also low score != bad prediction (The ocean is ["large", "blue"] are both equally valid, meaning probs = [0.5, 0.5] even if the confidence for the model is super high.)
I think that's fine as long as we can compute the perplexity or the likelihood of the entire sequence.
> Other option would be to output a single score per generation, but then which score do we return is the product of all probabilities the correct "score" ? This is what is used during beam_search if I am not mistaken, but that number is vanishingly small so using it as a user with a threshold is likely to cause issues as the reference gets smaller as tokens are added. Normalizing this score based on length doesn't seem desirable/feasible ?
That would work, as long as the user also knows the number of tokens in the generated sequence, so we could normalize by length (by exponentiating to 1/length).
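In the meantime, a rough sketch of how to get this locally with `generate` (greedy decoding here; the model is just an example):

```python
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
inputs = tok("the answer to the universe is", return_tensors="pt")
out = model.generate(
    **inputs, max_new_tokens=5, do_sample=False,
    return_dict_in_generate=True, output_scores=True,
)
gen_tokens = out.sequences[0, inputs["input_ids"].shape[1]:]
logprobs = [
    torch.log_softmax(step_scores, dim=-1)[0, token_id].item()
    for step_scores, token_id in zip(out.scores, gen_tokens)
]
confidence = math.exp(sum(logprobs) / len(logprobs))  # length-normalized, as discussed above
```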
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 14,920 | closed | Use tqdm.auto in Pipeline docs | It's better for e.g. notebook. | 12-24-2021 16:41:10 | 12-24-2021 16:41:10 | Thanks again! |
transformers | 14,919 | closed | Multiprocessing for pipeline | I am using the question-answering pipeline provided by huggingface. I am trying to perform multiprocessing to parallelize the question answering, but it gets stuck: execution does not end.
```python
from transformers import pipeline
from torch.multiprocessing import Pool, Process, set_start_method

set_start_method('spawn', force=True)

model_name = "deepset/roberta-base-squad2"
reader = pipeline('question-answering', model=model_name, tokenizer=model_name, device=-1)

def get_answer(input_dict):
    return reader(input_dict)

input_list = []
for i in range(3):
    QA_input = {
        'question': val_questions[i],
        'context': val_contexts[i]
    }
    input_list.append(QA_input)

if __name__ == '__main__':
    result = []
    multi_pool = Pool(processes=3)
    predictions = multi_pool.map(get_answer, input_list)
    multi_pool.close()
    multi_pool.join()
    print(predictions)
```
 | 12-24-2021 10:42:48 | 12-24-2021 10:42:48 | same problem! @sharejing any insight so far?
I was trying to do something like
```
import multiprocessing

# this is required otherwise it will complain "AssertionError: daemonic processes are not allowed to have children"
class NoDaemonProcess(multiprocessing.Process):
@property
def daemon(self):
return False
@daemon.setter
def daemon(self, value):
pass
class NoDaemonContext(type(multiprocessing.get_context("fork"))):
Process = NoDaemonProcess
from transformers import pipeline
model_name = "deepset/roberta-base-squad2"
reader = pipeline('question-answering', model=model_name, tokenizer=model_name, device = -1)
def func(input_dict):
reader(input_dict)
mp_context = NoDaemonContext()
pool = mp_context.Pool(multiprocessing.cpu_count())
async_res = pool.apply_async(func, (input_dict))
async_res.wait()
res = async_res.get()
```
and it got stuck on the `async_res.wait()` line. <|||||>@sharejing
This code seems to work
```python
from transformers import pipeline
from torch.multiprocessing import Pool, Process, set_start_method
set_start_method("spawn", force=True)
model_name = "deepset/roberta-base-squad2"
reader = pipeline("question-answering", model=model_name, tokenizer=model_name, device=-1)
def get_answer(input_dict):
print("Input", input_dict)
return reader(input_dict)
input_list = []
for i in range(3):
QA_input = {"question": "This is a test", "context": "This is a context"}
input_list.append(QA_input)
if __name__ == "__main__":
result = []
multi_pool = Pool(processes=3)
predictions = multi_pool.map(get_answer, input_list)
multi_pool.close()
multi_pool.join()
print(predictions)
```
However, I would suggest that instead of sharing `pipelines` (and so `models`) across processes you load them on each thread instead since it will prevent any error on sharing the model across processes (which for instance seems impossible with TF).
```python
from transformers import pipeline
from torch.multiprocessing import Pool, Process, set_start_method
set_start_method("spawn", force=True)
model_name = "deepset/roberta-base-squad2"
PIPE = None
def get_pipe():
# This will load the pipeline on demand on the current PROCESS/THREAD.
# And load it only once.
global PIPE
if PIPE is None:
PIPE = pipeline("question-answering", model=model_name, tokenizer=model_name, device=-1)
return PIPE
def get_answer(input_dict):
reader = get_pipe()
print("Input", input_dict)
return reader(input_dict)
input_list = []
for i in range(3):
QA_input = {"question": "This is a test", "context": "This is a context"}
input_list.append(QA_input)
if __name__ == "__main__":
result = []
multi_pool = Pool(processes=3)
predictions = multi_pool.map(get_answer, input_list)
multi_pool.close()
multi_pool.join()
print(predictions)
```
```<|||||>Other linked issue :https://github.com/huggingface/transformers/issues/15038<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 14,918 | closed | Why I met Type 'seq(tensor(int64))' of operator (MemcpyFromHost) is invalid when using onnxruntime.InferenceSession() in GPU, and How to resolve it? On emergency hold,thanks! | ## Environment info
when I run exported onnx model of transformers (BARTBeamSearchGenerator model example) on GPU. I met this. Who knows how can resolve it?
<img width="1392" alt="截屏2021-12-24 下午3 32 55" src="https://user-images.githubusercontent.com/13781668/147331435-f518d5c7-b6fd-4550-9de9-940a9ec87003.png">
- `transformers` version:
- Platform:
- Python version:
- PyTorch version (GPU?):1.10.1
- Tensorflow version (GPU?):
- Using GPU in script?:YES
- Onnxruntime version: 1.8.0
- onnx version: 1.10.0
| 12-24-2021 07:58:20 | 12-24-2021 07:58:20 | Hi @yuanhuachao thanks for raising this issue! Can you please provide the exact command you used to export the model with? You might also be interested in a related issue #14882 about running this example script on a GPU.<|||||>Hi @lewtun ,i run the run_onnx_exporter.py with command 'python run_onnx_exporter.py --model_name_or_path facebook/bart-base --device=cuda', in the directory of 'transformers/examples/onnx/pytorch/summarization'. And change the way to create onnxruntime.InferenceSession with CUDAExecutionProvider,( 'ort_sess = onnxruntime.InferenceSession(new_onnx_file_path, providers=['CUDAExecutionProvider']))' <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 14,917 | closed | Fix Perceiver docs | null | 12-24-2021 07:37:04 | 12-24-2021 07:37:04 | |
transformers | 14,916 | closed | Add Deepspeed Transformer kernel, but encounter error IndexError: tuple index out of range | I have compared the BERT-base pretraining performance between [Nvidia TensorFlow](https://github.com/NVIDIA/DeepLearningExamples/tree/master/TensorFlow/LanguageModeling/BERT) and huggingface + Deepspeed.
My environment:
cuda: 11.2
torch: 1.10
deepspeed: 0.5.8
transformers: 4.13.0.dev0
machine: single node with 8 A100
The results show that huggingface is about 20% slower than TF when XLA is enabled.
I know that huggingface does not support deepspeed's transformer kernel, so I manually replaced it, the code is as follows:
```
import time
import code
import logging
import sys
import os
from typing import Optional
import glob
import datasets
from dataclasses import dataclass, field
import transformers
from transformers.integrations import TensorBoardCallback
from datasets import concatenate_datasets
from deepspeed.ops.transformer import (
DeepSpeedTransformerConfig,
DeepSpeedTransformerLayer
)
logger = logging.getLogger(__name__)
datasets.set_caching_enabled(False)
@dataclass
class CustomArguments:
model_name: Optional[str] = field(default="bert-base-uncased")
model_path: Optional[str] = field(default=None)
train_data_dir: Optional[str] = field(default=None)
validation_file: Optional[str] = field(default=None)
max_seq_length: Optional[int] = field(default=128)
preprocessing_num_worker: Optional[int] = field(default=os.cpu_count())
use_pretrain_collator: Optional[bool] = field(default=True)
line_by_line: Optional[bool] = field(default=True)
vocab_path: Optional[str] = field(default=None)
model_conf_path: Optional[str] = field(default=None)
def gen_ds_bert_config(training_args, config):
bert_config = DeepSpeedTransformerConfig(
batch_size=384,
hidden_size=config.hidden_size,
intermediate_size=config.intermediate_size,
heads=config.num_attention_heads,
attn_dropout_ratio=config.attention_probs_dropout_prob,
hidden_dropout_ratio=config.hidden_dropout_prob,
num_hidden_layers=config.num_hidden_layers,
initializer_range=0.02,
layer_norm_eps=1e-8,
local_rank=training_args.local_rank,
fp16=training_args.fp16,
pre_layer_norm=False,
training=True
)
return bert_config
def inject_ds_enc_layer(model, training_args, config):
for i in range(config.num_hidden_layers):
bert_config = gen_ds_bert_config(training_args, config)
model.bert.encoder.layer[i] = DeepSpeedTransformerLayer(bert_config)
def main():
parser = transformers.HfArgumentParser((CustomArguments, transformers.TrainingArguments))
custom_args, training_args = parser.parse_args_into_dataclasses()
logging.basicConfig(
format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
datefmt="%m/%d/%Y %H:%M:%S",
handlers=[logging.StreamHandler(sys.stdout)],
)
log_level = training_args.get_process_log_level()
logger.setLevel(log_level)
datasets.utils.logging.set_verbosity(log_level)
transformers.utils.logging.set_verbosity(log_level)
transformers.utils.logging.enable_default_handler()
transformers.utils.logging.enable_explicit_format()
logger.warning(
f"Process rank: {training_args.local_rank}, device: {training_args.device}, n_gpu: {training_args.n_gpu}"
+ f"distributed training: {bool(training_args.local_rank != -1)}, 16-bits training: {training_args.fp16}"
)
# Set the verbosity to info of the Transformers logger (on main process only):
logger.info(f"Training/evaluation parameters {training_args}")
tokenizer = transformers.BertTokenizer(
vocab_file=custom_args.vocab_path,
do_lower_case=False,
max_length=custom_args.max_seq_length)
model_config = transformers.BertConfig.from_pretrained(
custom_args.model_conf_path)
if custom_args.model_path is not None:
print(f"training continue from: f{custom_args.model_path}!")
model = transformers.BertForPreTraining.from_pretrained(custom_args.model_path)
elif custom_args.model_conf_path:
print(f"training from scratch!")
model = transformers.BertForPreTraining(config=model_config)
else:
raise ValueError("no model config or model path provied, please check!")
model.resize_token_embeddings(len(tokenizer))
inject_ds_enc_layer(model, training_args, model_config)
if custom_args.train_data_dir is None:
raise ValueError("train_data_dir must be specified!")
print("start loading data!")
start = time.time()
data_files = glob.glob(custom_args.train_data_dir.rstrip("/") + "/*")
data = [datasets.load_from_disk(data_file) for data_file in data_files]
train_dataset = concatenate_datasets(data)
train_dataset = train_dataset.shuffle()
end = time.time()
print(f"loading data cost: {end - start} s")
data_collator = None
if custom_args.use_pretrain_collator:
data_collator = transformers.DataCollatorForLanguageModeling(
tokenizer=tokenizer,
mlm_probability=0.15,
mlm=True,
# pad_to_multiple_of=8
)
code.interact(local=locals())
trainer = transformers.Trainer(
model=model,
args=training_args,
train_dataset=train_dataset,
eval_dataset=None,
tokenizer=tokenizer,
data_collator=data_collator,
callbacks=[TensorBoardCallback]
)
trainer.train()
if __name__ == "__main__":
main()
```
But the error occur:
```
File "/usr/local/anaconda3/lib/python3.7/site-packages/deepspeed/ops/transformer/transformer.py", line 607, in forward
self.config)
File "/usr/local/anaconda3/lib/python3.7/site-packages/deepspeed/ops/transformer/transformer.py", line 179, in forward
if inp_size[1] % 16 != 0:
IndexError: tuple index out of range
...
RuntimeError: CUDA error: an illegal memory access was encountered
```
It seems the input dim is not correct.
Did I do something wrong? I hope someone can help me point it out.
| 12-24-2021 07:30:28 | 12-24-2021 07:30:28 | By the way, does Huggingface support similar XLA or kernel fusion capabilities?<|||||>Ok, problem solved: it's because the original huggingface BERT layer returns a tuple and selects the first element, while the DeepSpeed layer returns the tensor directly<|||||>you can enable `return_tuple` in `DeepSpeedTransformerConfig`
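Following that suggestion, the config construction from the script above would only need the extra flag (sketch; all other values are the ones from my `gen_ds_bert_config`):

```python
from deepspeed.ops.transformer import DeepSpeedTransformerConfig

def gen_ds_bert_config(training_args, config):
    return DeepSpeedTransformerConfig(
        batch_size=384,
        hidden_size=config.hidden_size,
        intermediate_size=config.intermediate_size,
        heads=config.num_attention_heads,
        attn_dropout_ratio=config.attention_probs_dropout_prob,
        hidden_dropout_ratio=config.hidden_dropout_prob,
        num_hidden_layers=config.num_hidden_layers,
        initializer_range=0.02,
        layer_norm_eps=1e-8,
        local_rank=training_args.local_rank,
        fp16=training_args.fp16,
        pre_layer_norm=False,
        training=True,
        return_tuple=True,  # make the DeepSpeed layer return a tuple like BertLayer does
    )
```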
|
transformers | 14,915 | closed | ConnectionError | I encountered the issue `ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.16.1/datasets/common_voice/common_voice.py` when running run_speech_recognition_ctc.py. How can I solve this problem?
| 12-24-2021 04:24:18 | 12-24-2021 04:24:18 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 14,914 | closed | How to update config after the model is initialized? | Hello everyone, I want to **change** the config [such as `dropout probability`] after BERT is initialized, what should I do in transformers?
```python
# fixed seed
random.seed(args.seed)
torch.manual_seed(args.seed)
cudnn.deterministic = True
# init bert
config = AutoConfig.from_pretrained(model_name, hidden_dropout_prob=0.2, attention_probs_dropout_prob=0.2)
bert = BertModel.from_pretrained(model_name, config=config)
# then I want to change dropout_prob to 0 to run some other logits, but it seems invalid
bert.config.hidden_dropout_prob = 0
bert.config.attention_probs_dropout_prob = 0
# because out1 doesn't equals to out2 when I print them, but they should be the same when the dropout prob is 0
out1 = bert(**embeddings)
out2 = bert(**embeddings)
# out1 != out2
``` | 12-24-2021 03:56:35 | 12-24-2021 03:56:35 | Hi,
To turn off dropout, you should put your model in evaluation mode: `model.eval()`. <|||||> > To turn off dropout, you should put your model in evaluation mode: `model.eval()`.
Thanks for the reply. Due to certain needs, what I want is to change the `drop prob` during training. Can this be achieved?
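A minimal sketch of one way to do this: update the `nn.Dropout` modules directly, since the config values are only read when the layers are built (assumes no other stochastic layers):

```python
import torch.nn as nn

def set_dropout(model, p: float):
    for module in model.modules():
        if isinstance(module, nn.Dropout):
            module.p = p

set_dropout(bert, 0.0)  # with p=0 and a fixed seed, repeated forward passes give identical outputs
set_dropout(bert, 0.2)  # restore later if needed
```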
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 14,913 | closed | [Benchmark] Deepspeed +fp16/bf16 on a 8xA100 node | # 🖥 Benchmarking `transformers`
## Benchmark
Which part of `transformers` did you benchmark?
`Deepspeed` with template `Zero 1, 2 and 3` configurations using fp16 and bf16.
- I am by no means an expert on this, I'm trying to find the fastest configuration for my setup. So if you see better ways to do this, please let me know.
- I have access to more nodes, but somehow when running on multinode `deepspeed` does not report percentages of completion nor times estimations. If there is a way to do this, please let me know and I'll extend it to 4 x (8xA100)
## Set-up
What did you run your benchmarks on? Please include details, such as: CPU, GPU? If using multiple GPUs, which parallelization did you use?
My system:
```
torch: 1.10.0+cu113
transformers: 4.14.1
deepspeed: 0.5.8
```
The command is always:
`
deepspeed 5.run_clm-post.py --model_name_or_path /path/to/gpt2-large/ --train_file sample.txt --tokenizer_name embeddings--do_train --do_eval --output_dir ./output --evaluation_strategy steps --eval_steps 1000 --save_steps 1000 --num_train_epochs 12 --per_device_train_batch_size 8 --cache_dir .cache2/ --save_total_limit 2 --dataloader_drop_last True --learning_rate 1e-06 `
And then I add:
--deepspeed config1.json --fp16
--deepspeed config2.json --fp16
--deepspeed config3.json --fp16
--deepspeed config_2.json --fp16
Where the config files are:
config1.json:
```
{
"fp16": {
"enabled": "auto",
"loss_scale": 0,
"loss_scale_window": 1000,
"initial_scale_power": 16,
"hysteresis": 2,
"min_loss_scale": 1
},
"gradient_accumulation_steps": "auto",
"gradient_clipping": "auto",
"train_batch_size": "auto",
"train_micro_batch_size_per_gpu": "auto"}
```
config2.json:
```
{
"fp16": {
"enabled": "auto",
"loss_scale": 0,
"loss_scale_window": 1000,
"initial_scale_power": 16,
"hysteresis": 2,
"min_loss_scale": 1
},
"optimizer": {
"type": "AdamW",
"params": {
"lr": "auto",
"betas": "auto",
"eps": "auto",
"weight_decay": "auto"
}
},
"scheduler": {
"type": "WarmupLR",
"params": {
"warmup_min_lr": "auto",
"warmup_max_lr": "auto",
"warmup_num_steps": "auto"
}
},
"zero_optimization": {
"stage": 2,
"offload_optimizer": {
"device": "cpu",
"pin_memory": true
},
"allgather_partitions": true,
"allgather_bucket_size": 2e8,
"overlap_comm": true,
"reduce_scatter": true,
"reduce_bucket_size": 2e8,
"contiguous_gradients": true
},
"gradient_accumulation_steps": "auto",
"gradient_clipping": "auto",
"steps_per_print": 100,
"train_batch_size": "auto",
"train_micro_batch_size_per_gpu": "auto",
"wall_clock_breakdown": false
}
```
config3.json:
```
{
"fp16": {
"enabled": "auto",
"loss_scale": 0,
"loss_scale_window": 1000,
"initial_scale_power": 16,
"hysteresis": 2,
"min_loss_scale": 1
},
"optimizer": {
"type": "AdamW",
"params": {
"lr": "auto",
"betas": "auto",
"eps": "auto",
"weight_decay": "auto"
}
},
"scheduler": {
"type": "WarmupLR",
"params": {
"warmup_min_lr": "auto",
"warmup_max_lr": "auto",
"warmup_num_steps": "auto"
}
},
"zero_optimization": {
"stage": 3,
"offload_optimizer": {
"device": "cpu",
"pin_memory": true
},
"offload_param": {
"device": "cpu",
"pin_memory": true
},
"overlap_comm": true,
"contiguous_gradients": true,
"sub_group_size": 1e9,
"reduce_bucket_size": "auto",
"stage3_prefetch_bucket_size": "auto",
"stage3_param_persistence_threshold": "auto",
"stage3_max_live_parameters": 1e9,
"stage3_max_reuse_distance": 1e9,
"stage3_gather_fp16_weights_on_model_save": true
},
"gradient_accumulation_steps": "auto",
"gradient_clipping": "auto",
"steps_per_print": 2000,
"train_batch_size": "auto",
"train_micro_batch_size_per_gpu": "auto",
"wall_clock_breakdown": false
}
```
Then config_2.json is the same as the above config2 but replacing the fp16 part with:
```
"bfloat16": {
"enabled": true
}
```
## Results
| Configuration | fp16 | bf16 |
| ----------- | ----------- | ----------- |
| deepspeed 1 | **2.28it/s** | - |
| deepspeed 2 | 4.59 s/it | 4.90 s/it |
| deepspeed 3 | 5.02 s/it | - |
Somehow the units in the fp16 - deepspeed 1 case are returned in it/s, so for the sake of comparison that would translate to **0.43 s/it**. I am puzzled by the results, because I'd expect ZeRO 2 and 3 to work faster, but ZeRO 1 turned out to be around 10 times faster. So let me know if I am doing anything wrong. Also, let me know how I could extend this to multi-node, if it is interesting for somebody else.
Thanks
| 12-23-2021 21:22:28 | 12-23-2021 21:22:28 | oh, and tagging @stas00 because is a `deepspeed` "issue".<|||||>Information about the cards:
```
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 495.29.05 Driver Version: 495.29.05 CUDA Version: 11.5 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 NVIDIA A100-SXM... On | 00000000:0E:00.0 Off | 0 |
| N/A 34C P0 56W / 400W | 0MiB / 40536MiB | 0% Default |
| | | Disabled |
+-------------------------------+----------------------+----------------------+
| 1 NVIDIA A100-SXM... On | 00000000:13:00.0 Off | 0 |
| N/A 33C P0 54W / 400W | 0MiB / 40536MiB | 0% Default |
| | | Disabled |
+-------------------------------+----------------------+----------------------+
| 2 NVIDIA A100-SXM... On | 00000000:49:00.0 Off | 0 |
| N/A 31C P0 53W / 400W | 0MiB / 40536MiB | 0% Default |
| | | Disabled |
+-------------------------------+----------------------+----------------------+
| 3 NVIDIA A100-SXM... On | 00000000:4F:00.0 Off | 0 |
| N/A 34C P0 54W / 400W | 0MiB / 40536MiB | 0% Default |
| | | Disabled |
+-------------------------------+----------------------+----------------------+
| 4 NVIDIA A100-SXM... On | 00000000:90:00.0 Off | 0 |
| N/A 34C P0 57W / 400W | 0MiB / 40536MiB | 0% Default |
| | | Disabled |
+-------------------------------+----------------------+----------------------+
| 5 NVIDIA A100-SXM... On | 00000000:96:00.0 Off | 0 |
| N/A 31C P0 52W / 400W | 0MiB / 40536MiB | 0% Default |
| | | Disabled |
+-------------------------------+----------------------+----------------------+
| 6 NVIDIA A100-SXM... On | 00000000:CC:00.0 Off | 0 |
| N/A 33C P0 56W / 400W | 0MiB / 40536MiB | 0% Default |
| | | Disabled |
+-------------------------------+----------------------+----------------------+
| 7 NVIDIA A100-SXM... On | 00000000:D1:00.0 Off | 0 |
| N/A 32C P0 56W / 400W | 0MiB / 40536MiB | 0% Default |
| | | Disabled |
+-------------------------------+----------------------+----------------------+
```
```
nvidia-smi topo -m
GPU0 GPU1 GPU2 GPU3 GPU4 GPU5 GPU6 GPU7 mlx5_0 mlx5_1 mlx5_2 mlx5_3 CPU Affinity NUMA Affinity
GPU0 X NV12 NV12 NV12 NV12 NV12 NV12 NV12 PXB NODE NODE SYS 0-63 0
GPU1 NV12 X NV12 NV12 NV12 NV12 NV12 NV12 PXB NODE NODE SYS 0-63 0
GPU2 NV12 NV12 X NV12 NV12 NV12 NV12 NV12 NODE PXB PXB SYS 0-63 0
GPU3 NV12 NV12 NV12 X NV12 NV12 NV12 NV12 NODE PXB PXB SYS 0-63 0
GPU4 NV12 NV12 NV12 NV12 X NV12 NV12 NV12 SYS SYS SYS NODE 64-127 1
GPU5 NV12 NV12 NV12 NV12 NV12 X NV12 NV12 SYS SYS SYS NODE 64-127 1
GPU6 NV12 NV12 NV12 NV12 NV12 NV12 X NV12 SYS SYS SYS PXB 64-127 1
GPU7 NV12 NV12 NV12 NV12 NV12 NV12 NV12 X SYS SYS SYS PXB 64-127 1
mlx5_0 PXB PXB NODE NODE SYS SYS SYS SYS X NODE NODE SYS
mlx5_1 NODE NODE PXB PXB SYS SYS SYS SYS NODE X PIX SYS
mlx5_2 NODE NODE PXB PXB SYS SYS SYS SYS NODE PIX X SYS
mlx5_3 SYS SYS SYS SYS NODE NODE PXB PXB SYS SYS SYS X
```<|||||>You need to understand how ZeRO stages work and their relative to each other speed:
Z1: **fastest** - only shards optim states
Z2: **fast** - shards optim states + gradients
Z3: **slowest** - as it has to shard optim states + gradients + params
i.e., the more sharding it has to do, the slower it becomes, as it has to communicate a lot more data between processes.
and of course:
Z0: **super fast** - no ZeRO, no sharding - the fastest of all of them.
You choose which stage to use depending on your model's size. If you can fit it with a desirable batch size on Z0, use that; if you can't, try Z1 next, then Z2, and only if Z2 is not enough use Z3.
again, Z0 is no deepspeed.
and in reverse, Z3 -> Z2 -> Z1 -> Z0, your memory requirements grow; see:
https://deepspeed.readthedocs.io/en/stable/memory.html
for other options that further save memory beyond Z3.
so it's a trade-off between memory and speed.
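To pick a stage it helps to measure first - deepspeed ships memory estimators for this, e.g. (a quick sketch; the exact import path can differ between deepspeed versions, and the model name is just an example):
```python
from transformers import AutoModel
from deepspeed.runtime.zero.stage3 import estimate_zero3_model_states_mem_needs_all_live

# prints an estimate of per-GPU / CPU memory needed for ZeRO-3 with this model and hardware layout
model = AutoModel.from_pretrained("t5-3b")
estimate_zero3_model_states_mem_needs_all_live(model, num_gpus_per_node=8, num_nodes=1)
```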
----------------
> Somehow the units in the fp16 -deepspeed 1 case are returned in it/s
I'm not sure what you mean, perhaps paste the metrics you're referring to?
e.g. a sample output from HF Trainer:
```
***** train metrics *****
epoch = 1.0
train_loss = 2.418
train_runtime = 0:01:20.80
train_samples = 2500
train_samples_per_second = 30.94
train_steps_per_second = 3.874
```
For benchmarks I think samples/sec is the most interesting and consistent metric, but of course others are fine as well.
e.g. see https://github.com/huggingface/transformers/issues/14608
> Also, let me know how could I extend to multi-node
I'm not sure what you mean: extending this to multi-node should just work. And if it doesn't, please let us know what specifically doesn't work.
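For reference, a minimal multi-node launch with the deepspeed launcher looks roughly like this (hostnames, slot counts and the script name are placeholders, not a fixed recipe):
```
# hostfile - one line per node
node1 slots=8
node2 slots=8
```
```bash
deepspeed --hostfile hostfile --num_nodes 2 --num_gpus 8 \
    your_training_script.py --deepspeed ds_config.json  # ... plus the usual training args
```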
additionally for multi-node benchmark reports please specify the type of inter-connects - Infiniband, OPA, etc., as these make a big difference.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 14,912 | closed | [doc] install - add link to jax installation | As `jax` for CUDA requires special instructions to be installed correctly, add a link to the jax installation instructions.
Note: the Flax install page only covers the CPU jax installation info.
@sgugger
| 12-23-2021 20:37:26 | 12-23-2021 20:37:26 | |
transformers | 14,911 | closed | How to generate output using custom embeddings? | My code is kinda like
```python
import torch
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained('facebook/bart-large')
model = BartForConditionalGeneration.from_pretrained('facebook/bart-large')
token_embeds, pos_embeds = custom_embeds()  # custom_embeds() is my own function returning embedding tensors
output = model(inputs_embeds=token_embeds + pos_embeds, decoder_input_ids=torch.tensor([[tokenizer.bos_token_id]]))
```
How do I generate text output from this output? The `model.generate()` function requires `token_ids` so I can't use it. | 12-23-2021 19:40:59 | 12-23-2021 19:40:59 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 14,910 | closed | [WavLM] fix wavlm docs | # What does this PR do?
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 12-23-2021 19:29:30 | 12-23-2021 19:29:30 | |
transformers | 14,909 | closed | remove absl workaround as it's no longer needed | the `absl` workaround hasn't been needed since 2019-04 https://github.com/abseil/abseil-py/issues/99 so it should be safe to remove it.
Otherwise it complains about not finding TPUs when there are no TPUs to be found, because some of our libs load jax without actually needing it: https://github.com/huggingface/transformers/issues/14907#issuecomment-1000468384
if you feel that almost 3 years is not far enough to safely remove this, I can re-do this to check the explicit version of `absl`.
Fixes: https://github.com/huggingface/transformers/pull/14909
@patil-suraj, @patrickvonplaten, @LysandreJik, @sgugger | 12-23-2021 18:41:42 | 12-23-2021 18:41:42 | actually, that fix was in `abseil` - I may have misread the situation - let me just run some checks on whether the logging works.
OK, here it says that `absl` was fixed as well: https://github.com/tensorflow/tensorflow/issues/26691#issuecomment-525519742
<|||||>Hard to leave a reasonable opinion here for me as I don't know at all why it was added<|||||>Maybe @thomwolf actually remembers?<|||||>It says in the workaround:
Work around to update TensorFlow's absl.logging threshold which alters the default Python logging output behavior when present.
- https://github.com/abseil/abseil-py/issues/99
- https://github.com/tensorflow/tensorflow/issues/26691#issuecomment-500369493
which has been resolved several years ago.<|||||>Note that all of the issues in the TensorFlow side say it has been fixed since 2019 and we have a minimum version required that was released in July 2020, so I see little harm in removing this.
We can always put it back if the logs become horrible but I trust @stas00 to have tested it :-)<|||||>OK, I tried to devise a test based on the comments on when it didn't work:
```
$ python -c "
import logging
import tensorflow as tf
from transformers.utils import logging
import absl.logging
logger = logging.get_logger(__name__)
logger.warning('Hello')
"
Hello
```
so it works. i.e. loading `absl.logging` doesn't interfere with our logging.<|||||>What Patrick said, I also don't know why it was added. But okay to remove for me if removing this won't cause any issues.<|||||>Having the same annoying logs coming from absl as @stas00 each time I use the library, I would really like to move forward with this. It's easy enough to revert if we discover an issue and there is time before the next release, so I'm merging this. Let's keep an eye on any "log" regressions and re-asses if something horrible happens :-)<|||||>Hey @stas00 with one of the recent `absl`-related changes the JAX/FLAX example is heavily spaming the console:

So it seems that a `debug` level is active by default. Do you have any suggestion for a fix :thinking:
(I'm using latest `master` version and a v3-8 TPU VM)<|||||>Hmm, weird, this is the sort of thing that this PR's change was supposed to take care of.
But your `absl` is somehow in DEBUG mode - are you sure you don't load something else that sets it to debug level? That surely shouldn't be the default logging level of `absl`. So in a way the code we removed was masking this issue and thus it doesn't get masked any longer.
Do you have a simple few lines of code I could reproduce this with? Then I can try to tinker with it. (but my setup is gpu only at the moment, though sure it shouldn't make a difference)
<|||||>No problem, here's one example command:
```bash
./run_t5_mlm_flax.py \
--output_dir="./debugging-mt5" \
--model_name_or_path="google/mt5-small" \
--max_seq_length="512" \
--weight_decay="0.01" \
--per_device_train_batch_size="32" \
--per_device_eval_batch_size="8" \
--learning_rate="3e-4" \
--warmup_steps="10000" \
--overwrite_output_dir \
--num_train_epochs="100" \
--adam_beta1="0.9" \
--adam_beta2="0.98" \
--logging_steps="500" \
--save_steps="10000" \
--eval_steps="2500" \
--dataset_name="oscar" \
--dataset_config_name="unshuffled_deduplicated_af" \
--preprocessing_num_workers 16 \
--adafactor
```
Here are the used jax/flax version:
* flax: `0.3.6`
* jax: `0.2.26`
* Transformers/Datasets: latest `master`
Hope that helps :hugs: <|||||>Thank you for the example, @stefan-it
It appears to be a much bigger issue than `absl`. Something turns DEBUG on every component:
```
[13:52:44] - INFO - absl - Unable to initialize backend 'tpu_driver': NOT_FOUND: Unable to find driver in registry given worker:
[13:52:44] - DEBUG - absl - Initializing backend 'gpu'
[13:52:44] - DEBUG - absl - Backend 'gpu' initialized█████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 8.78ba/s]
[13:52:44] - DEBUG - absl - Initializing backend 'tpu'
[13:52:44] - INFO - absl - Unable to initialize backend 'tpu': INVALID_ARGUMENT: TpuPlatform is not available. | 0/1 [00:00<?, ?ba/s]
[13:52:50] - DEBUG - absl - Compiling prim_fun (139939614586432) for args (ShapedArray(int32[]), ShapedArray(int32[])).
[13:52:50] - DEBUG - absl - Compiling prim_fun (139939615649152) for args (ShapedArray(int32[]),).
[13:52:50] - DEBUG - absl - Compiling prim_fun (139939615607552) for args (ShapedArray(uint32[]),).
[13:52:50] - DEBUG - absl - Compiling <lambda> (139939614754432) for args (ShapedArray(int32[]), ShapedArray(uint32[])).
[13:52:50] - DEBUG - absl - Compiling prim_fun (139939627045952) for args (ShapedArray(uint32[1]), ShapedArray(uint32[1])).
[13:52:50] - DEBUG - absl - Compiling _threefry_split (139939614754176) for args (ShapedArray(uint32[2]),).
[13:52:50] - DEBUG - urllib3.connectionpool - Starting new HTTPS connection (1): huggingface.co:443
[13:52:50] - DEBUG - urllib3.connectionpool - https://huggingface.co:443 "HEAD /google/mt5-small/resolve/main/flax_model.msgpack HTTP/1.1" 302 0
[13:52:50] - DEBUG - filelock - Attempting to acquire lock 139939610592016 on /home/stas/.cache/hugging
```
@patil-suraj, do you know why this is done?<|||||>Ah, it's because logging level wasn't set, this is probably a possible fix:
```
diff --git a/examples/flax/language-modeling/run_t5_mlm_flax.py b/examples/flax/language-modeling/run_t5_mlm_flax.py
index 4a66a3cd5..552ccd5b3 100755
--- a/examples/flax/language-modeling/run_t5_mlm_flax.py
+++ b/examples/flax/language-modeling/run_t5_mlm_flax.py
@@ -492,7 +492,7 @@ def main():
# Setup logging
logging.basicConfig(
format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
- level="NOTSET",
+ level="INFO",
datefmt="[%X]",
)
```
same should probably be applied to: `examples/flax/language-modeling/run_mlm_flax.py`
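And if it's only absl's own `Compiling ...` chatter that's annoying, its verbosity can also be raised directly (just an extra knob - the `NOTSET` level above is still the root cause):
```python
import absl.logging

# keep absl's own logger at WARNING so the per-compilation DEBUG spam goes away
absl.logging.set_verbosity(absl.logging.WARNING)
```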
<|||||>Please check that https://github.com/huggingface/transformers/pull/15129 solves the issue for you, @stefan-it |
transformers | 14,908 | closed | Can't load tokenizer for 'microsoft/wavlm-base' when using Wav2Vec2Processor as in docs | # overview
Thanks for adding `WavLM` guys! I wanted to try it out and ran into some issues which I am reporting (hopefully clearly) here.
the transformers docs say to use `Wav2Vec2Processor` as the processor/tokenizer for `WavLM`. Transformers 4.15.0 tells me it can't find a suitable tokenizer when running the following line `processor = Wav2Vec2Processor.from_pretrained('microsoft/wavlm-base')` verbatim from the [docs](https://huggingface.co/docs/transformers/model_doc/wavlm#transformers.WavLMForCTC). I received the error originally when trying to add to [a branch in my repo](https://github.com/pszemraj/vid2cleantxt/tree/model-updates) and replicated the issue [on Google Colab](https://colab.research.google.com/gist/pszemraj/ec39b013e14e5ec6ddcda74cc1741edb/transformers-4-15-0-wavlm-test.ipynb).
## Environment info
- `transformers` version: 4.15.0
- Platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.12
- PyTorch version (GPU?): 1.10.0+cu111 (False)
- Tensorflow version (GPU?): 2.7.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
-
### Who can help
@patrickvonplaten , @anton-l, @sgugger
## Information
I am using `wavLM` which seems to inherit some things from `wav2vec2`.
The problem arises when using:
* [x] the official example scripts: (give details below)
The tasks I am working on is:
automatic speech recognition - ASR
## To reproduce
Steps to reproduce the behavior:
- run the code listed under [WavLMForCTC](https://huggingface.co/docs/transformers/model_doc/wavlm#transformers.WavLMForCTC) after a clean install.
- this is replicated in a colab gist [here](https://colab.research.google.com/gist/pszemraj/ec39b013e14e5ec6ddcda74cc1741edb/transformers-4-15-0-wavlm-test.ipynb)
### copy paste of error
in case this saves time:
```
OSError: Can't load tokenizer for 'microsoft/wavlm-base'. Make sure that:
- 'microsoft/wavlm-base' is a correct model identifier listed on 'https://huggingface.co/models'
(make sure 'microsoft/wavlm-base' is not a path to a local directory with something else, in that case)
- or 'microsoft/wavlm-base' is the correct path to a directory containing relevant tokenizer files
During handling of the above exception, another exception occurred:
OSError Traceback (most recent call last)
/usr/local/lib/python3.7/dist-packages/transformers/tokenization_utils_base.py in from_pretrained(cls, pretrained_model_name_or_path, *init_inputs, **kwargs)
1734 msg += f"- or '{revision}' is a valid git identifier (branch name, a tag name, or a commit id) that exists for this model name as listed on its model page on 'https://huggingface.co/models'\n\n"
1735
-> 1736 raise EnvironmentError(msg)
1737
1738 for file_id, file_path in vocab_files.items():
OSError: Can't load tokenizer for 'microsoft/wavlm-base'. Make sure that:
- 'microsoft/wavlm-base' is a correct model identifier listed on 'https://huggingface.co/models'
(make sure 'microsoft/wavlm-base' is not a path to a local directory with something else, in that case)
- or 'microsoft/wavlm-base' is the correct path to a directory containing relevant tokenizer files
```
## Expected behavior
I want it to load wavLM and transcribe spoken audio.
---
thanks for all your work and let me know if I can provide any more info!
| 12-23-2021 18:06:39 | 12-23-2021 18:06:39 | Hey @pszemraj,
I'm sorry the documentation is really bad here :-/ I'll fix this asap.
`wavlm-base` is just the pretrained model of wavlm. It has no character prediction (CTC) head and therefore it also cannot have a tokenizer.
If you want to try out a fine-tuned wavlm checkpoint you could try out one of those:
https://huggingface.co/models?other=wavlm_libri_finetune
*e.g.*
```python
>>> from transformers import Wav2Vec2Processor, WavLMForCTC
>>> from datasets import load_dataset
>>> import torch
>>> dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
>>> sampling_rate = dataset.features["audio"].sampling_rate
>>> processor = Wav2Vec2Processor.from_pretrained('patrickvonplaten/wavlm-libri-clean-100h-base-plus')
>>> model = WavLMForCTC.from_pretrained('patrickvonplaten/wavlm-libri-clean-100h-base-plus')
>>> # audio file is decoded on the fly
>>> inputs = processor(dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt")
>>> logits = model(**inputs).logits
>>> predicted_ids = torch.argmax(logits, dim=-1)
>>> # transcribe speech
>>> transcription = processor.batch_decode(predicted_ids)
>>> # compute loss
>>> with processor.as_target_processor():
... inputs["labels"] = processor(dataset[0]["text"], return_tensors="pt").input_ids
>>> loss = model(**inputs).loss
```<|||||>@anton-l @sgugger @LysandreJik @NielsRogge - besides the bad documentation on my part, this error seems to happen quite a bit:
- https://github.com/microsoft/UniSpeech/issues/15
- https://github.com/huggingface/transformers/issues/14214
I suspect the general problem to be a bit as follows. In NLP, `transformers` users know that no matter what "head" of a model is chosen (whether it's `BertModel`, `BertForPreTraining` or `BertForMaskedLM`) - one always needs a tokenizer. So for every BERT repo the following works:
`BertTokenizer.from_pretrained(...)`.
Now the problem for speech is that actually only the `...ForCTC` head requires **both** a feature extractor and a tokenizer, all other classes `...ForPreTraining`, `...Model`, etc... **only** need the feature extractor and calling `Wav2Vec2Processor` on any class other than `...ForCTC` actually fails. `....ForCTC` is however the most used class, so it's not an edge case.
This doesn't seem to be a good solution for now - it's not intuitive. The reason why `Wav2Vec2Processor` was added back then was mainly so that the API fits better with the general `transformers` API which was always 1 model object and 1 processor object for nlp. To force ASR to also have 1 model object and 1 processor object (instead of 2 - being tokenizer and feature extractor), `Wav2Vec2Processor` was created. Besides all the problems with `Wav2Vec2Processor` (see discussion here: https://github.com/huggingface/transformers/pull/14881), I still think forcing a 1 model, 1 processor API is a very strong argument to have a `Wav2Vec2Processor` class and it's probably difficult to remove it now anyways.
I see two options here:
1) We allow to load a `Wav2Vec2Processor` from a repo without a tokenizer and only throw an error when `batch_decode` or `decode` is called. This way, people can happily use all wav2vec2 models with the `Wav2Vec2Processor` and we can throw a nice error message when users try to transcribe with pretrained models only. The problem here is that it defeats a bit the purpose of having an independent `Wav2Vec2FeatureExtractor` class as one could just always use `Wav2Vec2Processor`....
2) We make it somehow crystal clear that only fine-tuned Wav2Vec2 speech recognition modes should use `Wav2Vec2Processor` and that all other Wav2Vec2 models should use the `...FeatureExtractor` object. The big drawback is here is obviously that users have to remember when to use `...Processor` and when to use `...FeatureExtractor` which is not ideal IMO
So I tend to 1).
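To make 1) a bit more concrete, here is a rough sketch of the loading logic I have in mind (simplified pseudo-implementation, not the actual code):
```python
from transformers import Wav2Vec2CTCTokenizer, Wav2Vec2FeatureExtractor


class Wav2Vec2Processor:
    def __init__(self, feature_extractor, tokenizer=None):
        self.feature_extractor = feature_extractor
        self.tokenizer = tokenizer

    @classmethod
    def from_pretrained(cls, pretrained_model_name_or_path, **kwargs):
        feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained(pretrained_model_name_or_path, **kwargs)
        try:
            tokenizer = Wav2Vec2CTCTokenizer.from_pretrained(pretrained_model_name_or_path, **kwargs)
        except OSError:
            # pretrained-only repo: no vocab.json -> load without a tokenizer
            tokenizer = None
        return cls(feature_extractor, tokenizer)

    def batch_decode(self, *args, **kwargs):
        if self.tokenizer is None:
            raise ValueError("This processor was loaded without a tokenizer, so it cannot decode ids to text.")
        return self.tokenizer.batch_decode(*args, **kwargs)
```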
Curious to hear if you guys have other (better) ideas
<|||||>BTW, I can very well see the same problem for vision where one model type, *e.g.* ViT only needs a feature extractor for vision classification, but a feature extractor and tokenizer for image captioning <|||||>@patrickvonplaten I agree with option 1, it would significantly streamline our examples and tests.
Also, this PR that I drafted a while back could add some clarity (gotta change other loaders for consistency to merge it): https://github.com/huggingface/transformers/pull/14519<|||||>For me there are two separate issues:
- the detailed doc of the model is using the wrong class:
For a `BertModel` we show the use of a `BertTokenizer`, not `AutoTokenizer`. So here the code samples should use the appropriate class. The `XxxFeatureExtractor` when the model only has one modality, and the `XxxProcessor` when it has two
- the user should be able to use the same `AutoClass` for all the models of an architecture, which corresponds to your option 1.
I agree this require users to be able to load a processor without a tokenizer, and it's fine to enable this when some of the models can function without it.<|||||>@patrickvonplaten chiming in on the specifics: I confirm that
`processor = Wav2Vec2Processor.from_pretrained('patrickvonplaten/wavlm-libri-clean-100h-base-plus')`
(also tried the `large`) works, thanks for the heads up.
FWIW as a user I think option 1 integrates into my workflow(s) / testing more intuitively.
Thanks again + happy holidays! |
transformers | 14,907 | closed | [jax] absl issues | update:
So the problem was that `jax` wasn't detecting a GPU when there was one.
**The solution is to install `jax` correctly for cuda and it is:**
```
pip install --upgrade "jax[cuda]" -f https://storage.googleapis.com/jax-releases/jax_releases.html
```
more details: https://github.com/huggingface/transformers/issues/14907#issuecomment-1000468384
will auto-close this issue when https://github.com/huggingface/transformers/pull/14909 is merged.
---------------------
Original:
```
$ python -c "import transformers.testing_utils"
INFO:absl:Unable to initialize backend 'tpu_driver': NOT_FOUND: Unable to find driver in registry given worker:
INFO:absl:Unable to initialize backend 'gpu': NOT_FOUND: Could not find registered platform with name: "cuda". Available platform names are: Interpreter Host
INFO:absl:Unable to initialize backend 'tpu': INVALID_ARGUMENT: TpuPlatform is not available.
WARNING:absl:No GPU/TPU found, falling back to CPU. (Set TF_CPP_MIN_LOG_LEVEL=0 and rerun for more info.)
```
The issue comes from `absl-py` package. Don't know anything about it.
Could we please fix it, as this is a JAX issue which impacts everybody and not only JAX users?
The only way I found to turn it off is by explicitly disabling `USE_JAX=0`
I tried upgrading the libs
```
pip install jax jaxlib absl-py -U
```
but the issue is still there, probably did come in the recent libraries:
This seems to be related: https://github.com/huggingface/transformers/issues/12434 but it was never resolved.
The `transformers` was set up to carefully not load any of torch/tf/jax until one of them is actually used. But it doesn't seem to work here.
Thank you.
@patil-suraj | 12-23-2021 17:54:43 | 12-23-2021 17:54:43 | So this is one of the triggers:
```
python -c "import jax; jax.default_backend()"
```
and it is looking for TPUs:
```
TF_CPP_MIN_LOG_LEVEL=0 python -c "import jax; jax.default_backend()"
2021-12-23 10:06:40.249749: I external/org_tensorflow/tensorflow/core/tpu/tpu_initializer_helper.cc:94] libtpu.so already in use by another process. Run "$ sudo lsof -w /dev/accel0" to figure out which process is using the TPU. Not attempting to load libtpu.so in this process.
2021-12-23 10:06:40.249776: I external/org_tensorflow/tensorflow/core/tpu/tpu_api_dlsym_initializer.cc:116] Libtpu path is: libtpu.so
2021-12-23 10:06:40.251821: I external/org_tensorflow/tensorflow/core/tpu/tpu_executor_dlsym_initializer.cc:68] Libtpu path is: libtpu.so
2021-12-23 10:06:40.709202: I external/org_tensorflow/tensorflow/compiler/xla/service/service.cc:171] XLA service 0x55bc1aee0390 initialized for platform Interpreter (this does not guarantee that XLA will be used). Devices:
2021-12-23 10:06:40.709222: I external/org_tensorflow/tensorflow/compiler/xla/service/service.cc:179] StreamExecutor device (0): Interpreter, <undefined>
2021-12-23 10:06:40.711252: I external/org_tensorflow/tensorflow/compiler/xla/pjrt/tfrt_cpu_pjrt_client.cc:165] TfrtCpuClient created.
2021-12-23 10:06:40.711669: I external/org_tensorflow/tensorflow/stream_executor/tpu/tpu_platform_interface.cc:74] No TPU platform found.
WARNING:absl:No GPU/TPU found, falling back to CPU. (Set TF_CPP_MIN_LOG_LEVEL=0 and rerun for more info.)
```
<|||||>The solution is to install `jax` correctly for cuda and it is:
```
pip install --upgrade "jax[cuda]" -f https://storage.googleapis.com/jax-releases/jax_releases.html
```
not sure how we could help users with this as our auto-dependencies installer can't automatically know if cuda version is needed or not.
It's still looking for TPUs though:
```
python -c "import transformers.testing_utils"
INFO:absl:Unable to initialize backend 'tpu_driver': NOT_FOUND: Unable to find driver in registry given worker:
INFO:absl:Unable to initialize backend 'tpu': INVALID_ARGUMENT: TpuPlatform is not available.
```
but at least it finds the GPU now<|||||>Posted solution at the top of the OP, plus https://github.com/huggingface/transformers/pull/14909 got merged so closing this one. |
transformers | 14,906 | closed | Better logic for getting tokenizer config in AutoTokenizer | # What does this PR do?
In this PR, we make the logic in `AutoTokenizer` a little bit better by checking the tokenizer config is in the list of files of the repo when trying to get it, instead of bluntly trying to load it. This makes the function fail early with a clear error message if the repo name passed contains a typo. | 12-23-2021 17:28:35 | 12-23-2021 17:28:35 | Thanks! |
transformers | 14,905 | closed | Generate does not take into account config.decoder.eos_token_id | As reported by some people (see https://github.com/NielsRogge/Transformers-Tutorials/issues/53 and on the [forum](https://discuss.huggingface.co/t/trocr-repeated-generation/12361)), the `generate()` method currently does not take into account `config.decoder.eos_token_id`, only `config.eos_token_id`, to properly stop generation.
Hence, models that are made using `EncoderDecoderModel`/`VisionEncoderDecoderModel`/`SpeechEncoderDecoderModel` will not properly stop generation if `config.eos_token_id` is not set.
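In the meantime, a workaround is to mirror the decoder value onto the top-level config, or to pass `eos_token_id` explicitly to `generate()`; here `model` and `inputs` stand for the user's encoder-decoder model and its prepared inputs:
```python
# copy the decoder's eos token id to the top-level config ...
model.config.eos_token_id = model.config.decoder.eos_token_id

# ... or pass it explicitly at generation time
generated_ids = model.generate(inputs, eos_token_id=model.config.decoder.eos_token_id)
```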
cc @patrickvonplaten @patil-suraj | 12-23-2021 17:18:02 | 12-23-2021 17:18:02 | Hmm, yeah I think I'm fine with adding some `if - statements` to the `generate()` method<|||||>Do you want to open a PR for it? :-)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>i will try to fix it<|||||>I opened a pull request, https://github.com/huggingface/transformers/pull/15403, but the CI failed. By analysing the CI failure log, I found that there is a hidden logic: if you don't pass an eos_token_id, the model is expected to generate until max_length. That is what the code does
https://github.com/huggingface/transformers/blob/16d4acbfdb547cb922361ba07a13de12e1503fb8/tests/test_modeling_encoder_decoder.py#L404
So, adding self.config.decoder.eos_token_id simply is not enough.
cc @NielsRogge @patrickvonplaten <|||||>Hi, the above pull request mentioned offhandedly that this had been fixed, is that the case or is this still open as indicated?<|||||>Seems like this got fixed, closing the issue. |
transformers | 14,904 | closed | Update ONNX docs | # What does this PR do?
This PR gives the ONNX part of the documentation a much needed update to:
* Explain why we have three ONNX configuration objects (`OnnxConfig`, `OnnxConfigWithPast`, and `OnnxSeq2SeqConfigWithPast`) and how to interpret them.
* Explain what we mean by the `--features` argument of the export CLI.
* Provide an end-to-end guide on how to export a custom model, using the DistilBERT implementation as an example. I chose this approach so readers can run actual code instead of relying on a hypothetical unsupported architecture.
I also removed the deprecated section that was based on the `convert_graph_to_onnx.py` script as it seems to confuse people on which API to use. I couldn't find clear instructions on how the deprecation cycle works for the documentation of scripts like `convert_graph_to_onnx.py`, so please let me know if I should re-instate this part of the docs.
I also tweaked the help messages in the `transformers.onnx` CLI to make things a bit clearer to the end user.
The diff on the docs is quite huge, so I recommend reading the file directly. | 12-23-2021 17:17:32 | 12-23-2021 17:17:32 | This is great, thanks for working on it @lewtun! Pinging @stevhliu and @sgugger for knowledge.<|||||>Thank you for the feedback @sgugger 🙏 !
I've now added the doctest `>>>` syntax to all the Python code blocks and removed the permalinks to the class references. OK to merge once the CI tests pass? |
transformers | 14,903 | closed | Fix failing GPU trainer tests | # What does this PR do?
As discussed, this PR skips the fairscale tests until we update the container running them, and fixes the multiGPU failing test in the Trainer. | 12-23-2021 17:03:19 | 12-23-2021 17:03:19 | |
transformers | 14,902 | closed | [Tests] Update speech diarization and WavLM tolerances | # What does this PR do?
Turns out the difference in `nn.GroupNorm` between torch `1.9` and `1.10` (https://github.com/pytorch/pytorch/issues/67907) is more noticeable than I initially expected. This PR updates the tolerances on some newer tests where the reference values were obtained on `1.10`. When we update the CI env to `1.10`, these can be safely reverted.
| 12-23-2021 16:45:52 | 12-23-2021 16:45:52 | `FAILED examples/pytorch/test_examples.py::ExamplesTests::test_run_image_classification` is unrelated, merging... |
transformers | 14,901 | closed | Adding tokens to pretrained model "Helsinki-NLP/opus-tatoeba-en-ja" using tokens from vietnamese not working | Hello,
I am working on the code for a paper on multilingual models with multi-stage fine-tuning. I am using the Hugging Face Trainer API to fine-tune a pretrained English-to-Japanese model on a dataset containing Vietnamese sentences. Before training, I want to extend the pretrained tokenizer by adding tokens from another pretrained tokenizer that recognizes Vietnamese:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, MBart50Tokenizer

model_checkpoint = "Helsinki-NLP/opus-tatoeba-en-ja"
model_mbart = "facebook/mbart-large-50-one-to-many-mmt"
mbart_tokenizer = MBart50Tokenizer.from_pretrained(model_mbart)
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
tokenizer.add_tokens([vocab for vocab in mbart_tokenizer.get_vocab().keys()])
model = AutoModelForSeq2SeqLM.from_pretrained(model_checkpoint)
model.resize_token_embeddings(len(tokenizer))
```
Before modifying the tokenizer, tokenizing a Vietnamese sentence gave me 72 tokens; with the modified tokenizer I get 66 tokens, whereas with mbart50 I get 24.
Can you please tell me what I am doing wrong?? | 12-23-2021 16:17:22 | 12-23-2021 16:17:22 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 14,900 | closed | [AutoTokenizer] Fix incorrect from pretrained | # What does this PR do?
Fixes `tests/test_tokenization_auto.py::AutoTokenizerTest::test_tokenizer_identifier_non_existent` on master.
Currently:
```python
from transformers import AutoTokenizer; AutoTokenizer.from_pretrained('dont_exist')
```
yields an incorrect error message
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 12-23-2021 16:14:54 | 12-23-2021 16:14:54 | > Thanks for the hot fix! Will make a nicer one in a couple of hours.
Perfect |
transformers | 14,899 | closed | GPT-J: Implement Memory Efficient Attention | Just a note that it would be great to add memory efficient attention ( https://github.com/AminRezaei0x443/memory-efficient-attention ) for larger models. It might be nice to add an option to reuse the memory of input hidden states, too, if the user doesn't need them. | 12-23-2021 16:01:23 | 12-23-2021 16:01:23 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 14,898 | closed | Add 3D attention_mask input support | Add 3D attention_mask input support. (The PyTorch version supports it, but TF can't.)
Sometimes we need to input custom 3D attention_mask.
```python
import tensorflow as tf
from transformers import TFBertModel
plm_name = "bert-base-chinese"
plm = TFBertModel.from_pretrained(plm_name, return_dict=False)
i = tf.ones((1, 512), dtype=tf.int32)  # input_ids
m = tf.ones((1, 512, 512), dtype=tf.int32)  # 3D attention_mask of shape (batch_size, seq_len, seq_len)
a, b = plm((i, m, i))  # tuple input: (input_ids, attention_mask, token_type_ids)
print(a)
print(b)
```
# What does this PR do?
Fixes # (issue) Add 3D attention_mask input support.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@LysandreJik @sgugger @patil-suraj
| 12-23-2021 14:02:35 | 12-23-2021 14:02:35 | This code will be copied to RemBERT and RoBERTa as well, because those inherit the same `call()` method. I think we can accept the PR with just a BERT test, though I'm not sure - @sgugger WDYT?<|||||>I think it's important to also have parity for the rest of the TF models - can all models accept the 3D attention mask? How hard is it to propagate this to all models? I don't think it's best if BERT can accept differing inputs than other TF models. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 14,897 | closed | add custom stopping criteria to human eval script | This PR adds the new custom `stopping_criteria` feature in `generate` (#14779) to the code evaluation script at `scripts/human_eval.py`.
In HumanEval the task is to complete a function and with this feature the generation is stopped once the function body is complete. To do this a set of keywords, named end-of-function (EOF), is used. If any of these EOF keywords appear in the generation the function body must be finished. E.g. if `\ndef` is in the generated sequence this means that the current body was completed.
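For reference, such a criterion can be sketched roughly as follows (class name and the exact EOF keyword set here are illustrative, not necessarily what the script uses):
```python
from transformers import StoppingCriteria, StoppingCriteriaList


class EndOfFunctionCriteria(StoppingCriteria):
    """Stop generation once every sequence in the batch contains one of the EOF keywords."""

    def __init__(self, start_length, eof_strings, tokenizer):
        self.start_length = start_length  # number of prompt tokens to skip
        self.eof_strings = eof_strings    # e.g. ["\nclass", "\ndef", "\n#", "\nprint"]
        self.tokenizer = tokenizer

    def __call__(self, input_ids, scores, **kwargs):
        # only inspect the newly generated part, not the prompt
        decoded = self.tokenizer.batch_decode(input_ids[:, self.start_length:])
        return all(any(eof in gen for eof in self.eof_strings) for gen in decoded)


# passed to generate() / the pipeline via:
# gen_kwargs = {"stopping_criteria": StoppingCriteriaList([EndOfFunctionCriteria(...)]), ...}
```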
This feature increases the generation speed for HumanEval by 2-3x (from 8h to ~3h). | 12-23-2021 13:08:53 | 12-23-2021 13:08:53 | Human eval crashes for me, and I believe this PR is to blame. Here's the stack:
```
Traceback (most recent call last):
File "human_eval.py", line 126, in <module>
main()
File "human_eval.py", line 106, in main
task_generations.extend(complete_code(pipe, prompt, num_completions=args.batch_size, **gen_kwargs))
File "human_eval.py", line 50, in complete_code
code_gens = pipe(prompt, num_return_sequences=num_completions, **gen_kwargs)
File "/opt/conda/lib/python3.7/site-packages/transformers/pipelines/text_generation.py", line 150, in __call__
return super().__call__(text_inputs, **kwargs)
File "/opt/conda/lib/python3.7/site-packages/transformers/pipelines/base.py", line 924, in __call__
return self.run_single(inputs, preprocess_params, forward_params, postprocess_params)
File "/opt/conda/lib/python3.7/site-packages/transformers/pipelines/base.py", line 931, in run_single
model_outputs = self.forward(model_inputs, **forward_params)
File "/opt/conda/lib/python3.7/site-packages/transformers/pipelines/base.py", line 880, in forward
model_outputs = self._forward(model_inputs, **forward_params)
File "/opt/conda/lib/python3.7/site-packages/transformers/pipelines/text_generation.py", line 165, in _forward
generated_sequence = self.model.generate(input_ids=input_ids, **generate_kwargs) # BS x SL
File "/opt/conda/lib/python3.7/site-packages/torch/autograd/grad_mode.py", line 28, in decorate_context
return func(*args, **kwargs)
File "/opt/conda/lib/python3.7/site-packages/transformers/generation_utils.py", line 1027, in generate
**model_kwargs,
TypeError: sample() got multiple values for keyword argument 'stopping_criteria'
```
I'm using Transformers 4.12.2. It seems like `sample` call in `generation_utils.py` already has `stopping_criteria` keyword argument set. <|||||>Indeed, this requires `transformers==4.15.0`, where handling of custom stopping criteria was introduced. See changes to `requirements.txt`. Hope upgrading works! |
transformers | 14,896 | closed | Large audio chunking for the existing ASR pipeline | # What does this PR do?
This adds audio chunking with fixed-sized chunks as a first step to enabling audio streaming in ASR pipelines (ref: https://github.com/huggingface/transformers/pull/14250)
In this iteration there's no ffmpeg streaming or VAD, just simple slicing of inputs with padding, so that we can review the general pipeline **from the modeling side**.
Here's an illustration of the sliding window approach used for iterating over chunks:

To see how this will roughly look for real-time inference (when we implement it on top), check out this (admittedly old) demo: https://huggingface.co/spaces/anton-l/youtube-subs-wav2vec/ | 12-23-2021 12:39:15 | 12-23-2021 12:39:15 | @Narsil sorry, I only now remembered about the work in [#14225](https://github.com/huggingface/transformers/pull/14225)!
Indeed, the `ChunkPipeline` API is much cleaner and I can adapt this PR to work with it.
Re: 2-4) totally agree, these points should be refactored as you suggest!
Re: 5) If I understand correctly, you suggest using `ChunkDataset/VADChunkDataset` only for offline datasets? Then we would still need to take parts of their chunking logic outside, to reuse them for streaming inputs
<|||||>> @Narsil sorry, I only now remembered about the work in [#14225](https://github.com/huggingface/transformers/pull/14225)! Indeed, the `ChunkPipeline` API is much cleaner and I can adapt this PR to work with it.
It got autoclosed so even I struggled to find it again yesterday :)
>
> Re: 2-4) totally agree, these points should be refactored as you suggest!
> Re: 5) If I understand correctly, you suggest using `ChunkDataset/VADChunkDataset` only for offline datasets? Then we would still need to take parts of their chunking logic outside, to reuse them for streaming inputs
Actually, we might be able to make them interoperable too.
Well I am mentioning `Dataset` but actually the pipeline works with any `generator`, so we can use that for streaming too! This is actually what `ffmpeg_microphone` does.
I like mentioning `Dataset` since a dataset has a fixed number of elements, meaning `tqdm` and the like can infer a nice progress bar and time estimates. But everything works quite the same with a `generator` except:
- `num_workers` cannot be used with values >1 (fetching from a generator from multiple threads is asking for trouble since you need to iterate on ALL objects on EVERY thread, even if you skip some on some threads, most likely the generator will already consume resources and time)
- No nice progress bar and time estimate with `tqdm`.
So we could imagine something like:
```python
dataset = datasets.load_dataset(...)
vad_dataset = vad_cut(dataset, threshold=5, ..)
chunk_dataset = chunk_audio(dataset, chunk_len_ms=200)
```
or
```python
microphone_generator = ffmpeg_microphone(...)
for chunk in pipe(chunk_audio(microphone, chunk_len_ms=200)):
print(chunk)
```
I am unsure it makes total sense and that we should make ALL of them interoperable and such, but there definitely could be a nice way to make those chunking iterator composable (just like `torchvision.transforms` can be for instance).
It would be a very nice thing indeed.<|||||>Thanks for the feedback @Narsil! I don't fully agree here - happy to discuss this a bit more (also in a call). Maybe I'm also not seeing something here.
> I think it's a good intention PR, but IMHO we should probably refactor to pull everything out of the pipeline code for several reasons. I actually started work on #14250 in a similar fashion then scrapped everything. The reasons are
>
> 1. it doesn't play nice with the auto batching / `DataLoader` framework explaining some failing tests most likely:
>
> ```python
> pipe = pipeline(..)
> for out in pipe(dataset, batch_size=32):
> # do something with out
> ```
>
> Using this allows users to adjust the batch_size relative to the hardware they have to maximize performance.
>
> This unfortunately means:
>
> * No loop in `_forward`.
> * No loop/batch in `preprocess`.
>
> This PR could enable it back again: #14225 (it was the main reason I started this PR in the first place).
Here, I don't really know whether it plays nicely with auto batching or not. I agree that it is very important to make it work nicely with auto batching, since I can see companies being interested in transcribing tons of audio files offline, and it would be nice to have that working fast.
However, for me it's at least equally important to make sure it's **very** simple for the user to transcribe a single audio file. The main applications for this feature in my opinion are:
- demo widget. Orgs on the hub will want to demo their models to clients, internally, etc... we have already seen demand for this
- should be easy to build a space with this feature to transcribe long audio files of e.g. Videos (YouTube, TED) on the fly
IMO those features are **more** important than auto batching / `Data Loader`. A necessary requirement to make chunking easy to use is that one doesn't have to wrap her/his audio file in some kind of data loader, wrapper, datasets, etc. I feel pretty strongly about making it possible to just do:
```python
transcription = asr("<path/to/audio/file>", chunking_length_in_s=2)
```
> 2. The arguments are not in `_sanitize_parameters` which means they will be only used in the initalization in the pipeline and not at call time. Since historically they were a lot of issues there, now all arguments are enabled in both which saves user sanity without figuring out where the arguments should be defined (look at other pipelines to see how they are implemented, its not a big change code wise, and the current doc is correct).
Agree. We can however easily change that.
> 3. `chunking`, `chunking_len`, `chuking_start_padding_len` and `chunk_end_padding_len` are not making sense independently.
> For arguments, it is always easier for the user if there's no interaction between arguments and you can independently modify any of them. Here we could imagine having a single `chunking` that be either `None` or a triplet `(len, start_padding, stop_padding)` for instance. That would reduce a lot of confusion IMHO.
Disagree here. I really don't like tuple input arguments as one never knows which index of the tuple stands for what and this has to be looked up again in the docs every time someone uses the pipelines. We don't have many tuple args in `transformers` in general, but rather prefer "simple" args (in the config, function, etc...).
I also think they are quite independent from each other - changing one arg "`chunking_len`" doesn't mean that `"chunk_end_padding_len"` has to be changed either.
My 5 cents here are:
- 1.) Remove the `"chunking"` input arg. I don't like boolean flags either, and I think whether the input should be chunked or not should be controlled by `"chunking_len"`: if `"chunking_len"` is > 0, chunking is enabled, otherwise it is disabled.
- 2.) (nit) I would write out `len` to `length` and IMO `chunking_padding_left` is easier to understand then "...start..."
> 4. `chunking_start_padding_len` is expressed in number of samples which again depend on `sampling_rate` which might be different for different models, meaning quality might change if you use a different model. I would much prefer have `start_padding_len_ms` for instance since as a user it makes more sense to adjust in that space than in raw examples length space.
Agree. Nice observation!
> 5. If we want to add `webrtc` (which exists in #14250 ) then we have to add a whole bunch of new arguments, which would conflict with these added ones.
Not sure I fully agree here. Why would those new arguments conflict with `chunking_length`? IMO, `chunking_length` should default to either 0 or `None`, *i.e.* be disabled. In the future we could imagine a `vad="webrtc"` argument or a `vad=WebRTCVAD("<all_necessary_args_here>")` argument, and I don't really see why it would conflict with `"chunking_length"` or chunking_length_left/right. `"chunking_length"` can still be used (and refer to the maximum allowed chunk length) and the other two arguments could simply be set to 0.
> IMO it would make a whole more sense to instead choose the following form for users:
>
> ```python
> pipe = pipeline(...)
> dataset = load_dataset(...)
> chunk_dataset = ChunkDataset(dataset, length_ms, start_pad_ms, end_pad_ms)
>
> for chunk in pipe(dataset):
> print(chunk)
> ```
>
> In this way we could add `VADChunkDataset(dataset, threshold=2, frame_size_ms=20)` for instance quite orthogonally without cluttering the pipeline with a ton of arguments.
>
This looks clean, but it's not easy to understand for the user. What if I just have a single long audio file that I want to transcribe? Do I first need to put it in some kind of `dataset` format? Users won't make that effort, they'll simply stop at this point. This relates to 1.), and here it's much more important to make this feature easy to try out than to have something super extendable/general. For me pipelines always had the spirit of "2 lines of code is enough" and this starts to look much more complex for the user.
Also given that we already have a bunch of input arguments for pipelines (`generate()` takes 50+ input arguments), I don't really see a problem with adding many new arguments; plus we can also mark something as experimental and change it later.
If you are very strongly against just adding input arguments @Narsil, maybe we can find a compromise where we offer some kind of object `ChunkWithPadding` that is required to have a `chunk(...)` method and can wrap all kinds of inputs, *e.g.*:
```python
from transformers import pipeline, ChunkWithPadding
asr = pipeline("automatic_speech_recognition")
chunked_audio = ChunkWithPadding("path/to/audio/file", "<args>")
asr(chunked_audio) # here inside we call `.chunk()` at some point
```
But I don't think that's very clean and I'd much rather prefer to do:
```python
from transformers import pipeline, ChunkWithPadding
asr = pipeline("automatic_speech_recognition", chunking_length_in_s=10)
asr("/path/to/audio")
```
> It also enables more complex stuff like `ffmpeg_microphone` where the audio samples actually overlap and some results are "temporary" (replaced with other chunks later).
>
> It also enables fast feedback because you get the results as soon as they come in (without waiting when it's an hour long audio sample for instance).
>
> The main drawback from this approach is that you don't have any information within `chunk` to know from which file it comes from or which chunk it is. It might be important to recreate the end result as a user. But IMHO, it seems easier to start passing those information through the pipeline so that they are available within `chunk` to allow the user to assemble chunks as they see fit.
>
> I am happy to discuss anything that I might have overlooked in this analysis and why this PR might still be the right solution. Again this was also my first idea.
=> So to summarize, my by far biggest concern here is that the feature is too difficult to use for the user. I think we should think more about the user experience and not so much about making it super general and 100% clean from the inside. Pipelines are IMO the part of `transformers` where we can and should absorb complexity so that the user has a **very** good user experience at the cost of maybe some ugly code inside `pipelines`.
It would be nice to focus here on a first solution that works well and has a nice user experience before thinking about how this feature could conflict with a feature that will potentially be added in the future. If I understand correctly, the "padding-chunk" approach works well in all kinds of settings and is also very lightweight: there are no necessary imports of other libraries etc. WebRTC seems to work only equally well, but is more data dependent (does it work well with noise, different languages?!), it has an important dependency and an additional model that needs to be loaded. So for me (if the above is correct), it's pretty clear that we should focus on the padding chunking approach in a first step. We don't even know yet whether users will use this a lot or not. It's very good to also think about how this would pan out with a future WebRTC integration, but I also don't really see a problem here with the argument `"chunking_length_in_s"`.
Regarding the technical details, I'm not really sure how to solve this, but I would also much rather add a small hack or a new design, etc. instead of forcing the user to load a single audio sample into some kind of dataset or generator.
Happy to jump on a call about this!
<|||||>Pretty big but important clarification I didn't get when I read the PR:
The purple thing on the diagram is coming from the real audio, and the pipeline can cut on the green boundaries during decoding on the `ids` tensor (before CTC decoding), meaning we should get pretty much exactly the same decoding as when running on the full audio (as long as the full chunk, green + purple, covers the whole word).
The name `padding` is slightly misleading to me, since this is usually called `stride` (at least in `question-answering` and vision). I imagined the purple part was supposed to be `zeros`, meaning it would help the edge of the green data but not nearly as well as with real data (and could be done outside of the pipeline, which would be much harder with real data, since once you do CTC you lose all that green/purple information).
- `padding` : Fill in with zeros
- `stride`: Make overlapping data within samples
This approach could definitely work and be integrated in the pipeline (since the CTC step would otherwise lose that information).
- It needs to be CTC-only: it's unlikely to be sound for generative audio models, since there's no 1-1 mapping of audio samples to IDs.
- There are actually probably sane defaults for the stride length, since a word should rarely exceed 30s, and such samples do fit most user-grade GPUs. We can also have something like `qa`, where the `stride` is defined by default as 1/4 of `max_length` if my memory serves correctly. (See the sketch below.)
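
To make the stride idea concrete, here is a rough sketch of the cutting logic (this is *not* the actual pipeline code, and the parameter names are illustrative only):

```python
import numpy as np

def chunk_with_stride(waveform, chunk_len, stride_left, stride_right):
    # Chunks are at most `chunk_len` samples long and overlap their neighbours by
    # `stride_left`/`stride_right` samples of *real* audio; the logits produced for
    # the overlapping parts would be dropped before CTC decoding so that nothing
    # gets decoded twice.
    step = chunk_len - stride_left - stride_right
    for start in range(0, len(waveform), step):
        yield waveform[max(0, start - stride_left) : start + step + stride_right]

audio = np.zeros(16_000 * 60, dtype=np.float32)  # 1 minute of dummy 16kHz audio
chunks = list(chunk_with_stride(audio, chunk_len=16_000 * 10, stride_left=16_000, stride_right=16_000))
print(len(chunks), chunks[0].shape, chunks[1].shape)
```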
All in all, the current PR approach should work and be done like it is right now (with the utils for VAD and the like kept external, in the other PR). We just need to enable `ChunkPipeline` here to make it work properly with `batch_size` (merged since).<|||||>
Thanks a lot for summarizing everything here! After the call I very much agree that `stride_length_in_sec` is a better name indeed.<|||||>@patrickvonplaten Could you review https://github.com/huggingface/transformers/pull/14250 too? (It needs a rebase, but I think it would be nice if both PRs are available roughly at the same time.)<|||||>Let's do that.<|||||>Would be nice if we could make a short blog post showcasing how this feature works! Think we'll just need an hour-long audio clip or so (maybe a clean speech from someone, *e.g.* a US president) and then a couple of lines of code :-) <|||||>Probably don't even need the blog post. Think an entry here: https://discuss.huggingface.co/ would be enough |
transformers | 14,895 | closed | Add ViLT | # What does this PR do?
This PR adds [ViLT](https://arxiv.org/abs/2102.03334) (Vision and Language Transformer).
It's a very nice, minimal multi-modal model, as it only adds a text embedding layer to an existing ViT.
I've defined the following head models:
* `ViltForMaskedLM`
* `ViltForVisualQuestionAnswering`
* `ViltForNaturalLanguageVisualReasoning`
* `ViltForImageRetrievalTextRetrieval` (CLIP-like model).
To do:
- [x] add `ViltForNaturalLanguageVisualReasoning` to the tests. However, I do have a question here: it's the only model that requires `config.modality_type_vocab_size = 3` instead of 2. How can I handle this exception in the tests? I could do it like this:
```
for model_class in self.all_model_classes:
config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common()
config.return_dict = True
if model_class.__name__ == "ViltForNaturalLanguageVisualReasoning":
config.modality_type_vocab_size = 3
```
But that's not ideal as it would require overwriting each individual test.
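For reference, the separate-tester pattern I went with in the end looks roughly like this (heavily simplified — the real tester has many more attributes; see the update below):
```python
from transformers import ViltConfig

class ViltModelTester:
    # Simplified sketch: the real tester also defines tiny hidden sizes, image sizes, etc.
    def __init__(self, parent, modality_type_vocab_size=2):
        self.parent = parent
        self.modality_type_vocab_size = modality_type_vocab_size

    def get_config(self):
        return ViltConfig(modality_type_vocab_size=self.modality_type_vocab_size)


class ViltForNaturalLanguageVisualReasoningModelTester(ViltModelTester):
    def get_config(self):
        # NLVR2 pairs one text with two images, hence 3 modality type embeddings.
        return ViltConfig(modality_type_vocab_size=3)
```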
Update: fixed by creating a separate `ModelTester` for this particular model that overrides `get_config`. | 12-23-2021 11:11:40 | 12-23-2021 11:11:40 | I'd like the PR to be green (or mostly green) before reviewing.<|||||>@sgugger should be mostly green now.<|||||>Note: with the new build dev job merged, you can preview the doc [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_14895/en/model_doc/vilt) :-) <|||||>Great job merging this PR! The documentation will now be removed from the staging environment. |
transformers | 14,894 | closed | Set `run_name` in MLflowCallback | # What does this PR do?
Currently, when using `mlflow` integration to track experiments, runs are being logged as nameless. E.g. see below (image from #8519). This PR is a simple one-line fix of passing `args.run_name` (which is currently used for `wandb`) to `mlflow.start_run`. It also updates the doc of `args.run_name`.

Fixes #8519, #12841
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed.
@sgugger
| 12-23-2021 09:29:36 | 12-23-2021 09:29:36 | |
transformers | 14,893 | closed | support the trocr small models | # What does this PR do?
The current TrOCRProcessor does not support the TrOCR small models, since the small models use a sentencepiece-based tokenizer, i.e. the XLMRobertaTokenizer. This PR relaxes the tokenizer restriction in TrOCRProcessor and updates the corresponding documentation.
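With this change, a small checkpoint can be loaded directly through the processor, e.g. (sketch; requires `sentencepiece` to be installed):

```python
from transformers import TrOCRProcessor

# Small TrOCR checkpoints ship an XLM-RoBERTa (sentencepiece) tokenizer,
# which the processor used to reject.
processor = TrOCRProcessor.from_pretrained("microsoft/trocr-small-stage1")
print(type(processor.tokenizer).__name__)
```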
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@NielsRogge
| 12-23-2021 07:42:06 | 12-23-2021 07:42:06 | |
transformers | 14,892 | closed | [doc] bug in docstring conversion | There is an issue of rendering items in a docstring (in some places)
Here is an example:
https://huggingface.co/docs/transformers/main_classes/logging#transformers.utils.logging.set_verbosity
has a loose `<`
but I'm not sure where this one is coming from
It appears to be a messed up `<ul>` block:
```
Logging level, e.g., one of:</p>
<ul>
<li><code>transformers.logging.CRITICAL</code> or <code>transformers.logging.FATAL</code></li>
<li><code>transformers.logging.ERROR</code></li>
<li><code>transformers.logging.WARNING</code> or <code>transformers.logging.WARN</code></li>
<li><code>transformers.logging.INFO</code></li>
<li><code>transformers.logging.DEBUG</code></li>
<
</span></span>
</li></ul>
```
So the following chunk shouldn't be there:
```
<
</span></span>
</li>
```
It's a systemic issue, here are more examples:
https://huggingface.co/docs/transformers/main_classes/data_collator#transformers.DataCollatorWithPadding
and several after it.
@sgugger | 12-23-2021 04:13:42 | 12-23-2021 04:13:42 | It seems to be happening every time we have a list in a parameter description. Filing an [issue](https://github.com/huggingface/doc-builder/issues/70) on `doc-builder` for this, Mishig will have a look when he's back from vacation :-)
Closing on this side as there is nothing to do in Transformers. The generated MDX is correct. |
transformers | 14,891 | closed | Turn of do_sample for T0pp in Inference API | Hi,
I'm trying to use the Inference API with T0pp. I would like to set `do_sample=False` but it doesn't let me specify `parameters` as in the example [here](https://api-inference.huggingface.co/docs/python/html/detailed_parameters.html#summarization-task).
Code:
API_URL = "https://api-inference.huggingface.co/pipeline/text2text-generation/bigscience/T0pp"
headers = {"Authorization": f"Bearer {API_TOKEN}"}
payload = {"inputs": "Titanic was built in", "parameters": {"do_sample":False}}
data = json.dumps(payload)
response = requests.request("POST", API_URL, headers=headers, data=data)
ans = json.loads(response.content.decode("utf-8"))
print(ans)
Output I'm getting:
{'error': 'Parameters are not accepted for this specific model'}
My environment:
- `transformers` version: 4.12.5
- Platform: Linux-4.14.254-llgrid-10ms-x86_64-with-debian-buster-sid
- Python version: 3.7.11
- PyTorch version (GPU?): not installed (NA)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
Thank you! | 12-23-2021 04:01:43 | 12-23-2021 04:01:43 | cc @Narsil <|||||>Hi @feyzaakyurek ,
Thanks. Currently (and in the short term) this won't be available.
`bigscience/T0pp` is a very large model (40GB), which means we deploy it in a very specific way. Here we used `flax` and it's deployed on a TPU, for instance. This enables quite fast inference speed but it has drawbacks; for instance, `flax` is much less dynamic than `pytorch` when it comes to inference (the graph needs to be compiled to get those fast inference times).
That's not to say this will never be available: we started some work to make the pipelines (they power the API) support `flax` in a more integrated way. You can track progress here: https://github.com/huggingface/transformers/pull/14356. Hopefully we can bridge the gap and enable that later.
But the truth is that flax has made different choices than Pytorch, which means we might not support exactly the same arguments for both frameworks, or the same flexibility. But we'll try to.<|||||>Thank you so much for your prompt response, I understand.
I'm on Pro Plan for the Inference API right now and does your answer suggest that there is no difference between Pro plan and one of the organization plans in terms of which processor is being used (i.e. it's always TPU)? Would the inference be any faster if I were to switch to Pay as you go?
https://huggingface.co/pricing
Thank you!<|||||>Hi @feyzaakyurek ,
If you are a paying user then we should probably start a discussion with [email protected].
This model is part of the large-model family, which doesn't get deployed automatically, and we usually start a discussion to understand your needs first. But this is definitely a good option where we can consider deploying with custom parameters.
What I mentioned concerned `bigscience/T0pp` as currently deployed for everyone.<|||||>Thank you, have reached out to them. |
transformers | 14,890 | closed | [doc] post-porting | This PR fixes a `::` leftover
See:
https://huggingface.co/docs/transformers/main_classes/logging#transformers.utils.logging.enable_explicit_format
converted it into a formatted section
@sgugger | 12-23-2021 03:52:52 | 12-23-2021 03:52:52 | |
transformers | 14,889 | closed | [logging] unable to turn off tqdm logging | I'm writing a program in a notebook where I'm printing a results table for multiple models, and I can't figure out how to turn off tqdm while it downloads new models, since its output breaks the table and adds a ton of unnecessary noise to the notebook's outputs.
So here is where the control is:
https://github.com/huggingface/transformers/blob/207594be81b8e5a8589c8b11c3b236924555d806/src/transformers/file_utils.py#L1890-L1898
Why does it turn off only if logging is `logging.NOTSET`? `logging.NOTSET` is a special level. If it's `NOTSET` then the next handler's log level is used, so we end up falling back to the root's default log level which is WARNING. (detailed info: https://docs.python.org/3/library/logging.html#logging.Logger.setLevel)
In other words:
https://github.com/huggingface/transformers/blob/207594be81b8e5a8589c8b11c3b236924555d806/src/transformers/file_utils.py#L1897
will never be true.
So we need to choose a true level at which tqdm is enabled. If it's an info log level, then most likely something like:
`disable=bool(logging.get_verbosity() >= logging.WARNING)`?
Thank you
@LysandreJik, @sgugger
-----------
a quick test for myself to check that tqdm can be turned off once we sort it out (assuming this model is not cached):
```
python -c "import transformers, logging; transformers.logging.set_verbosity_error(); \
transformers.AutoModel.from_pretrained('google/pegasus-pubmed')"
```
shouldn't log anything, but try `set_verbosity_info` first to ensure that the model isn't cached yet and kill it before it finishes.
| 12-23-2021 02:25:13 | 12-23-2021 02:25:13 | Hey @stas00, thanks for raising an issue! I think the `WARNING` level is probably too low, I'd expect to see some tqdm bars at this level. I think even in `ERROR` we would expect some tqdm bars?
I believe that the `datasets` team uses a specific logging command to turn tqdm bars off:
https://github.com/huggingface/datasets/blob/1aa09c9f5d7886ca1d3824ce5f9f3b82356b7fd2/src/datasets/utils/tqdm_utils.py#L71-L80
What do you think of having something similar? For many things, having progress bars disabled could mean waiting for a very very long time without any info about what's happening, so I feel like it's a bit orthogonal to the log level.<|||||>> For many things, having progress bars disabled could mean waiting for a very very long time without any info about what's happening, so I feel like it's a bit orthogonal to the log level.
a log is a log is a log is a log - a user should be able to control logging regardless of its function, IMHO, of course.
for example, while in the console it's hard to know what's going on, in a notebook one can see exactly where the processing is, so it's much less of an issue.
> I believe that the datasets team uses a specific logging command to turn tqdm bars off:
It's totally fine with me if we do the same for `transformers`. Are you OK with re-using the same API as you linked to?
We can then document it in the logging doc.
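Something along these lines, mirroring the `datasets` helpers (the function names are just a proposal at this point, not a final API):

```python
# Sketch of a possible addition to transformers.utils.logging (names not final).
_tqdm_active = True

def enable_progress_bar():
    """Re-enable tqdm progress bars globally (e.g. for model downloads)."""
    global _tqdm_active
    _tqdm_active = True

def disable_progress_bar():
    """Turn off tqdm progress bars globally."""
    global _tqdm_active
    _tqdm_active = False

def is_progress_bar_enabled() -> bool:
    return _tqdm_active
```

`file_utils.http_get` would then create its bar with `disable=not is_progress_bar_enabled()` instead of the current `NOTSET` check.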
Actually, I have just thought of using a stream catcher:
```python
import transformers
from transformers.testing_utils import CaptureStd

with CaptureStd():
    transformers.AutoModel.from_pretrained('google/pegasus-pubmed')
```
but no, tqdm doesn't use normal std streams and thus can't be captured.<|||||>Yes, perfectly OK with re-using the same API!<|||||>Hello @stas00, have you started working on this? If not, I was wondering if I could attempt a PR!<|||||>yes, please, @jaketae - thank you! |
transformers | 14,888 | closed | Convert rst files | # What does this PR do?
This PR converts all remaining rst files to mdx and adapts the templates as well as the add_new_model command. The script for checking the table in the serialization page also needs a slight update. | 12-22-2021 20:27:27 | 12-22-2021 20:27:27 | Merging and lookin if the docs are alright :-) <|||||>Fantastic, thanks for taking care of the rest @sgugger! |
transformers | 14,887 | closed | Properly indent return block | # What does this PR do?
This fixes the indentation for some return blocks. A check in the `doc-builder` will soon error for bad blocks like this. | 12-22-2021 17:26:03 | 12-22-2021 17:26:03 | |
transformers | 14,886 | closed | Running MLM pretraining with not "line_by_line" big dataset | # 🚀 Feature request
BERT pretraining on a big dataset without the `line_by_line` option.
## Motivation
I'm trying to pretrain a BERT model with the classic masking task (without NSP), using the examples/pytorch/language-modeling/run_mlm.py script. As with the original BERT, I would not like to construct each example of the batch from only one sentence, but would rather take a portion of text as long as max_sequence_length. This behavior is provided when the "line_by_line" parameter is set to false, via the "group_texts" function.
The problem is that I have a big dataset and I have to use "on-the-fly" tokenization with set_transform (as suggested in #10204). Given that, I can't map my dataset with group_texts anymore.
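For reference, this is roughly my current setup (the dataset and model names here are just placeholders, not my actual data):

```python
from datasets import load_dataset
from transformers import AutoTokenizer

raw_datasets = load_dataset("wikitext", "wikitext-2-raw-v1")
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

# On-the-fly tokenization as suggested in #10204: nothing is cached to disk.
def tokenize_function(examples):
    return tokenizer(examples["text"], return_special_tokens_mask=True)

raw_datasets["train"].set_transform(tokenize_function)

# The group_texts step of run_mlm.py (concatenate everything and split it into
# max_seq_length blocks) would normally be applied with .map(batched=True),
# which tokenizes and caches the whole corpus up front — exactly what I'm
# trying to avoid here.
```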
Would it be possible to support this?
@sgugger
| 12-22-2021 16:41:55 | 12-22-2021 16:41:55 | Hi there!
As mentioned in the README, examples are just examples. We can't show every single usecase everyone will want, so it's up to you to adapt them to your needs :-). If you need help, you can reach out on the [forums](https://discuss.huggingface.co/) or our Discord.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 14,885 | closed | Fix installation instructions for BART ONNX example | # What does this PR do?
Fixes a missing step in the installation instructions of the BART ONNX example for summarization.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 12-22-2021 16:29:43 | 12-22-2021 16:29:43 | |
transformers | 14,884 | closed | TrOCR processor cannot be loaded from AutoProcessor | The following works on current `master`:
```py
>>> from transformers import TrOCRProcessor
>>> processor = TrOCRProcessor.from_pretrained("microsoft/trocr-base-printed")
Downloading: 100%|██████████| 878k/878k [00:00<00:00, 1.61MB/s]
Downloading: 100%|██████████| 446k/446k [00:00<00:00, 1.03MB/s]
Downloading: 100%|██████████| 772/772 [00:00<00:00, 1.18MB/s]
Downloading: 100%|██████████| 1.28k/1.28k [00:00<00:00, 1.70MB/s]
```
The following does not:
```
>>> from transformers import AutoProcessor
>>> processor = AutoProcessor.from_pretrained("microsoft/trocr-base-printed")
Downloading: 100%|██████████| 228/228 [00:00<00:00, 404kB/s]
Downloading: 100%|██████████| 4.03k/4.03k [00:00<00:00, 6.88MB/s]
Traceback (most recent call last):
File "<input>", line 1, in <module>
File "/home/lysandre/Workspaces/Python/transformers/src/transformers/models/auto/processing_auto.py", line 171, in from_pretrained
return PROCESSOR_MAPPING[type(config)].from_pretrained(pretrained_model_name_or_path, **kwargs)
File "/home/lysandre/Workspaces/Python/transformers/src/transformers/models/auto/auto_factory.py", line 559, in __getitem__
raise KeyError(key)
KeyError: <class 'transformers.models.vision_encoder_decoder.configuration_vision_encoder_decoder.VisionEncoderDecoderConfig'
```
cc @NielsRogge | 12-22-2021 14:40:51 | 12-22-2021 14:40:51 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Currently looking into this<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Reopening this issue since I'm getting a new error
```python
>>> from transformers import TrOCRProcessor
>>> processor = TrOCRProcessor.from_pretrained("microsoft/trocr-small-stage1")
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
[<ipython-input-14-3c6b951b36db>](https://localhost:8080/#) in <module>()
1 from transformers import TrOCRProcessor
2
----> 3 processor = TrOCRProcessor.from_pretrained("microsoft/trocr-small-stage1")
4 train_dataset = IAMDataset(root_dir=data_path,
5 df=train_df,
6 frames
[/usr/local/lib/python3.7/dist-packages/transformers/processing_utils.py](https://localhost:8080/#) in from_pretrained(cls, pretrained_model_name_or_path, **kwargs)
184 [`~tokenization_utils_base.PreTrainedTokenizer.from_pretrained`].
185 """
--> 186 args = cls._get_arguments_from_pretrained(pretrained_model_name_or_path, **kwargs)
187 return cls(*args)
188
[/usr/local/lib/python3.7/dist-packages/transformers/processing_utils.py](https://localhost:8080/#) in _get_arguments_from_pretrained(cls, pretrained_model_name_or_path, **kwargs)
228 attribute_class = getattr(transformers_module, class_name)
229
--> 230 args.append(attribute_class.from_pretrained(pretrained_model_name_or_path, **kwargs))
231 return args
232
[/usr/local/lib/python3.7/dist-packages/transformers/models/auto/tokenization_auto.py](https://localhost:8080/#) in from_pretrained(cls, pretrained_model_name_or_path, *inputs, **kwargs)
526 f"Tokenizer class {tokenizer_class_candidate} does not exist or is not currently imported."
527 )
--> 528 return tokenizer_class.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
529
530 # Otherwise we have to be creative.
[/usr/local/lib/python3.7/dist-packages/transformers/tokenization_utils_base.py](https://localhost:8080/#) in from_pretrained(cls, pretrained_model_name_or_path, *init_inputs, **kwargs)
1785 use_auth_token=use_auth_token,
1786 cache_dir=cache_dir,
-> 1787 **kwargs,
1788 )
1789
[/usr/local/lib/python3.7/dist-packages/transformers/tokenization_utils_base.py](https://localhost:8080/#) in _from_pretrained(cls, resolved_vocab_files, pretrained_model_name_or_path, init_configuration, use_auth_token, cache_dir, *init_inputs, **kwargs)
1913 # Instantiate tokenizer.
1914 try:
-> 1915 tokenizer = cls(*init_inputs, **init_kwargs)
1916 except OSError:
1917 raise OSError(
[/usr/local/lib/python3.7/dist-packages/transformers/models/xlm_roberta/tokenization_xlm_roberta_fast.py](https://localhost:8080/#) in __init__(self, vocab_file, tokenizer_file, bos_token, eos_token, sep_token, cls_token, unk_token, pad_token, mask_token, **kwargs)
147 pad_token=pad_token,
148 mask_token=mask_token,
--> 149 **kwargs,
150 )
151
[/usr/local/lib/python3.7/dist-packages/transformers/tokenization_utils_fast.py](https://localhost:8080/#) in __init__(self, *args, **kwargs)
117 else:
118 raise ValueError(
--> 119 "Couldn't instantiate the backend tokenizer from one of: \n"
120 "(1) a `tokenizers` library serialization file, \n"
121 "(2) a slow tokenizer instance to convert or \n"
ValueError: Couldn't instantiate the backend tokenizer from one of:
(1) a `tokenizers` library serialization file,
(2) a slow tokenizer instance to convert or
(3) an equivalent slow tokenizer class to instantiate and convert.
You need to have sentencepiece installed to convert a slow tokenizer to a fast one.
```
edit: Already fixed in this issue https://github.com/huggingface/transformers/issues/9750#issuecomment-766862107 , my bad.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>I verified with v4.19, and there is no issue. @NouamaneTazi Could you try the new version?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>It's working now. Thanks for your help! 🤗<|||||>loaded_model = VisionEncoderDecoderModel.from_pretrained('/content/drive/MyDrive/ocr_pth/checkpoint-5000')
processor = TrOCRProcessor.from_pretrained("/content/drive/MyDrive/ocr_pth/checkpoint-5000")
KeyError Traceback (most recent call last)
[<ipython-input-12-a5659b723d72>](https://localhost:8080/#) in <module>
1 # loaded_preprocessor = TrOCRProcessor.from_pretrained('/content/drive/MyDrive/ocr_pth/checkpoint-5000')
2 loaded_model = VisionEncoderDecoderModel.from_pretrained('/content/drive/MyDrive/ocr_pth/checkpoint-5000')
----> 3 processor = TrOCRProcessor.from_pretrained("/content/drive/MyDrive/ocr_pth/checkpoint-5000")
3 frames
[/usr/local/lib/python3.7/dist-packages/transformers/models/auto/auto_factory.py](https://localhost:8080/#) in __getitem__(self, key)
570 model_name = self._model_mapping[mtype]
571 return self._load_attr_from_module(mtype, model_name)
--> 572 raise KeyError(key)
573
574 def _load_attr_from_module(self, model_type, attr):
KeyError: <class 'transformers.models.vision_encoder_decoder.configuration_vision_encoder_decoder.VisionEncoderDecoderConfig'>
kindly guide me how resolve this problem.Thanks<|||||>Hi @zainali60 What's your `transformers` version? Could you try the latest one? If the same error still occurs, could you open a new issue with a small but self-contained code snippet to reproduce the issue? Thank you!<|||||>Thanks for reply @ydshieh i resolve my issue and get results
if you have implementation scratch ocr transformers model kindly share me.thanks<|||||>Hi, I'm getting same error.. I tried v4.19, v4.21.3, v4.22.1 which is the last version so far.
```
# LOAD PRE_PROCESSOR
from transformers import AdamW, TrOCRProcessor, VisionEncoderDecoderModel, get_scheduler
def load_processor() -> TrOCRProcessor:
return TrOCRProcessor.from_pretrained('gagan3012/TrOCR-Ar-Small')
load_processor()
```<|||||>@HebaGamalElDin I can reproduce the issue with the checkpoint `gagan3012/TrOCR-Ar-Small`, but not `microsoft/trocr-base-printed`. By looking the files in these 2 model repositories, I believe it is because `gagan3012/TrOCR-Ar-Small` doesn't contain the tokenizer files.
I think you can get the tokenizer from [microsoft/trocr-small-stage1](https://huggingface.co/microsoft/trocr-small-stage1). But it would be nice if you can leave a message or even open a PR in [gagan3012/TrOCR-Ar-Small](https://huggingface.co/gagan3012/TrOCR-Ar-Small) to upload the tokenizer files :-)<|||||>Hi @ydshieh and @NielsRogge Can we find the accuracy of the OCR model along with the character error rate CER? and how to do it. Thank you |
transformers | 14,883 | closed | Fix pytorch image classification example | Updates the `examples/pytorch/image-classification/run_image_classification.py` script to use the new Image feature to pass the failing [CI test](https://app.circleci.com/pipelines/github/huggingface/transformers/31642/workflows/b1aa40eb-2edc-4b54-963b-f11058072a7e/jobs/327951/artifacts) (I've also updated the `hf-internal-testing/cats_vs_dogs_sample` dataset on the Hub for that). | 12-22-2021 13:31:23 | 12-22-2021 13:31:23 | |
transformers | 14,882 | closed | core dumps run_onnx_exporter.py in gpu. | Hi,
when I use the script at https://github.com/huggingface/transformers/tree/master/examples/onnx/pytorch/summarization/run_onnx_exporter.py
and change the code to
ort_sess = onnxruntime.InferenceSession(new_onnx_file_path, providers=['CUDAExecutionProvider'])
I get a core dump.


And the script is OK on CPU.
My goal is to get a GPU speedup for the BART model.
environment:
Collecting environment information...
PyTorch version: 1.10.1+cu113
Is debug build: False
CUDA used to build PyTorch: 11.3
ROCM used to build PyTorch: N/A
OS: Ubuntu 18.04.5 LTS (x86_64)
GCC version: (GCC) 8.2.0
Clang version: 3.8.0 (tags/RELEASE_380/final)
CMake version: version 3.16.0
Libc version: glibc-2.26
Python version: 3.7.12 (default, Sep 10 2021, 00:21:48) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-4.15.0-163-generic-x86_64-with-Ubuntu-18.04-bionic
Is CUDA available: True
CUDA runtime version: 11.2.142
GPU models and configuration:
GPU 0: Tesla K80
GPU 1: Tesla K80
GPU 2: Tesla K80
GPU 3: Tesla K80
Nvidia driver version: 470.82.01
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.2.4
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.2.4
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.2.4
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.2.4
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.2.4
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.2.4
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.2.4
HIP runtime version: N/A
MIOpen runtime version: N/A
Versions of relevant libraries:
[pip3] numpy==1.21.5
[pip3] torch==1.10.1+cu113
@lewtun @NielsRogge
| 12-22-2021 11:39:35 | 12-22-2021 11:39:35 | Hi @lonelydancer thank you for raising this issue!
If I understand correctly, you changed [this line](https://github.com/huggingface/transformers/blob/13504dcbea231d2cae701d1ffdeb0810d62aff81/examples/onnx/pytorch/summarization/run_onnx_exporter.py#L147) in the `run_onnx_exporter.py` script as follows:
```python
ort_sess = onnxruntime.InferenceSession(new_onnx_file_path, providers=['CUDAExecutionProvider'])
```
I ran this but was not able to reproduce your error - could it be a problem with your CUDA version? (I was only able to test this on CUDA Version: 10.2)
One gotcha that I've experienced before is having both `onnxruntime` _and_ `onnxruntime-gpu` installed. Is it possible you still have the former installed in your environment? You could try running
```
pip uninstall onnxruntime
```
and then re-running the export script.<|||||>@lewtun
hello, i change the cuda version to 10.2, and i'm using onnxruntime-gpu. The scripts is ok.
but i calculate the time of the generate function( 0.10198545455932617s) and the ort_sess.run(0.12904071807861328s)
is it normal? I expect the onnx version will faster.
<|||||>Good to hear that it works on CUDA Version 10.2 @lonelydancer!
Regarding the runtime performance between the non-optimised and optimised models on GPU, I too would have expected the optimised version to run faster. One possibility is that the implementation of beam search in the `generate()` of `transformers` is somehow more efficient.
Perhaps @fatcat-z can comment here on whether he ever tested his BART implementation on a GPU and saw a performance gain?<|||||>@lewtun
i think i still have problem.
2021-12-23 08:53:56.216073982 [W:onnxruntime:Default, onnxruntime_pybind_state.cc:535 CreateExecutionProviderInstance] Failed to create CUDAExecutionProvider. Please reference https://onnxruntime.ai/docs/reference/execution-providers/CUDA-ExecutionProvider.html#requirements to ensure all dependencies are met.<|||||>Could you please provide a minimal reproducible example @lonelydancer so I can test it myself?
Ideally, it would be good to have:
* The modified `run_onnx_exporter.py` script
* The CLI command used to export the model
* A small script where you run the model with ONNX Runtime<|||||>Hi, @lewtun
https://github.com/lonelydancer/algorithm/blob/master/run_onnx_exporter.py
transformers/examples/onnx/pytorch/summarization
python run_onnx_exporter.py --model_name_or_path facebook/bart-base --device=cuda

transformers 4.16.0.dev0
onnxruntime-gpu 1.10.0
onnx 1.10.2
torch 1.10.0+cu102<|||||>Hi @lewtun, I wanted to jump into the discussion here as I'm experiencing similar issues with an MBart model. CPU inference time is comparable for Pytorch and ONNXRuntime, with Pytorch being slightly faster. Additionally, I experienced similar core dumps when I tried to run the CPU-generated ONNX model on GPU (GTX3060 CUDA 11.4) and generating directly on GPU requires more than 12GB memory which is more than my gtx3060 has.
I saw you made it work on CUDA 10.2 and I wanted to ask if you or @lonelydancer managed to run this implementation with the TensorRT backend to still potentially gain that edge over the Pytorch GPU implementation? Attempting TensorRT backend on the CPU-generated model (with `onnx.shape_inference.infer_shapes_path` ran on it) currently results in this error for me:
`[E:onnxruntime:, inference_session.cc:1448 operator()] Exception during initialization: /onnxruntime_src/onnxruntime/core/providers/tensorrt/tensorrt_execution_provider.cc:925 SubGraphCollection_t onnxruntime::TensorrtExecutionProvider::GetSupportedList(SubGraphCollection_t, int, int, const onnxruntime::GraphViewer&, bool*) const [ONNXRuntimeError] : 1 : FAIL : TensorRT input: 3419 has no shape specified. Please run shape inference on the onnx model first. Details can be found in https://www.onnxruntime.ai/docs/reference/execution-providers/TensorRT-ExecutionProvider.html#shape-inference-for-tensorrt-subgraphs`
Is this something to be expected due to the beam search implementation requiring empty output nodes (?) or could this be related to the CUDA version?
Thanks!<|||||>@JeroendenBoef
i try the config of cuda10.2/onnxruntime-gpu 1.6 and cuda11.4/onnxruntime-gpu 10.0 in https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html; it doesn't work on gpu. i update my driver, it still not work.<|||||>@lewtun @LysandreJik @fatcat-z is there any solution? i get stuck by the problem. <|||||>Sorry for the slow reply @lonelydancer - I’m on leave for a few more days and will take a look at this issue next week.<|||||>@lewtun do you have time to see this issue? i really need your help.<|||||>Hi @lonelydancer, unfortunately I'm still not able to reproduce your error. Using your script and export command, the following test works for me:
```python
import numpy as np
from transformers import AutoTokenizer, AutoConfig
from onnxruntime import InferenceSession, SessionOptions
num_beams = 2
max_length = 5
model_ckpt = "facebook/bart-base"
onnx_file_path = "BART.onnx"
ARTICLE_TO_SUMMARIZE = "My friends are cool but they eat too many carbs."
def main():
# Prepare inputs
tokenizer = AutoTokenizer.from_pretrained(model_ckpt)
config = AutoConfig.from_pretrained(model_ckpt)
inputs = tokenizer([ARTICLE_TO_SUMMARIZE], max_length=1024, return_tensors="pt")
# Create ORT session
options = SessionOptions()
ort_sess = InferenceSession(onnx_file_path, options, providers=["CUDAExecutionProvider"])
# Run inference
ort_out = ort_sess.run(
None,
{
"input_ids": inputs["input_ids"].cpu().numpy(),
"attention_mask": inputs["attention_mask"].cpu().numpy(),
"num_beams": np.array(num_beams),
"max_length": np.array(max_length),
"decoder_start_token_id": np.array(config.decoder_start_token_id),
},
)
print(f"ORT outputs: {ort_out}")
print("Success!")
if __name__ == "__main__":
main()
```
For reference, this is the output of `transformers-cli env`:
```
- `transformers` version: 4.15.0.dev0
- Platform: Linux-5.0.0-1020-gcp-x86_64-with-debian-buster-sid
- Python version: 3.7.11
- PyTorch version (GPU?): 1.10.1+cu102 (True)
```
I'm also running the following versions of the ONNX libraries:
```
onnx==1.10.2
onnxruntime-gpu==1.10.0
onnxruntime-tools==1.7.0
```
Finally, here's the output from `nvidia-smi`
```
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 430.26 Driver Version: 430.26 CUDA Version: 10.2 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 Tesla V100-SXM2... Off | 00000000:00:04.0 Off | 0 |
| N/A 35C P0 37W / 300W | 0MiB / 16160MiB | 0% Default |
```
My suggestion would be to share a minimal reproducible example of the code you're using to run inference with ONNX Runtime. Without that, it is very hard to understand what is causing the issue.<|||||>@lewtun thank you very much.
may i know the time of the pytorch scripts and onnx?
is there "Failed to create CUDAExecutionProvider" in your log?
<|||||>Hi @lonelydancer here's the average latencies I get on CPU vs GPU with the example inputs I shared above:
```
CPU
Average latency (ms) - 122.95 +\- 6.40
GPU
Average latency (ms) - 153.27 +\- 1.55
```
So indeed, one gets slightly faster inference on CPU vs GPU. To get these numbers, I first exported the models as follows:
```bash
# Export CPU model
python run_onnx_exporter.py --model_name_or_path facebook/bart-base --device=cpu --output_file_path=bart_cpu.onnx
# Export GPU model
python run_onnx_exporter.py --model_name_or_path facebook/bart-base --device=cuda --output_file_path=bart_gpu.onnx
```
And then ran them through the following script:
```python
from time import perf_counter
import numpy as np
from tqdm.auto import tqdm
from onnxruntime import InferenceSession, SessionOptions
from transformers import AutoConfig, AutoTokenizer
num_beams = 2
max_length = 5
model_ckpt = "facebook/bart-base"
onnx_file_path = "bart_cpu.onnx" # Change to `bart_gpu.onnx` for CUDA inference
ARTICLE_TO_SUMMARIZE = "My friends are cool but they eat too many carbs."
def main():
# Prepare inputs
tokenizer = AutoTokenizer.from_pretrained(model_ckpt)
config = AutoConfig.from_pretrained(model_ckpt)
inputs = tokenizer([ARTICLE_TO_SUMMARIZE], max_length=1024, return_tensors="pt")
# Create ORT session
options = SessionOptions()
ort_sess = InferenceSession(onnx_file_path, options, providers=["CPUExecutionProvider"]) # Change to `CUDAExecutionProvider` for CUDA inference
# Run inference
latencies = []
for _ in tqdm(range(100)):
start_time = perf_counter()
ort_out = ort_sess.run(
None,
{
"input_ids": inputs["input_ids"].cpu().numpy(),
"attention_mask": inputs["attention_mask"].cpu().numpy(),
"num_beams": np.array(num_beams),
"max_length": np.array(max_length),
"decoder_start_token_id": np.array(config.decoder_start_token_id),
},
)
latency = perf_counter() - start_time
latencies.append(latency)
# Compute run statistics
time_avg_ms = 1000 * np.mean(latencies)
time_std_ms = 1000 * np.std(latencies)
print(f"Average latency (ms) - {time_avg_ms:.2f} +\- {time_std_ms:.2f}")
print(f"ORT outputs: {ort_out}")
print("Success!")
if __name__ == "__main__":
main()
```
I do not see any `Failed to create CUDAExecutionProvider` errors in my logs, which suggests there is an environment issue with your ONNX Runtime installation not finding CUDA.
If you wish to run fast summarisation with `transformers` and ONNX Runtime, I suggest opening a feature request in our `optimum` [library](https://github.com/huggingface/optimum). The `transformers` library is only concerned with _exporting_ models to ONNX, while `optimum` is responsible for _optimising_ these exports. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 14,881 | closed | [AutoProcessor] Correct AutoProcessor and automatically add processor… | … class.
This PR makes sure that all preprocessors, when saved automatically save the corresponding class to `preprocessing_config.json` so that they can always be loaded from `AutoProcessor` afterward. E.g. the following should always be possible:
```python
auto_processor.save_pretrained("./")
processor = AutoProcessor.from_pretrained("./")
``` | 12-22-2021 11:38:26 | 12-22-2021 11:38:26 | Error is unrelated<|||||>> Before being able to merge this PR, you will need to rebase and fix the docstrings. If we go with it that is.
>
> I have a general issue with feature extractors and processors sharing the same config file. It implies every processor will always have a feature extractor but different modalities in the future might make us do processors with different components, none of them being a feature extractor.
>
> Then why is the processor class saved in the feature extractor config and not the tokenizer config? This is all very weird and I think we should just have a separate processor config, which would avoid passing the processor_class to a `save_pretrained` call. This feels a bit too much like a hack.
I understand your view and I also don't think it's very clean, but I think it's better than the alternative because:
- 1.) One [can already load the preprocessor from the feature extractor](https://github.com/huggingface/transformers/blob/13504dcbea231d2cae701d1ffdeb0810d62aff81/src/transformers/models/auto/processing_auto.py#L156) which already made me adapt a lot of feature extractor configs: https://huggingface.co/microsoft/wavlm-large/blob/main/preprocessor_config.json . If we can load it from the feature extractor, then why not save it? Reverting this behavior is already quite a breaking change by now.
- 2.) I don't think a `processor.json` config will ever have any real configuration arguments. IMO it'll always stay a wrapper for tokenizers, feature extractors, language models, ... => So IMO it's really just the `"processor_class"` argument that we need to save somewhere
- 3.) We also save `tokenizer_class` in the model's config (I guess that is a bit different because `config.json` is more of an "overall" configuration). I would also be fine with having a `processor_class` in `tokenizer_config.json` as well BTW
- 4.) The naming of `FeatureExtractor` is maybe general enough to use it as a base class for all future processors of whatever modality? Thinking of video, text-to-speech, ...
- 5.) The problem I'm having with creating a config class for a processor is the following:
  - Do we add the whole config of the feature extractor and the tokenizer_config.json to the processor config as well? Then we will have duplicated configs in model repos (we can't really remove existing configs from repos without a huge breaking change). It'll be a mess to have duplicated configs IMO. If we don't add the tokenizer and feature extractor config then we can't load a processor just from its config, and the config will IMO only ever have `processor_class` as an argument (see the sketch below).
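
To make the duplication point concrete, this is roughly what a processor save looks like with the current approach (the checkpoint is just an example and the file contents are abbreviated):

```python
from transformers import Wav2Vec2Processor

processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
processor.save_pretrained("./dump")
# Roughly what ends up on disk:
#   ./dump/preprocessor_config.json  -> feature extractor settings + "processor_class": "Wav2Vec2Processor"
#   ./dump/tokenizer_config.json     -> tokenizer settings
#   ./dump/vocab.json, special_tokens_map.json, ...
# A separate processor_config.json would either duplicate the two configs above
# or contain nothing but "processor_class".
```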
What do you think @sgugger ? Also curious to hear other thoughts on this. <|||||>I think the role of processors isn't well defined enough to take a decision, and I think we don't necessarily align on their purpose.
If a `Processor` is an umbrella over tokenizers and feature extractors, in that the following are possible:
```
Processor
| Tokenizer
| Feature extractor
Processor
| Feature extractor
Processor
| Tokenizer
```
Then I think that there should either be a `processor_config.json` which saves the class of the `Processor`, or that the processor class should be saved in all its dependants (tokenizer and/or feature extractor). The latter being not adding a new file (yay!) but it isn't necessarily the cleanest.
If that's not the case, and `Processor` is really an umbrella just over the feature extractor, so that only the following is possible:
```
Processor
| Feature extractor
| Tokenizer
Processor
| Feature extractor
```
Then I understand why you'd want to only save it in the feature extractor configuration. But if that's the case, I wonder why there's a need to have a `Processor` at all, and why the feature extractor couldn't just contain a `tokenizer` alongside everything else that it does. If processor == feature extractor but serves as an additional abstraction so that the API is cleaner, then I also understand where you're coming from @patrickvonplaten.
<|||||>> 1.) One can already load the preprocessor from the feature extractor
I don't think that's always possible. It works for Wav2Vec2-style models where the tokenizer is only used to decode the outputs, but for some model, the feature extractor **and** the tokenizer are both needed. So in that case, loading from just the feature extractor will fail.
> 2.) I don't think a processor.json config will every have any real configuration arguments. IMO it'll always stay a wrapper for tokenizers, feature extractors, language models, ... => So IMO it's really just the "processor_class" argument that we need to save somewhere
I agree with that comment, and it also does not make sense to put everything in `processor.json` since we want to be able to load the feature extractor/tokenizer separately. In this case the `processor_class` should be saved in the tokenizer config, feature extractor config and model config if accessible, just to make sure the info is widely available.
But this only works for as long as we have processors that are just grouping together some modality-specific processing components and do not need any config of their own. Maybe in the future we will have some processors that need a config? Not sure. We can certainly re-open this debate only when the case arises and try to avoid a processor config for now.
> 4.) The naming of FeatureExtractor is maybe general enough to use it as a base class for all future processors of whatever modality? Thinking of Video, TextToSpeech, ....
I think we said we would keep feature extractor modality-specific and that processors are specifically multi-modal, so we will always need processors.
So to summarize: okay for now with this PR but we also save the `processor_class` in the tokenizer config (and look for it there if we don't find a feature extractor). Note that the `save_pretrained` method of all the processors could also be abstracted in a base class if we keep the list of "processing_blocks" names (for now almost always tokenizer and feature_extractor) in a class attribute.
Last bit: to remove the hack of passing a processor_class to the `save_pretrained` methods, can we just add it as an attribute of the feature extractor/tokenizer config? It doesn't make sense for either `FeatureExtractorMixin.save_pretrained` or `PreTrainedTokenizerBase.save_pretrained` to have this argument.<|||||>Added it to the tokenizer config now as well :-)<|||||>> Looks good! Let me know if you need an extra pair of hands for updating the old configs :)
Think it's fine to leave them as is :-)<|||||>Merging this as it's blocking me and I think @LysandreJik is fine with the solution as well.
@LysandreJik, please let me know if you would like to change something here before the next release. |
transformers | 14,880 | closed | Add (M)Luke model training for Token Classification in the examples | # What does this PR do?
This PR adds the possibility to train the (M)Luke model for a Token Classification task with the accelerate package. It also adds a tiny update that makes it possible to train over multiple dataset configurations, for example concatenating multiple languages of the XTREME PAN-X dataset and training the model on the result.
One can easily test with the command:
```
accelerate launch run_ner_no_trainer.py --model_name_or_path studio-ousia/mluke-base --dataset_name xtreme --dataset_config_name PAN-X.fr,PAN-X.en --output_dir /data/mluke/ --task_name ner --return_entity_level_metrics
```
/cc @sgugger @LysandreJik
 | 12-22-2021 10:50:34 | 12-22-2021 10:50:34 | Hi there, thanks a lot for your PR! It will be great to be able to fully use LUKE and mLUKE for token classification!
Now the issue is that we try to keep each example pretty simple so that users can easily tweak and customize them. Adding this in the `run_ner_no_trainer` example adds a lot of complexity so how about we make this into its own example instead? It could go into a research project of its own.
Same for the data collator. It's very specific to LUKE so it should be added in a module file in the same folder as the example instead of adding this to the main lib directly, I think.<|||||>Sure! I will rethink how to properly reorder the things and will let you know once pushed!<|||||>Awesome, thanks a lot for adding this! Also cc'ing the original authors, @ikuyamada @Ryou0634.
I completely agree with Sylvain here, adding it to the existing `run_ner.py` script would make it confusing for people that want to leverage BERT/RoBERTa/etc. <|||||>@sgugger I moved everything into a dedicated folder that is only focus on Luke.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi, let's finish this PR and merge it!
@jplu are you able to rebase with master?<|||||>Hey @jplu, thanks for your PR!
I'd just move it to `examples/research_projects` to have a specific LUKE example, vs having a `luke` folder in the `examples` folder :)<|||||>Sorry for the late reply. I will update accordingly to what you asked ASAP.<|||||>Ok, done on my side!<|||||>Done! Let me know if something is missing. |
transformers | 14,879 | closed | Fix Perceiver code example | # What does this PR do?
This PR fixes the code example of the multimodal Perceiver model.
Fixes #14870 | 12-22-2021 09:34:10 | 12-22-2021 09:34:10 | |
transformers | 14,878 | closed | Model trains on 1 node 8xA100 but hits OOM in 4 nodes 8xA100 | Hello,
I am attempting to train a model on 32 A100s split into 4 nodes. The queueing system is SLURM. When training on a single node, I can use all 8 GPUs with a batch size of 8:
`python -m torch.distributed.launch --nproc_per_node=8 ./script.py --model_name_or_path gpt2-large --train_file sample.txt --tokenizer_name embeddings --do_train --do_eval --output_dir output/ --evaluation_strategy steps --eval_steps 5000 --num_train_epochs 120 --per_device_train_batch_size 8 --cache_dir .cache2/ --bf16 --gradient_checkpointing True --save_total_limit 2 --learning_rate 1e-05`
But when switching to multi-node, it hits OOM even with a batch size of 1. Here's the script:
```
#SBATCH --time=23:59:00
#SBATCH --gres=gpu:a100:8
#SBATCH --export=NONE
#SBATCH --partition=a100
#SBATCH --nodes=4
export MASTER_PORT=12340
echo "NODELIST="${SLURM_NODELIST}
master_addr=$(scontrol show hostnames "$SLURM_JOB_NODELIST" | head -n 1)
export MASTER_ADDR=$master_addr
export RANK_NODE=$SLURM_NODEID
SLAVES=`scontrol show hostnames $SLURM_JOB_NODELIST | grep -v $MASTER_ADDR`
HOSTLIST="$MASTER_ADDR $SLAVES"
unset SLURM_EXPORT_ENV
source /path/to/py37/bin/activate
RANK=0
for node in $HOSTLIST; do
ssh -q $node;
python -m torch.distributed.launch --nproc_per_node=8 --nnodes=4 --node_rank=$RANK --master_addr=$MASTER_ADDR \
--master_port=$MASTER_PORT ./5.run_clm-post.py --model_name_or_path gpt2-large --train_file sample.txt \
--tokenizer_name embeddings --do_train --do_eval --output_dir output/ --evaluation_strategy steps --eval_steps 5000 \
--num_train_epochs 120 --per_device_train_batch_size 1 --cache_dir .cache2/ --bf16 --gradient_checkpointing True \
--save_total_limit 2 --learning_rate 1e-05 & RANK=$((RANK+1));
done
wait
```
Funnily enough, I get the CUDA OOM error 32 times, instead of 4, which makes me wonder whether it's treating each GPU independently.
Note: I had to switch the backend to `gloo` because got nccl errors.
I will try now with deepspeed + bf16, I've been working on this setup for a couple of days and don't seem to get anywhere.
Any help is appreciated.
Best
| 12-22-2021 05:42:29 | 12-22-2021 05:42:29 | This may be of interest to @stas00 as I believe he has quite a bit of experience with this kind of setups.<|||||>I haven't encountered this specific issue, so first let's try to understand what is going on here.
> Note: I had to switch the backend to gloo because got nccl errors.
and did you test that one node was still working OK with `gloo` after switching to it?
What nccl errors were you getting?
> Funnily enough, I get the CUDA OOM error 32 times, instead of 4
each process gets OOM in your case, why would you expect 4 errors and not 32?
------------
unrelated, have a look at how we launch similar jobs on BigScience:
https://github.com/bigscience-workshop/bigscience/blob/master/train/tr8-104B-wide/tr8-104B.slurm
you can let `srun` handle the ssh'ing for you. But perhaps your SLURM env is slightly different. Ours is JeanZay HPC.
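For reference, the rough shape of that pattern (flags, env vars and script name are placeholders, not a drop-in command):
```bash
# inside the sbatch script; assumes the job was submitted with one task per node
srun bash -c 'python -m torch.distributed.launch \
    --nproc_per_node 8 --nnodes $SLURM_NNODES --node_rank $SLURM_PROCID \
    --master_addr $MASTER_ADDR --master_port $MASTER_PORT \
    your_training_script.py'
```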
it probably doesn't matter, just thought I'd share how I launch these.<|||||>Thanks for the prompt response and the `srun` example, I really appreciate it! I tried with `srun` but it didn't work, so I had to ssh. But the cluster I am using is still under beta version, so I will get in contact with them and try again.
But first. I am now directly logging into the nodes to rule out SLURM errors.
Using `nccl`, the command above runs on a single node. But when running on multiple nodes, it hangs indefinitely. It stops here:
```
FutureWarning,
WARNING:torch.distributed.run:
*****************************************
Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
*****************************************
```
And after some time, I get the following error:
```
File "/path/to/py37/lib/python3.7/site-packages/torch/distributed/elastic/rendezvous/static_tcp_rendezvous.py", line 61, in next_rendezvous
multi_tenant=True,
RuntimeError: connect() timed out. Original timeout was 900000 ms.
```
When I send it through SLURM, I get a RuntimeError as well. This is why I switched to `gloo`...<|||||>Sounds like a firewall issue then. The slave nodes try to connect to the master node and fail to.
You can install `py-spy` and run:
```
py-spy dump --pid PID
```
and it will dump the stack trace of the hanging process PID (on some systems `sudo` is required)
but it's almost certain you have a firewall issue there.
I have the same issues if I try to connect to an external world from a gpu node, e.g. to fetch `transformers` or `datasets` files.
So we basically devised this solution for the latter:
1. download all that is needed and cache by running the code on a node with internet access
2. then run the code on a node after setting:
```
export HF_DATASETS_OFFLINE=1
export TRANSFORMERS_OFFLINE=1
```
you will most certainly want this for your normal work.
but your issue is inter-node firewall.
Can you ssh from one gpu worker node to another? This is most likely where your problem is. If you can ssh then the port elastic is trying to sync on is firewalled. Ask your sysadmins which port is safe to use and set it explicitly in your launcher command line.
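For example (address, port and script name here are placeholders):
```bash
python -m torch.distributed.launch --nproc_per_node=8 --nnodes=4 --node_rank=$RANK \
    --master_addr=10.1.2.3 --master_port=29500 your_training_script.py
```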
> Thanks for the prompt response and the srun example, I really appreciate it! I tried with srun but it didn't work, so I had to ssh. > But the cluster I am using is still under beta version, so I will get in contact with them and try again.
It'll make things much simpler if you could make `srun` to work. That's the whole point of SLURM - those things are designed to remove the complexity.<|||||>Here is a simple test script for you to work with rather than a big application.
```
# skeleton script for .launch
# test.py
import torch.distributed as dist
import argparse
import torch
parser = argparse.ArgumentParser()
parser.add_argument("--local_rank", type=int)
args = parser.parse_args()
torch.cuda.set_device(args.local_rank)
device = torch.device("cuda", args.local_rank)
dist.init_process_group("nccl")
dist.all_reduce(torch.ones(1).to(device), op=dist.ReduceOp.SUM)
dist.barrier()
# to run
python -m torch.distributed.launch --nproc_per_node=2 test.py
```
adjust `--nproc_per_node` to your setup. and if needed bolt `srun` to it. otherwise this is just for a single node as is.
it doesn't print anything, we just want it not to fail ;)
Here is the same for the new `torch.distributed.run` API instead, which requires a slightly different setup
```
# skeleton script for .run
import torch.distributed as dist
import torch
import os
local_rank = int(os.environ["LOCAL_RANK"])
torch.cuda.set_device(local_rank)
dist.init_process_group("nccl")
dist.barrier()
dist.get_world_size()
dist.is_available()
dist.get_rank()
# to run
python -m torch.distributed.run --nproc_per_node=2 test.py
```
but `launch` should be just fine.<|||||>Thanks a lot for the great response!
I installed `py-spy` and had a look at the stack trace of the command, but I am not sure if I am gaining much information. Here is the output.
```
Thread 0x1544E69E5740 (active+gil): "MainThread"
next_rendezvous (torch/distributed/elastic/rendezvous/static_tcp_rendezvous.py:61)
_rendezvous (torch/distributed/elastic/agent/server/api.py:538)
wrapper (torch/distributed/elastic/metrics/api.py:125)
_initialize_workers (torch/distributed/elastic/agent/server/api.py:678)
wrapper (torch/distributed/elastic/metrics/api.py:125)
_invoke_run (torch/distributed/elastic/agent/server/api.py:837)
run (torch/distributed/elastic/agent/server/api.py:709)
wrapper (torch/distributed/elastic/metrics/api.py:125)
launch_agent (torch/distributed/launcher/api.py:252)
__call__ (torch/distributed/launcher/api.py:131)
run (torch/distributed/run.py:713)
launch (torch/distributed/launch.py:174)
main (torch/distributed/launch.py:189)
<module> (torch/distributed/launch.py:193)
_run_code (runpy.py:85)
_run_module_as_main (runpy.py:193)
```
It must then be a firewall issue. I already downloaded the dataset and transformer on a node with access and cached everything for the `slurm` nodes. I didn't know I had to export those variables, thank you! I've asked the system administrators about a safe port to use. I'm however able to run deepspeed on several nodes (and benchmarking that at the moment); perhaps because pdsh is able to find a port?
Thanks for the `srun` example, it will come in handy!
<|||||>Yes, `HF_DATASETS_OFFLINE=1 TRANSFORMERS_OFFLINE=1` simply allows you to continue using the online specification for the models and datasets instead of the local paths, as it instructs the software to use the cache and not look anything up, but your way is just as fine.
The `deepspeed` launcher actually doesn't work on JeanZay, that's why I'm using `srun` + pytorch launcher. Perhaps your situation is reversed.
BTW, the `deepspeed` launcher can be used as a drop in replacement for the pytorch launcher - you don't have to use Deepspeed in the rest of your code. And it has quite a few goodies in it. So yes, perhaps `pdsh` solves the issue. I have never used it, other than being aware of it, so can't add further commentary on it.
And additionally the new `elastic` launcher that replaced the old one in pt-1.9 also has a variety of connection methods, you might want to experiment with those as well. https://pytorch.org/docs/stable/distributed.elastic.html<|||||>Thanks for your response.
I'm going to switch to `torchrun` and give it a try at the connection options in your link.
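For instance, something along these lines (backend, endpoint and script name are placeholders):
```bash
torchrun --nnodes=4 --nproc_per_node=8 \
    --rdzv_backend=c10d --rdzv_endpoint=10.1.2.3:29500 your_training_script.py
```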
Since this seems a very particular issue of my own, I'll wait to see what the systems administrators report and write here the solution once I get it running so that maybe it's helpful to someone - then after that close the issue. But of course feel free to close already, since this does not seem to be an issue _per se_.<|||||>But we also want to check that the OOM issue goes away once nccl is restored. Unless you have already validated it to be so by using the `deepspeed` launcher, then yes, it'd be safe to close.
and yes sharing your solution would definitely be helpful to others, @aqred1<|||||>It was indeed a problem of ports not being opened. In the end I'm using deepspeed so I haven't had the chance to encounter the OOM issue again. Thanks for your great help, I'm closing the issue. |
transformers | 14,877 | closed | dataset of Helsinki-NLP/opus-mt-en-zh | Hi, thanks for your model. I have two questions of the train datasets of opus-mt-en-zh.
https://github.com/Helsinki-NLP/Tatoeba-Challenge/blob/master/data/README-v2021-08-07.md
English - Chinese eng-zho 10390 | 43075 | 129323178
Middle English (1100-1500) - Chinese enm-zho
In this website, there are two datasets from en to zh. Which is the dataset of opus-mt-en-zh?
When fine-tuning the model, does it need to add ">>cmn_Hans<< " before train_src? | 12-22-2021 03:50:36 | 12-22-2021 03:50:36 | The README here mentions `eng-zho`, if I'm not mistaken: https://huggingface.co/Helsinki-NLP/opus-mt-en-zh<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 14,876 | closed | Add XGLM models | # What does this PR do?
This PR adds the XGLM model: [code](https://github.com/pytorch/fairseq/tree/main/examples/xglm), [paper](https://arxiv.org/abs/2112.10668) | 12-22-2021 02:13:13 | 12-22-2021 02:13:13 | |
transformers | 14,875 | closed | Add `in_chans` to `DetrModel` | # 🚀 Feature request
<!-- A clear and concise description of the feature proposal.
Please provide a link to the paper and code in case they exist. -->
Add `in_chans` parameter to `DetrConfig` and corresponding models
## Motivation
I find myself wanting to test DETR on data that have only one channel (it can be grey scale images or spectrograms in my case). But to make it work I have to use hacks that may end up working poorly.
<!-- Please outline the motivation for the proposal. Is your feature request
related to a problem? e.g., I'm always frustrated when [...]. If this is related
to another GitHub issue, please link here too. -->
## Your contribution
I will be happy to open PR with this if this is as useful as I see it. Maybe some other improvements that can be incorporated?
UPDATE: I think this will involve also adding parameters to disable `replace_batch_norm` and the very specific part that sets `parameter.requires_grad_(False)` on all parameters in the resnet except layer2-4 (a minimal workaround sketch follows below)
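One such hack, for reference, is simply replicating the single channel before the feature extractor (a sketch; the file name is a placeholder):
```python
from PIL import Image
from transformers import DetrFeatureExtractor, DetrForObjectDetection

extractor = DetrFeatureExtractor.from_pretrained("facebook/detr-resnet-50")
model = DetrForObjectDetection.from_pretrained("facebook/detr-resnet-50")

gray = Image.open("spectrogram.png").convert("L")  # 1-channel input
rgb = gray.convert("RGB")                          # replicate the channel 3x
inputs = extractor(images=rgb, return_tensors="pt")
outputs = model(**inputs)
```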
<!-- Is there any way that you could help, e.g. by submitting a PR?
Make sure to read the CONTRIBUTING.MD readme:
https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md -->
| 12-22-2021 01:28:13 | 12-22-2021 01:28:13 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>I also think it could be useful. I don't want to waste memorys by convert 1 channel image to 3 channels. |
transformers | 14,874 | closed | Fix doc mistakes | # What does this PR do?
Fix last issues to have the doc compile again:
- double return blocks in VisualBert, TFTapas and Perceiver
- some weird special characters in LXMert and an empty return block
Merging right now to unlock the doc building. | 12-21-2021 23:35:40 | 12-21-2021 23:35:40 | |
transformers | 14,873 | closed | Fix `FlaxMarianMTModel` return block. | # What does this PR do?
Completes #14872 and fixes `FlaxMarianMTModel`
cc @patrickvonplaten | 12-21-2021 22:40:38 | 12-21-2021 22:40:38 | Thanks a lot! |
transformers | 14,872 | closed | Fixes in marian doc | # What does this PR do?
Fixes a few issues in the docstring for the Marian model. | 12-21-2021 22:11:43 | 12-21-2021 22:11:43 | |
transformers | 14,871 | closed | Fix FLAX_MULTIPLE_CHOICE_SAMPLE typo | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 12-21-2021 21:44:06 | 12-21-2021 21:44:06 | |
transformers | 14,870 | closed | Error while reproducing example for PerceiverForMultimodalAutoencoding | ## Environment info
- `transformers` version: 4.14.1
- Platform: Linux-5.8.0-63-generic-x86_64-with-debian-bullseye-sid
- Python version: 3.7.6
- PyTorch version (GPU?): 1.9.0+cu111 (True)
- Tensorflow version (GPU?): 2.5.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help
@NielsRogge
Trying to reproduce [example](https://github.com/huggingface/transformers/blob/e51c7b5872785a74a03c011732173757d7c216c4/src/transformers/models/perceiver/modeling_perceiver.py#L1888) for `PerceiverForMultimodalAutoencoding` and getting:
```
>>> from transformers import PerceiverForMultimodalAutoencoding
2021-12-21 23:44:38.796858: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.11.0
>>> import torch
>>> images = torch.randn((1, 16, 3, 224, 224))
>>> audio = torch.randn((1, 30720, 1))
>>> inputs = dict(image=images, audio=audio, label=torch.zeros((images.shape[0], 700)))
>>> model = PerceiverForMultimodalAutoencoding.from_pretrained('deepmind/multimodal-perceiver')
>>> outputs = model(inputs=inputs)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/tolik/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/home/tolik/anaconda3/lib/python3.7/site-packages/transformers/models/perceiver/modeling_perceiver.py", line 1912, in forward
return_dict=return_dict,
File "/home/tolik/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/home/tolik/anaconda3/lib/python3.7/site-packages/transformers/models/perceiver/modeling_perceiver.py", line 909, in forward
inputs, modality_sizes, inputs_without_pos, subsampled_points=subsampled_output_points
File "/home/tolik/anaconda3/lib/python3.7/site-packages/transformers/models/perceiver/modeling_perceiver.py", line 2429, in decoder_query
[embed(modality, decoder_queries[modality]) for modality in sorted(self.modalities.keys())], dim=1
File "/home/tolik/anaconda3/lib/python3.7/site-packages/transformers/models/perceiver/modeling_perceiver.py", line 2429, in <listcomp>
[embed(modality, decoder_queries[modality]) for modality in sorted(self.modalities.keys())], dim=1
File "/home/tolik/anaconda3/lib/python3.7/site-packages/transformers/models/perceiver/modeling_perceiver.py", line 2424, in embed
pos = torch.broadcast_to(pos, [x.shape[0], x.shape[1], self.num_query_channels - x.shape[2]])
RuntimeError: The expanded size of the tensor (833) must match the existing size (831) at non-singleton dimension 2. Target sizes: [1, 704, 833]. Tensor sizes: [1, 831]
```
| 12-21-2021 20:59:11 | 12-21-2021 20:59:11 | |
transformers | 14,869 | closed | Fine-tuning Wav2Vec2: Concern that pretrained weights are being reinitialized | ## Environment info
- OS Type: Google Colab
- 'transformers' version: 4.14.1
- Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.12
- PyTorch version (GPU?): 1.10.0+cu111 (True)
- Tensorflow version (GPU?): 2.7.0 (True)
### Who can help
Anyone with knowledge of fine-tuning Wav2Vec2 but specifically @patrickvonplaten, @anton-l
## Information
Model I am using: wav2vec2-large-960h-lv60
The tasks I am working on is:
* My own task or dataset: fine-tuning wav2vec2 on technical jargon
Hi! Hope this message finds the reader well. I've been attempting to fine-tune Wav2Vec2 on a new domain where highly technical jargon is used in a noisy environment. Out-of-the-box with a language model I'm getting fairly good WER scores but my goal is to help the model be more robust to background noise interference.
The problem arises when using:
* The official example scripts
My code is roughly based off of Patrick von Platen's [Fine-Tune Wav2Vec2 for English ASR with Huggingface Transformers](https://huggingface.co/blog/fine-tune-wav2vec2-english) blog post. It's been slightly modified to handle my own data format but the training pipeline has not been altered.
## Description of Behavior
Our data is still coming in so we don't have a lot to work with (<7 min so far) but when I fine-tune on what little we do have to evaluate initial performance, it appears like the model is starting from scratch (no pre-training) as it no longer predicts words but random letter sequences. I was concerned that maybe my learning rate (or noisy environment) was the cause of the issue so I did a series of small experiments with a clean librispeech subsample of ten audio files (train size = 10, test size = 10), lowering the learning rate for each experiment but all the results were identical to the following:
Average audio file duration: 13.34 seconds
Average WER: 100%
TARGET: IN HIS MORE SOBER MOMENTS HE WAS NOT ALWAYS ABLE TO ASSUME THAT APPEARANCE OF EQUALITY WITH HIS COMPANIONS WHICH IT WAS THE AMBITION OF HIS SOUL TO ACHIEVE BUT A SECOND GLASS OF WHISKY AND WATER
PREDICTION: TOETITSTOTZTITBTHT'TKTITZTHTATKT'TITBTHTBTKETMTZITSTKTITJTNTZITETHTMTITNTYTJTNGTZTITNTATYTKITMTHTITNZTZTLTBKTITMSTNTMITNXTXTKTNT'TNETPTKITHTRTITKTTLTNTYTOTMTGTITJTOMSITSTOTZTITPTHBTXTNTETOTHTETZITJSTOTPSITOMTITJTNTZTITMSTKTITNBTATOTMTOTHETITHTRITSTOTZTITZTHLTYTITMTHTITNTPTSTOTKTCTKITATLTMITNTITZTKTPTHETFTIT TYTNTZTZTITHRTITJSTOTZTUTGTITNETFITJTNTMTK'TIT
I then thought maybe something was being re-intialized that should not be. I traced where the linear head was initialized and where the pre-trained weights were loaded in the Huggingface Transformers repository but saw nothing out of place. I also followed the learning rate value all the way to training then during training to see if the learning rate changes were not being kept and that was why changing the learning rate had no effect. Again, nothing looked incorrect and the new learning rate was stored consistently across the different scripts.
I then wondered what would happen if more data was added since others (for example [this one](https://www.tensorflow.org/hub/tutorials/wav2vec2_saved_model_finetuning)) were saying smaller datasets were experiencing these kind of random predictions. So, I incrementally added more librispeech samples and increased the number of epochs until I reached roughly 15-20 minutes of data and 30 epochs at which point the model started overfitting:
TARGET: BUT STOPPED SHORT STUPEFIED AND FRIGHTENED WITHIN THREE STEPS OF HATTERAS WHO STARTED UP THAT MOMENT AND THROWING OFF HIS DISGUISE KNELT ON ONE KNEE AND AIMED STRAIGHT AT THE BEAR'S HEART
PREDICTION: BUT STOPPED SHORT STUPEFIED AND FRIGHTENED WITHIN THREE STEPS OF HATTERAS WHO STARTED UP AT THAT MOMENT AND THROWING OFF HIS DISGUISE KNELT ON ONE KNEE AND AIMED STRAIGHT AT THE BEAR'S HEART
Based on this it seems that more gradient steps (roughly 30000) plus a lot more data is needed in order to have any good results...but this seems odd to me. I understand that, with so little data, it shouldn't learn much about the new domain yet but shouldn't it also not lose all the pre-trained knowledge within a few gradient steps and with a conservative learning rate?
## Questions:
1. Is it purely my own misunderstanding on how fine-tuning works for Wav2Vec2? My assumption was that the training would start from the pre-trained checkpoint and therefore should already have some knowledge about speech and English from the start of fine-tuning. Are the predictions initially random due to the new LM head established in the Wav2Vec2ForCTC class (in other words is this normal)? If this is abnormal then what can I do to resolve this issue?
2. Why does altering the learning rate not have any impact if this is a normal behavior?
## To reproduce
My Google Colab notebook and the sample data used (audio, CSVs) are included here in a zip file. All you need to do is:
1. Unzip it in your Google drive at the directory /content/drive/MyDrive so you have the correct path /content/drive/MyDrive/demo_data with the audio files and CSVs within it.
2. Run every cell in the Colab Notebook
3. The last cell will show you predictions from the model
This might be an obvious question but I've been struggling with it for a while so I really appreciate any insight or clarification anyone can give me on it! Thanks in advance.
[demo_data-coulter.zip](https://github.com/huggingface/transformers/files/7757475/demo_data-coulter.zip)
| 12-21-2021 19:36:07 | 12-21-2021 19:36:07 | Hey @rcoulter13,
This issue is quite difficult to debug for us since it's not an obvious bug but deals with the model having problems with continual learning. It's quite a common phenomena that fine-tuned models have catastrophic forgetting when they are re-finetuned on out of distribution data<|||||>There are a ton of reasons why the fine-tuning might not work. Some include:
- the target labels (text) includes many out-of-vocabulary tokens. The wav2vec2 Librispeech models are all fine-tuned on very clean data that has all punctuation removed and usually on capitalized letters. You should check that this is also the case for your data
- Maybe freezing the whole base model can help, meaning setting `requires_grad` to False for all layers except the `lm_head` (a minimal sketch follows this list)
- Other hyper-parameters for training
- One should also make sure that the input data is correctly sampled at 16kHz
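A minimal sketch of the freezing idea (checkpoint name taken from the issue above):
```python
from transformers import Wav2Vec2ForCTC

model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-large-960h-lv60")
for name, param in model.named_parameters():
    # keep only the CTC head trainable, freeze everything else
    param.requires_grad = name.startswith("lm_head")
```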
<|||||>Also note that the official blog post is based on TIMIT which is a very small and clean dataset<|||||>Hi @patrickvonplaten,
Thank you for addressing my issue so quickly and for your detailed suggestions! I appreciate your time.
1. My target labels indeed match the original target labels, all punctuation has been removed and capitalization is all uppercase - I had the same thought so I have already double-checked this.
2. I have an assertion statement that checks that all the input data is 16kHz and I've also manually checked the data with SoX
3. I hadn't thought of freezing the entire model except the lm_head but that's an excellent idea - thank you!
So, just to make sure I understand, you're saying the behavior in my model isn't standard? Part of me was wondering if it simply just requires that many epochs for the new head to learn anything and then for it to update the rest of the model. But you're saying we should be seeing much better results much earlier in training with the pre-trained weights then?<|||||>To be honest, I've never continued fine-tuned for a speech model yet, so I really don't know what to expect here. I usually throw away the `lm_head`, keep the base model, randomly initialize the `lm_head` and fine-tune the new model on the new data. But it might make sense to continue fine-tuning in your case here.
Gently pinging some other people here in case they have continued fine-tuning for Wav2Vec2 and might have an idea @anton-l @flozi00 @jonatasgrosman<|||||>Good to know, I'll definitely try your suggestion of freezing the entire model minus the lm_head and see where that takes me. Thanks again for your help and happy holidays!<|||||>Just wanted to followup for anyone else who might run across this. I ended up not having to freeze the whole base model. My problem was within my processor. For my task I only needed to use the vocab Wav2Vec2 was trained on (I'm not adding any new tokens) so I saw the immediate benefit of the pre-trained model on the first epoch of fine-tuning when I made sure the vocabulary and its corresponding numeric value were identical to Wav2Vec2's pre-trained vocab (i.e. ```vocab = {"[PAD]" : 0, "<s>" : 1 ... "Z" : 31}```). One easy way to check if yours matches is to save the pre-trained vocab to a local directory and view the vocab.json file. So if the vocab is identical and the data is clean (quiet environment) one should see the benefit of the whole base model on epoch 1 and improvement from fine-tuning within 10-20 epochs. If one isn't seeing this behavior then there might be a problem with the labels or the training data similar to what I experienced. Of course, it might be something else entirely but I'd check these two scenarios first. Hope this is helpful. |
transformers | 14,868 | closed | Adds IBERT to models exportable with ONNX | Adds IBERT to models exportable with ONNX | 12-21-2021 18:31:35 | 12-21-2021 18:31:35 | Hey @MaximovaIrina we recently switched all of our documentation from RST to MDX, so you can rebase on `master` to eliminate the conflict with `docs/source/serialization.rst`<|||||>Checking slow tests for ibert

<|||||>Hey @MaximovaIrina would you mind rebasing your branch on `master` and resolving the merge conflicts?
Gently pinging @LysandreJik for his blessing on this PR too :)<|||||>Could you just run the code quality tool to ensure that the code quality passes? You can install them with the following, from the root of your clone:
```
pip install -e ".[quality]"
```
And then run them with:
```
make fixup
```<|||||>Merging this since all the CI checks now pass and we have approval from a core maintainer. Thank you for your contribution @MaximovaIrina 🤗 ! |
transformers | 14,867 | closed | Keras metric callback | Hey, here's the unfinished Keras metric PR, co-authored with @merveenoyan!
Still to do:
1) The outputs from predict/generate are not being concatenated properly, this will be quick to fix! (Edit: Done)
2) I haven't tested prediction with generate at all. `generate()` is still very slow, but that's a separate PR :fearful:
3) More broad testing with multiple metric functions. | 12-21-2021 17:18:11 | 12-21-2021 17:18:11 | I feel like it makes sense to test on three notebooks. (could be overkill so you decide)
- Token classification
- Translation
- Masked language modeling
If you approve I’ll test for these ones. I wonder how it would look like with multiple metrics so in one of those I could test with multiple metrics (prolly token classification one)
@Rocketknight1 <|||||>@LysandreJik Absolutely yes, that's getting added! |
transformers | 14,866 | closed | Mass conversion of documentation from rst to Markdown | # What does this PR do?
This PR treats a lot of docstrings and converts them from rst to Markdown. All of the remaining one.
It also includes a new repo consistency check that will detect someone does not add back some rst docstrings behind my back. 😈 | 12-21-2021 17:04:15 | 12-21-2021 17:04:15 | As seen with @LysandreJik, merging and inspecting the result in the doc. Failure is unrelated. |
transformers | 14,865 | closed | Convert model files from rst to mdx | First pass! | 12-21-2021 16:58:22 | 12-21-2021 16:58:22 | |
transformers | 14,864 | closed | Add Flax image captioning example | # What does this PR do?
Add `run_image_captioning_flax.py` (modified from `run_summarization_flax.py`).
## Who can review
Examples: @patil-suraj + cc @patrickvonplaten @NielsRogge @sgugger for info
| 12-21-2021 16:51:12 | 12-21-2021 16:51:12 | @patil-suraj , @sgugger
Thank you for the comments. I will make the example much simpler by only supporting loading pre-trained vision encoder and text models.
About casting a jax array to a np array: there is a significant slowdown (at least in the image examples) when using a jax array as indices for accessing `datasets.Dataset`. I will reproduce the timing comparison.
<|||||>Hi, @patil-suraj @sgugger
I simplified the config/model initialization parts (only support loading pretrained encoder & decoder).
--------------------
For @patil-suraj
About using `numpy` array instead of `jnp.array` when it comes to `datasets`,
For this line
https://github.com/huggingface/transformers/blob/650fb4aa2adfe746f8c3e1aec1d601c4a0e9f40c/examples/flax/image-captioning/run_image_captioning_flax.py#L854
takes 30 seconds (for selecting `16384` elements) using `jax.numpy`, while using `numpy` only takes `0.005` second.
For this line (take 256 elements - with image data)
https://github.com/huggingface/transformers/blob/650fb4aa2adfe746f8c3e1aec1d601c4a0e9f40c/examples/flax/image-captioning/run_image_captioning_flax.py#L348
jax.numpy: 0.45 second / numpy: 0.10 - 0.15 second
A single training step (global batch size: 256 images) takes < 0.5 seconds on TPU.
Due to these significant differences in processing speed, I think it is worth keeping `numpy` when dealing with `datasets`.
Let me know if you have different opinions about this :-)
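A minimal sketch of what I mean (names like `perm`, `i`, `batch_size` and `dataset` are placeholders from the training loop):
```python
import numpy as np

# move device/jax indices to host numpy before touching `datasets.Dataset`
batch_idx = np.asarray(perm[i * batch_size : (i + 1) * batch_size]).tolist()
batch = dataset[batch_idx]  # indexing with a Python list of ints is fast
```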
<|||||>Hi
- support `from_pretrained` (the model creation from encoder/decoder is done in another script)
- README updated
- rename to `coco_dataset_script` to avoid confusion
- other nits applied
Thanks for the reviews :-) |
transformers | 14,863 | closed | Open discussion for design decisions. | # 🚀 Feature request
Hi all,
I'd like to ask some questions about design decisions that were made generally for this library.
**First one:**
Why did you couple the forward directly with the loss calculation? I don't really see any benefits to doing this, only downsides: for example, when you want to extend the `Trainer` class with a specific loss that might require specific inputs, you always need to make sure that the `__call__` method of the model can also calculate the loss before being able to calculate your own loss.
[Roberta Example](https://github.com/huggingface/transformers/blob/19e5ed736611227b004c6f55679ce3536db3c28d/src/transformers/models/roberta/modeling_roberta.py#L1185)
[Tutorial to extend Trainer](https://huggingface.co/docs/transformers/main_classes/trainer)
**Second one:**
Also somewhat related to the first one. Why do you assume different input formats for `multi label classification` and `single label classification`? Why can't you just assume that input labels for classification tasks are always one hot encoded?
[Roberta Example](https://github.com/huggingface/transformers/blob/19e5ed736611227b004c6f55679ce3536db3c28d/src/transformers/models/roberta/modeling_roberta.py#L1236)
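A minimal sketch of the difference between the two label formats (plain PyTorch, independent of the model):
```python
import torch

logits = torch.randn(4, 3)  # batch of 4, 3 classes

# single-label: CrossEntropyLoss expects class indices, not one-hot vectors
single_labels = torch.tensor([0, 2, 1, 2])
loss_single = torch.nn.CrossEntropyLoss()(logits, single_labels)

# multi-label: BCEWithLogitsLoss expects a multi-hot float tensor
multi_labels = torch.tensor([[1., 0., 1.], [0., 1., 0.], [1., 1., 0.], [0., 0., 1.]])
loss_multi = torch.nn.BCEWithLogitsLoss()(logits, multi_labels)
```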
## Motivation
I'm trying to implement a small training framework for a specific use case around this package. However things like this above make it really hard for me to have a clean implementation that is not just a list of if and else statements.
@abhishekkrthakur @sgugger @LysandreJik | 12-21-2021 16:05:51 | 12-21-2021 16:05:51 | Hi @pafi-code, regarding your first question: the loss calculation is coupled with the forward but it is opt-in. The models only perform the loss calculation when you pass the `labels`, which is not a requirement. You're perfectly free to retrieve the logits and compute your loss yourself, outside of the model.
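A minimal sketch of computing a custom loss from the returned logits (the model, labels and loss here are only illustrative; no loss is computed inside the model when `labels` are not passed):
```python
import torch
from transformers import RobertaTokenizer, RobertaForSequenceClassification

tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
model = RobertaForSequenceClassification.from_pretrained("roberta-base", num_labels=3)

inputs = tokenizer("some example text", return_tensors="pt")
logits = model(**inputs).logits            # no `labels` passed -> no loss is computed
one_hot = torch.tensor([[0.0, 1.0, 0.0]])  # your own label format
loss = torch.nn.functional.cross_entropy(logits, one_hot.argmax(dim=-1))
```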
I'll let @sgugger answer regarding the `Trainer`, and @abhishekkrthakur regarding the multi-label classification vs single-label classification.
Excited to hear you're implementing a training framework around this package, don't hesitate to share with us when you're happy with it!<|||||>Hi @LysandreJik thanks for your reply! :+1:
Regarding the first one:
Oh lol, I didn't see line `1221` to be honest. However, by default the trainer always passes the labels to the forward as well, right? So if I would not like that to happen, I would always need to extend the trainer such that the model gets inputs without the labels, right?
Regarding the second one:
You just told me that if I don't pass the labels through the forward, no loss is calculated. So I could simply implement it on my own assuming there are always one hot encoded labels, which should do the trick for me. However I'm still curious why the decision was made that way :smile:
Unfortunately this will most likely not be public. :(<|||||>First of all, questions like this should be better asked on the [forums](https://discuss.huggingface.co/) as we keep issues for bugs and feature requests only.
Regarding your last questions, labels are not one-hot encoded because this is vastly memory inefficient and the GPU memory is limited. Instead of having a tensor of size ` batch_size`, you get a tensor of size `batch_size x num_labels`. It's fine if you have 2 labels, less fine when you have a thousand and a guarantee of OOM when you are training a language model with a vast vocabulary size.
Besides the cross-entropy loss in PyTorch expects labels that are not one-hot encoded, so we couldn't use the proper loss if we did one-hot encode labels.<|||||>Alright, I'll remember for the next time!
Okay that answers my question pretty good. Totally forgot about those aspects.
Thanks! :+1: |
transformers | 14,862 | closed | Cache the files in get_fast_tokenizer_file() | # 🚀 Feature request
`transformers.BertTokenizer.from_pretrained()` calls `get_fast_tokenizer_file()` which downloads a file from the HuggingFace server but never adds it to the cache.
Would be useful to cache that file (in the same folder as the models) to make CI runs more robust.
## Motivation
Try to avoid having issues such as the one below in our CI runs:
```
E requests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: https://huggingface.co/api/models/mrm8488/bert-small-finetuned-squadv2
```
| 12-21-2021 14:10:10 | 12-21-2021 14:10:10 | Hello! This does cache the file, but you're unfortunately hitting an error on our side when checking that the file exists and its sha (cc @julien-c, @n1t0). If this happens, I recommend leveraging `local_files_only` as a parameter to `from_pretrained`, as this will only use local files.
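For example (or equivalently set `TRANSFORMERS_OFFLINE=1` in the environment):
```python
from transformers import AutoTokenizer

# use only the already-cached files, skipping the remote check entirely
tokenizer = AutoTokenizer.from_pretrained(
    "mrm8488/bert-small-finetuned-squadv2", local_files_only=True
)
```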
Sorry for the inconvenience.<|||||>AFAICT it always goes in here: https://github.com/huggingface/transformers/blob/master/src/transformers/tokenization_utils_base.py#L1654
Then
https://github.com/huggingface/transformers/blob/c94c1b89674f2b15b23c8c4ce30f036bf883717f/src/transformers/tokenization_utils_base.py#L3486
It checks this https://github.com/huggingface/transformers/blob/c94c1b89674f2b15b23c8c4ce30f036bf883717f/src/transformers/file_utils.py#L2086
But that's in the current work directory, not the cache. (I think that might be the problem?)
And if it can't find it, it will download the files but not save them, so the following time it will re-download them again.<|||||>Yes, but this is only to get the name of the file to download. Once it has it (no download has occurred), it continues on to this code snippet:
https://github.com/huggingface/transformers/blob/c94c1b89674f2b15b23c8c4ce30f036bf883717f/src/transformers/tokenization_utils_base.py#L1688-L1704
It's using `cached_path`, so it should cache the file correctly!<|||||>I agree the vocab files are cached, however `get_list_of_files()` before that isn't and will always try to connect to the HuggingFace server.
We get several failures a day in our CI because of that specific call (Even though the other files are cached on the builder)<|||||>I think the error comes from the file check rather than the file download - the failure can definitely happen when checking the files on a remote repository, which I understand can make your CI fail.
We're in the process of implementing a retry mechanism on such issues that should partially solve this - thanks for raising the issue, and I hope we can deliver a more robust mechanism very soon.<|||||>Infra team is on it too (i.e on the occasional server side errors)<|||||>i would like have a mode that tries to do local_files_only but if files are missing it can still fetch them<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>This is still happening and still needs to be fixed |
transformers | 14,861 | closed | [Wav2vec2] RuntimeError: CUDA error: an illegal memory access was encountered | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: master
- Platform: ubuntu
- Python version: 3.8
- PyTorch version (GPU?): 1.10
- Tensorflow version (GPU?):
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: yes
### Who can help
@patrickvonplaten @anton-l
## Information
Model I am using (Bert, XLNet ...):
The problem arises when using:
* [x] the official example scripts: (give details below) speech recognition ctc
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: (give the name) commonvoice
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
```
python -m torch.distributed.launch --nproc_per_node=1 run_speech_recognition_ctc.py \
--dataset_name="common_voice" \
--model_name_or_path="facebook/wav2vec2-xls-r-1b" \
--dataset_config_name="de" \
--output_dir="./wav2vec2-xls-r-1b-german" \
--overwrite_output_dir \
--num_train_epochs="15" \
--per_device_train_batch_size="12" \
--gradient_accumulation_steps="1" \
--learning_rate="3e-4" \
--warmup_steps="500" \
--evaluation_strategy="steps" \
--text_column_name="sentence" \
--save_steps="400" \
--layerdrop="0.0" \
--save_total_limit="3" \
--freeze_feature_extractor \
--gradient_checkpointing \
--fp16 --fp16_opt_level "03" \
--group_by_length \
--do_train --do_eval \
--sharded_ddp simple \
--logging_steps=10 \
--eval_steps=25000 \
--max_train_samples=5000 --max_eval_samples=5000 \
```
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
```
Traceback (most recent call last):
File "run_speech_recognition_ctc.py", line 649, in <module>
main()
File "run_speech_recognition_ctc.py", line 600, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/opt/conda/lib/python3.8/site-packages/transformers/trainer.py", line 1325, in train
tr_loss_step = self.training_step(model, inputs)
File "/opt/conda/lib/python3.8/site-packages/transformers/trainer.py", line 1884, in training_step
loss = self.compute_loss(model, inputs)
File "/opt/conda/lib/python3.8/site-packages/transformers/trainer.py", line 1916, in compute_loss
outputs = model(**inputs)
File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/opt/conda/lib/python3.8/site-packages/fairscale/nn/data_parallel/sharded_ddp.py", line 224, in forward
return self.module(*inputs, **kwargs)
File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/opt/conda/lib/python3.8/site-packages/transformers/models/wav2vec2/modeling_wav2vec2.py", line 1618, in forward
outputs = self.wav2vec2(
File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/opt/conda/lib/python3.8/site-packages/transformers/models/wav2vec2/modeling_wav2vec2.py", line 1244, in forward
attention_mask = self._get_feature_vector_attention_mask(
File "/opt/conda/lib/python3.8/site-packages/transformers/models/wav2vec2/modeling_wav2vec2.py", line 1082, in _get_feature_vector_attention_mask
attention_mask[(torch.arange(attention_mask.shape[0], device=attention_mask.device), output_lengths - 1)] = 1
RuntimeError: CUDA error: an illegal memory access was encountered
CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
```
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
| 12-21-2021 07:25:24 | 12-21-2021 07:25:24 | Hey @flozi00,
This looks like a difficult error. Can we try to debug it step-by-step?
1. Does it work **without** ` --sharded_ddp simple` and in a single-GPU environment (without `python -m torch.distributed.launch` - just `python run_speech_recognition_ctc.py`
2. If yes, does it work **without** ` --sharded_ddp simple` and in a multi-GPU environment (with `python -m torch.distributed.launch`)
3. If yes as well then it's `--sharded_ddp`. BTW I've never tested `sharded_ddp` with Wav2Vec2.
What do you need it for exactly?
Also why do you use `--nproc_per_node=1 ` This should be set to the number of GPUs and in case there is just one GPU it's unnecessary to use DDP in general<|||||>Its not working with `python -m torch.distributed.launch` in general on my machine<|||||>Ok, this should definitely work. How many GPUs do you have?
I'll give it a try on two GPUs tonight!
<|||||>Ok just tried the following command on two TITAN RTX 24GB RAM:
```bash
#!/usr/bin/env bash
python -m torch.distributed.launch \
--nproc_per_node=1 run_speech_recognition_ctc.py \
--dataset_name="common_voice" \
--model_name_or_path="facebook/wav2vec2-xls-r-1b" \
--dataset_config_name="ab" \
--output_dir="./wav2vec2-xls-r-1b-german" \
--overwrite_output_dir \
--num_train_epochs="5" \
--per_device_train_batch_size="12" \
--gradient_accumulation_steps="1" \
--learning_rate="3e-4" \
--warmup_steps="500" \
--evaluation_strategy="steps" \
--text_column_name="sentence" \
--save_steps="400" \
--layerdrop="0.0" \
--save_total_limit="3" \
--freeze_feature_extractor \
--gradient_checkpointing \
--fp16 --fp16_opt_level="03" \
--group_by_length \
--do_train --do_eval \
--logging_steps=10 \
--eval_steps=25000 \
--max_train_samples=50 --max_eval_samples=50 \
```
and it works fine.
My env is as follows:
```
- `transformers` version: 4.15.0.dev0 (current master)
- Platform: Linux-5.3.0-64-generic-x86_64-with-glibc2.17
- Python version: 3.8.12
- PyTorch version (GPU?): 1.10.0+cu102 (True)
- Tensorflow version (GPU?): 2.6.0 (False)
- Flax version (CPU?/GPU?/TPU?): 0.3.4 (cpu)
- Jax version: 0.2.19
- JaxLib version: 0.1.70
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
```
and
```
- `datasets` version: 1.16.2.dev0 (current master)
- Platform: Linux-5.3.0-64-generic-x86_64-with-glibc2.17
- Python version: 3.8.12
- PyArrow version: 6.0.1
```<|||||>Note that I use the `ab` config as it's a small dataset and easy to test. Besides that I've only removed the `--sharded_ddp` option. Can you verify whether the above script works for you? <|||||>With larger dataset and many steps it even happens with the single node setup.
I think I need to reset my machine, maybe there is something wrong with cuda<|||||>@flozi00 you can also try running the script as `CUDA_LAUNCH_BLOCKING=1 python -m torch.distributed.launch ...` as the error suggests, to hopefully catch the exact line where it happens (otherwise the stack trace returns an incorrect line due to asynchronous execution) <|||||>I did it, here is the new stacktrace
```
Traceback (most recent call last):
File "run_speech_recognition_ctc.py", line 649, in <module>
main()
File "run_speech_recognition_ctc.py", line 600, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/opt/conda/lib/python3.8/site-packages/transformers/trainer.py", line 1325, in train
tr_loss_step = self.training_step(model, inputs)
File "/opt/conda/lib/python3.8/site-packages/transformers/trainer.py", line 1884, in training_step
loss = self.compute_loss(model, inputs)
File "/opt/conda/lib/python3.8/site-packages/transformers/trainer.py", line 1916, in compute_loss
outputs = model(**inputs)
File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/opt/conda/lib/python3.8/site-packages/transformers/models/wav2vec2/modeling_wav2vec2.py", line 1618, in forward
outputs = self.wav2vec2(
File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/opt/conda/lib/python3.8/site-packages/transformers/models/wav2vec2/modeling_wav2vec2.py", line 1239, in forward
extract_features = self.feature_extractor(input_values)
File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/opt/conda/lib/python3.8/site-packages/transformers/models/wav2vec2/modeling_wav2vec2.py", line 442, in forward
hidden_states = conv_layer(hidden_states)
File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/opt/conda/lib/python3.8/site-packages/transformers/models/wav2vec2/modeling_wav2vec2.py", line 317, in forward
hidden_states = self.layer_norm(hidden_states)
File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/normalization.py", line 189, in forward
return F.layer_norm(
File "/opt/conda/lib/python3.8/site-packages/torch/nn/functional.py", line 2446, in layer_norm
return torch.layer_norm(input, normalized_shape, weight, bias, eps, torch.backends.cudnn.enabled)
RuntimeError: CUDA error: an illegal memory access was encountered
```<|||||>What command did you run to get this stack trace?<|||||>```
CUDA_LAUNCH_BLOCKING=1 python run_speech_recognition_ctc.py \
--dataset_name="common_voice" \
--model_name_or_path="facebook/wav2vec2-xls-r-1b" \
--dataset_config_name="de" \
--output_dir="./wav2vec2-xls-r-1b-german" \
--overwrite_output_dir \
--num_train_epochs="15" \
--per_device_train_batch_size="12" \
--gradient_accumulation_steps="1" \
--learning_rate="3e-4" \
--warmup_steps="500" \
--evaluation_strategy="steps" \
--text_column_name="sentence" \
--save_steps="400" \
--layerdrop="0.0" \
--save_total_limit="3" \
--freeze_feature_extractor \
--gradient_checkpointing \
--fp16 --fp16_opt_level "03" \
--group_by_length \
--do_train --do_eval \
--logging_steps=10 \
--eval_steps=25000 \
--max_train_samples=5000 --max_eval_samples=5000
```<|||||>Hmm - okay, not really sure. BTW, if you run this command on multiple GPUs it'll automatically run the Trainer in [DP](https://pytorch.org/docs/stable/generated/torch.nn.DataParallel.html), which is known to have some bugs.
Maybe:
```bash
CUDA_LAUNCH_BLOCKING=1 CUDA_VISIBLE_DEVICES="0" python run_speech_recognition_ctc.py \
--dataset_name="common_voice" \
--model_name_or_path="facebook/wav2vec2-xls-r-1b" \
--dataset_config_name="de" \
--output_dir="./wav2vec2-xls-r-1b-german" \
--overwrite_output_dir \
--num_train_epochs="15" \
--per_device_train_batch_size="12" \
--gradient_accumulation_steps="1" \
--learning_rate="3e-4" \
--warmup_steps="500" \
--evaluation_strategy="steps" \
--text_column_name="sentence" \
--save_steps="400" \
--layerdrop="0.0" \
--save_total_limit="3" \
--freeze_feature_extractor \
--gradient_checkpointing \
--fp16 --fp16_opt_level "03" \
--group_by_length \
--do_train --do_eval \
--logging_steps=10 \
--eval_steps=25000 \
--max_train_samples=5000 --max_eval_samples=5000
```
works?
But otherwise I really don't know - the command **does** work for me.<|||||>It turned out that a batch size of 4 runs fine, strangely.
I tried using DeepSpeed ZeRO for larger batches, but that returns out-of-memory at the init stage.
I think I need to set up a clean machine with a fresh CUDA install, which will hopefully fix it |
transformers | 14,860 | closed | Huggingface Transformers fastTokenizer for DeBERTa v3 | # 🚀 Feature request
<!-- A clear and concise description of the feature proposal.
Please provide a link to the paper and code in case they exist. -->
I want to try NER on a custom dataset using the Hugging Face Transformers DeBERTa v3 xsmall model.
I need to use a fast tokenizer, but it is not available.
I need the fast tokenizer for the `return_offsets_mapping` feature.
Do you have plans to release a fast tokenizer for DeBERTa v3?
When I run the below code,
`train_encodings = tokenizer(train_texts, is_split_into_words=True, return_offsets_mapping=True, padding=True, truncation=True, max_length=MAX_LENGTH)`
I get the error message below:
`NotImplementedError: return_offset_mapping is not available when using Python tokenizers.To use this feature, change your tokenizer to one deriving from transformers.PreTrainedTokenizerFast.`
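For context, this is how I check that the loaded tokenizer is the slow (Python) one; the checkpoint name below is simply the one I am using:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v3-xsmall")
# A Rust-backed tokenizer would report is_fast == True and support
# return_offsets_mapping; the DeBERTa v3 tokenizer currently does not.
print(type(tokenizer).__name__, tokenizer.is_fast)
```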
| 12-21-2021 04:33:09 | 12-21-2021 04:33:09 | Found #14712 for the same request<|||||>Let's centralize the discussion on #14712 if you don't mind :)<|||||>i have same problem |
transformers | 14,859 | closed | A potential bug in ModuleUtilsMixin.get_extended_attention_mask | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.13.0
- Platform:
- Python version: 3.8.5
- PyTorch version (GPU?): 1.10.0+cu102
- Tensorflow version (GPU?):
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
### Who can help
@LysandreJik
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- ALBERT, BERT, XLM, DeBERTa, DeBERTa-v2, ELECTRA, MobileBert, SqueezeBert: @LysandreJik
- T5, BART, Marian, Pegasus, EncoderDecoder: @patrickvonplaten
- Blenderbot, MBART: @patil-suraj
- Longformer, Reformer, TransfoXL, XLNet, FNet, BigBird: @patrickvonplaten
- FSMT: @stas00
- Funnel: @sgugger
- GPT-2, GPT: @patrickvonplaten, @LysandreJik
- RAG, DPR: @patrickvonplaten, @lhoestq
- TensorFlow: @Rocketknight1
- JAX/Flax: @patil-suraj
- TAPAS, LayoutLM, LayoutLMv2, LUKE, ViT, BEiT, DEiT, DETR, CANINE: @NielsRogge
- GPT-Neo, GPT-J, CLIP: @patil-suraj
- Wav2Vec2, HuBERT, SpeechEncoderDecoder, UniSpeech, UniSpeechSAT, SEW, SEW-D, Speech2Text: @patrickvonplaten, @anton-l
If the model isn't in the list, ping @LysandreJik who will redirect you to the correct contributor.
Library:
- Benchmarks: @patrickvonplaten
- Deepspeed: @stas00
- Ray/raytune: @richardliaw, @amogkam
- Text generation: @patrickvonplaten @narsil
- Tokenizers: @LysandreJik
- Trainer: @sgugger
- Pipelines: @Narsil
- Speech: @patrickvonplaten, @anton-l
- Vision: @NielsRogge, @sgugger
Documentation: @sgugger
Model hub:
- for issues with a model, report at https://discuss.huggingface.co/ and tag the model's creator.
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
For research projetcs, please ping the contributor directly. For example, on the following projects:
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
## Information
Model I am using (Bert, XLNet ...): T5
There is a potential bug in ModuleUtilsMixin.get_extended_attention_mask, and it has actually happened to me while training a T5 model from scratch. In the function, masked positions are set to a large negative number (-1e4), since the mask will be added to the raw scores before the softmax.
However, occasionally the value -1e4 is not small enough to nullify the scores in masked positions. In my case, some values in the raw scores before the softmax were smaller than -1e4 during training, so the model couldn't be trained correctly.
Here is the code I mentioned: [link](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_utils.py#L295-L302)
I think you use -1e4 for fp16 compatibility, so how about branching on the dtype, as in [the code](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_utils.py#L233-L236)?
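For illustration, here is a minimal sketch of the dtype-dependent choice I mean (the helper name is my own, not part of the library):

```python
import torch

def mask_fill_value(dtype: torch.dtype) -> float:
    # Mirror the dtype-dependent choice used elsewhere in modeling_utils:
    # fp16 cannot represent -1e9, so only fall back to -1e4 in that case.
    return -1e4 if dtype == torch.float16 else -1e9

attention_mask = torch.tensor([[1, 1, 0]])
extended_mask = (1.0 - attention_mask[:, None, None, :].to(torch.float32)) * mask_fill_value(torch.float32)
print(extended_mask)  # the masked position gets -1e9 instead of -1e4 in fp32
```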
## To reproduce
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
The function get_extended_attention_mask should use a smaller number to mask the tensor.
<!-- A clear and concise description of what you would expect to happen. -->
 | 12-21-2021 03:55:14 | 12-21-2021 03:55:14 | I understand that the choice of -1e4 may be suboptimal and that the masking may not be enough. I believe we've had this conversation before, but I can't find it anywhere; I'm open to revisiting it.
WDYT @patrickvonplaten @sgugger @patil-suraj @stas00 <|||||>I agree -1e4 might not be the best choice for every model. I think it's a good idea to handle this based on the `dtype`.
Also, I think maybe we should add an argument `masked_value` to `get_extended_attention_mask` to specify the default mask value for each model from its original implementation.<|||||>@LysandreJik related discussion #10484<|||||>What @patil-suraj said, we for example already do it here:
https://github.com/huggingface/transformers/blob/033c3ed95a14b58f5a657f5124bc5988e4109c9f/src/transformers/modeling_utils.py#L233-L236<|||||>I'm also curious what you guys think is the best move here.
IMO, the -10e4 comes from the original Google implementation of BERT and we just copied it everywhere. However I've now seen a couple of issues related to this. In addition to the one @patil-suraj posted we also have this one https://github.com/huggingface/transformers/issues/14521#issuecomment-992428858
@LysandreJik @sgugger @stas00 @patil-suraj - do you think it could make sense to change -10e4 to the minimum value of that dtype? <|||||>Makes sense to me!<|||||>Could it be that the original implementation had this hardcoded because it worked with a single specific dtype?
I think it makes a total sense to use the logic in https://github.com/huggingface/transformers/issues/14859#issuecomment-998969055<|||||>I think leveraging the minimum value for the `dtype` as @patrickvonplaten would work too and be quite better than the if/else statement, right? Something like `torch.finfo(self.dtype).min`<|||||>> `torch.finfo(self.dtype).min`
Loving it!
We should do the same for all other places where we have that conditional for masking and where no weights will be impacted by this change.
<|||||>Ok, happy to do a PR for this next week - putting it on my ToDo list<|||||>Centralizing all discussions on this:
https://github.com/huggingface/transformers/issues/9594
https://github.com/huggingface/transformers/issues/10484
https://github.com/huggingface/transformers/issues/15199<|||||>For people using GPT2 Model, there is a hacked hot fix of this issue:
```python
# Overwrite GPT-2's masked_bias buffers (originally -1e4) with -inf so that
# masked positions are fully suppressed; `model` is your already-initialized GPT-2 model.
for name, buffer in model.named_buffers():
if '.masked_bias' in name:
buffer.data = torch.tensor(float('-inf'))
```
Add these codes after model initialization.<|||||>I won't be able to look into this until the next 3,4 weeks - still have it on my ToDo list though. Depending on how important this is, feel free to take over whoever is interested<|||||>Any ideas which `transformers` versions are affected?<|||||>We've always used -10_000 from the very beginning for the `attention_mask` so changing this now would affect all transformers versions<|||||>Hi, this is on my TODO list - I have a few remaining things to finalize about PT/TF/Flax more aggressive testings, and will come back to this issue!<|||||>Start working on it :-)<|||||>@jk-jung
This is fixed now (finally) :-)<|||||>When I install version 4.29.0.dev0, there is a bug in modeling_utils.py at line 902:
TypeError: torch.iinfo() requires an integer input type. Use torch.finfo to handle 'torch.iinfo'
Could you plz give some advices? @ydshieh <|||||>@FayeXXX Could you show us a short code snippet to reproduce the issue please? Thank you! |
transformers | 14,858 | closed | [doc porting] several docs | Ported to mdx:
```
docs/source/debugging.rst
docs/source/testing.rst
docs/source/main_classes/deepspeed.rst
```
fixed a few small code block breakages in the first one.
@sgugger | 12-21-2021 02:51:34 | 12-21-2021 02:51:34 | Amazing that you noticed all those issues, @sgugger - I missed all of those.
It looks like we need to be careful with suggestions that include "```" - GitHub seems to be buggy there and ate the "````" you added since it's also part of the suggestion, so I had to switch to a 4-backtick suggestion. |
transformers | 14,857 | closed | Only create the model card on process 0 | # What does this PR do?
This PR makes sure the model card is only created and saved on process 0.
Fixes #14840 | 12-21-2021 02:49:56 | 12-21-2021 02:49:56 | Thank you @sgugger and Merry X'mas :) |
transformers | 14,856 | closed | [Generate] Remove attention_mask and integrate model_main_input_name | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Final clean-up of generate leveraging the `main_input_name` PR: https://github.com/huggingface/transformers/pull/14803 to:
- Remove `attention_mask` hack for vision models
- clean up the trainer_seq2seq to fix: https://github.com/huggingface/transformers/issues/13825
- clean up hard-coded input names in `generate`
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 12-20-2021 22:24:34 | 12-20-2021 22:24:34 | The changes to `Seq2SeqTrainer` also fix a problem with `predict_with_generate` for `VisionEncoderDecoder`.
With 4.15.0, setting `predict_with_generate=True` does not work with `VisionEncoderDecoder` and `Seq2SeqTrainer`.
<|||||>Hey @cgawron,
Do you have an issue with current master or is everything resolved? :-) Happy to make some more changes to make VisionEncoderDecoder work, but if I understand correctly everything is fine now no? <|||||>@patrickvonplaten Yes, current master works fine for me.<|||||>@cgawron for me it also works fine with v4.15, not sure what the issue is? |
transformers | 14,855 | closed | Make the onnx submodule init lazy | # What does this PR do?
The `onnx` submodule does not use a lazy init like the other ones. This results in importing a given model/tokenzier/config, which imports OnnxConfig, initializing the onnx submodule completely, and in turn importing `PreTrainedModel` and `TFPreTrainedModel` (so PyTorch and TensorFlow).
This PR solves this issue by making the init lazy like all the others. | 12-20-2021 20:48:48 | 12-20-2021 20:48:48 | |
transformers | 14,854 | closed | [Bart] better error message | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes https://github.com/huggingface/transformers/issues/13953
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 12-20-2021 19:22:09 | 12-20-2021 19:22:09 | |
transformers | 14,853 | closed | replace native python deprecated floor function by torch version | # What does this PR do?
Removes a deprecation warning in LayoutLMv2 runs caused by the deprecated native floor division.
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
I ran the tests and the linter !
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@NielsRogge (Since I Know you wrote it, thanks a ton by the way !) | 12-20-2021 19:11:18 | 12-20-2021 19:11:18 | @NielsRogge @LysandreJik Any updates/reasons not to merge this ?<|||||>@LysandreJik according to the [docs](https://pytorch.org/docs/stable/generated/torch.div.html) of PyTorch's torch.div:
> "floor" - rounds the results of the division down. Equivalent to floor division in Python (the // operator)
So that seems good to me. However, in which PyTorch version was the `rounding_mode` argument added?
If it was only added in torch 1.10, then this will break on any previous PyTorch version.<|||||>That's not what the warning says though:
```
>>> import torch
>>> torch.tensor(2) // torch.tensor(1)
__main__:1: UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').
tensor(2)
```
Your last comment @NielsRogge is aligned with @patrickvonplaten's comment here: https://github.com/huggingface/transformers/pull/14577#issuecomment-986587459<|||||>So looking at the linked issue, it seems the custom division function that takes into account the torch version has not been written yet. Should I write it ?
There's probably no better fix but my worry is that using a custom tensor div function may hinder the readibility of the code a bit. Any suggestions on how to make this change as transparent as possible ?<|||||>As for the "floor" vs "trunc" debate, it only behaves differently with negative input values so it shouldn't change a thing in this case. In a more general case (if we write a function in the file_utils.py), I am guessing it would be better to keep the exact behavior by using "trunc" by default, and enable "floor" as an argument. We would however have to rewrite the "floor" behaviour for the native python division in torch versions < 1.8<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
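A rough sketch of the kind of version-aware helper I have in mind (the helper name and the assumption that `rounding_mode` landed in torch 1.8 are mine):

```python
import torch
from packaging import version

_TORCH_HAS_ROUNDING_MODE = version.parse(torch.__version__) >= version.parse("1.8.0")

def int_div(a: torch.Tensor, b) -> torch.Tensor:
    # Keep the old `//` (truncating) behaviour without triggering the
    # __floordiv__ deprecation warning on newer torch versions.
    if _TORCH_HAS_ROUNDING_MODE:
        return torch.div(a, b, rounding_mode="trunc")
    return a // b
```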
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Sorry just getting back to this now - could you leverage the code introduced in https://github.com/huggingface/transformers/pull/15180 to keep support for previous PyTorch versions?<|||||>Messed up the rebase, I'll just create a new PR #15457 for cleanliness and close this one |
transformers | 14,852 | closed | Replace commit sha by commit url for update jobs | # What does this PR do?
As requested by @julien-c , this PR replaces the commit shas by the full urls in the update jobs:
- for the documentation
- for the transformers metadata dataset | 12-20-2021 18:58:15 | 12-20-2021 18:58:15 | |
transformers | 14,851 | closed | Fine-tune GPT-J with SageMaker Model Parallelism | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.11.0
- Platform: SageMaker
- Python version: 3.8
- PyTorch version (GPU?): 1.9.0
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
@philschmid and @patil-suraj
Models: GPT-J
## Information
I am trying to fine-tune GPT-J on SageMaker using Model Parallelism and I am getting CUDA OOM errors when loading the model. **I have confirmed that my code works with GPT-2**.
I'm on a relatively new account and am in the process of increasing my instance allocations; however, I have attempted to fine-tune GPT-J on one `ml.p3.16xlarge` (8 V100 16GB GPUs) instance and then on two `ml.p3.16xlarge` instances.
In both cases, I set the per device batch size to 1 and `fp16` to `True`. I am also only using a single partition of the model.
I expected GPT-J to exceed 96GB, but I did not think that loading the model alone would exceed 128GB (i.e. I expected to at least be able to fit the model into memory on a single `ml.p3.16xlarge`). And the model should definitely fit on two `ml.p3.16xlarge`s. That said, when I tried to train on two instances, my logs indicated that the model was loaded on each instance -- in contrast, I expected a single initialization of the model to be spread across 16 GPUs (I interpret this as a sign of user error...).
I've scoured the Hugging Face and SageMaker docs and haven't found a solution to my problem, so I am hoping for a little guidance. I assume the problem is related to one or more of the following possibilities:
* I am misusing SageMaker's Model Parallelism
* I simply need more VRAM.
* SageMaker's Model Parallelism is doing something inefficient with GPT-J
The problem arises when using:
* [x] my own modified scripts: I am using a very lightly modified version of a SageMaker mod of [run_clm.py](https://github.com/aws/amazon-sagemaker-examples/blob/master/sagemaker-training-compiler/huggingface/pytorch_multiple_gpu_single_node/scripts/run_clm.py), which is associated with a tutorial on SageMaker's Training Compiler. The only additional modification I made was to import `SageMakerTrainer` and `SageMakerTrainingArguments` as follows:
```python
from transformers.sagemaker import SageMakerTrainingArguments as TrainingArguments
from transformers.sagemaker import SageMakerTrainer as Trainer
```
The tasks I am working on is:
* [x ] an official GLUE/SQUaD task: sst2
## To reproduce
Steps to reproduce the behavior:
I am starting my SageMaker Training job with the HuggingFace SDK.
1. **Specify hyperparameters**
```python
from sagemaker.huggingface import HuggingFace, TrainingCompilerConfig
INSTANCE_TYPE = "ml.p3.16xlarge" # ml.p3.8xlarge is easily available. However, p3.16xlarge provides better performance.
per_device_batch_size = 1
num_gpus_per_instance = 8
learning_rate = (
float("5e-5") / 32 * per_device_batch_size * num_gpus_per_instance
)
hyperparameters = {
"tokenizer_name": "EleutherAI/gpt-j-6B",
"model_name_or_path": "EleutherAI/gpt-j-6B",
"dataset_name": "glue",
"dataset_config_name": "sst2",
"do_train": True,
"do_eval": True,
"fp16": True,
"per_device_train_batch_size": per_device_batch_size,
'per_device_eval_batch_size': per_device_batch_size,
"learning_rate": learning_rate,
"num_train_epochs": 1,
"block_size": 700,
"overwrite_output_dir": True,
"save_strategy": "no",
"logging_strategy": "epoch",
"output_dir": "/opt/ml/model",
"max_train_samples": 100,
"cache_dir":"/tmp",
'max_grad_norm': 0,
'preprocessing_num_workers': 1,
}
```
2. **Specify MP distribution**
```python
# configuration for running training on smdistributed model parallel
mpi_options = {
"enabled" : True,
"processes_per_host" : num_gpus_per_instance
}
smp_options = {
"enabled":True,
"parameters": {
"microbatches": 1,
"placement_strategy": "spread",
"pipeline": "interleaved",
"optimize": "memory",
"partitions": 1,
"ddp": True,
}
}
distribution={
"smdistributed": {"modelparallel": smp_options},
"mpi": mpi_options
}
```
3. **Initialize the estimator**
```python
# configure the training job
optimized_estimator = HuggingFace(
entry_point="run_clm_orig_mp.py", # Wrapper around training script that enables multi GPU training
source_dir="../scripts",
instance_type=INSTANCE_TYPE,
instance_count=1,
role=role,
volume_size=500,
py_version="py38",
transformers_version="4.11.0",
pytorch_version="1.9.0",
hyperparameters=hyperparameters,
distribution = distribution,
debugger_hook_config=False, # Disabling SageMaker Debugger to avoid overheads during benchmarking
)
```
4. **Start Training job**
```python
# start the training job
optimized_estimator.fit(wait=False)
optimized_estimator.latest_training_job.name
```
## Expected behavior
I expected GPT-J to be distributed across N devices and then fine-tuned.
| 12-20-2021 18:04:16 | 12-20-2021 18:04:16 | Hi @joehoover -
Thanks for raising this issue! Couple of things based on the above configuration setup -
1. For the first experiment, can you try a single-instance training job with an SMP configuration where partitions = number of devices, ddp is disabled (set to False), and processes_per_host = <number of GPU devices>, and see whether the partition fits into a single GPU's memory (see the sketch below)? Please note that single-GPU memory for p3.16xlarge is 16GB/GPU. With the above setup you still have a single partition and, given the size of the FP16-quantized GPT-J model, it might not fit into single-GPU memory, hence the OOM. You might need to tune the number of partitions once the OOM error goes away.
2. Please attach the training script that you are using so that I can guide you further.
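A sketch of what I mean for point 1, reusing the dicts from your issue (the values are illustrative only):

```python
# Illustrative only - same structure as the distribution config in the issue body.
num_gpus_per_instance = 8

mpi_options = {
    "enabled": True,
    "processes_per_host": num_gpus_per_instance,
}
smp_options = {
    "enabled": True,
    "parameters": {
        "microbatches": 1,
        "placement_strategy": "spread",
        "pipeline": "interleaved",
        "optimize": "memory",
        "partitions": num_gpus_per_instance,  # one partition per GPU
        "ddp": False,                         # disable data parallelism for now
    },
}
distribution = {"smdistributed": {"modelparallel": smp_options}, "mpi": mpi_options}
```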
I will also try to reproduce the issue at my end and let you know the results.
Looking forward to your response!<|||||>Hi @dhawalkp, thanks for looking into this! I think you've already cleared up some confusion on my end; I somehow misunderstood the definition of partitions in SageMaker's model parallelism library. For some reason, I thought partitions referred to replications of the model, e.g. I thought `partitions = 2, ddp=True` would implement 2 way data parallelism.
I tried rerunning with your suggested hyperparameters and I still hit an OOM error; however, the discrepancy between the memory allocated and the memory available decreased substantially:
```bash
2021-12-21T10:20:21.096-05:00
Copy
[1,0]<stderr>:RuntimeError: CUDA out of memory. Tried to allocate 64.00 MiB (GPU 0; 15.78 GiB total capacity; 14.34 GiB already allocated; 22.00 MiB free; 14.35 GiB reserved in total by PyTorch)
```
I'm trying again with a much smaller block size (changed from `700` to `250`) and I'll update my response with the results of that test.
I've also attached my complete training script, which I converted to .txt so I could upload to github.
[run_clm_orig_mp.txt](https://github.com/huggingface/transformers/files/7756450/run_clm_orig_mp.txt)
If you don't mind another question, perhaps you could clarify how I would specify 2-way data parallelism and 4-way model parallelism with the library. I understand that that would involve 8 processes and therefore 8 devices. And I now understand (I think) that setting `partitions=8` and `ddp = False` specifies 8-way model parallelism. But I'm not sure how I would switch to 2-way data parallelism and 4-way model parallelism.
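My current guess (which may well be wrong) is something like the following, assuming the data-parallel degree falls out as total processes divided by `partitions`:

```python
# Pure guesswork on my part - please correct me if this isn't how the library
# divides processes between model and data parallelism.
smp_options = {
    "enabled": True,
    "parameters": {
        "partitions": 4,   # 4-way model parallelism
        "ddp": True,       # remaining 8 / 4 = 2 replicas for data parallelism?
        "microbatches": 1,
        "placement_strategy": "spread",
        "pipeline": "interleaved",
        "optimize": "memory",
    },
}
```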
Thanks!
### Update
Ran into the same OOM error with `block=250`. I'm now trying `instances=2` and `partitions=16`.
### Update 2
Running into resource issues when requesting 2 `ml.p3.16xlarge` instances. I'll report back when the request goes through.<|||||>Hi Joe -
I can reproduce the CUDA out of memory even after trying with partitions= number of GPUs with 2 machines in the cluster. I am checking this issue further and working with sagemaker product team on this. Will keep you posted.
Thanks<|||||>Hey @dhawalkp, thanks so much for the update! <|||||>Hi Joe -
I analyzed the stack trace of the thread that errored with CUDA OOM. Based on the stack trace, the initial model tracing step is erroring out because the size of the model is 24GB (FP32). The initial model tracing uses the FP32 GPT-J model and uses the GPU device's memory by default. As the GPU device's memory is limited to 16GB, it errors out with OOM. The SageMaker Model Parallel library has added a parameter called trace_device which, if set to CPU, makes the initial tracing happen in host memory instead of GPU memory. I will perform another test with the HF Estimator with trace_device=cpu and see if everything runs well.
Another option is to adopt latest Prebuilt SageMaker container image for PyTorch (https://github.com/aws/deep-learning-containers/releases/tag/v1.12-pt-1.8.1-tr-gpu-py36) and extend the below example of training the HF Models with Model parallelism using SageMaker PyTorch prebuilt container. You will have to modify your training script as per the example shown here - https://github.com/aws/amazon-sagemaker-examples/blob/8b4789a6a91c8fd3b342a8749619cfde53875666/training/distributed_training/pytorch/model_parallel/gpt2/train_gpt_simple.py
Example notebook: https://github.com/aws/amazon-sagemaker-examples/blob/8b4789a6a91c8fd3b342a8749619cfde53875666/training/distributed_training/pytorch/model_parallel/gpt2/submit-train-gpt-simple.ipynb
The above example has been demonstrated to train a model of 100B parameters using the SMP library with the latest container image.
I will keep you posted on the results of the first option I suggested.
Thanks
<|||||>Hey Dhawalkumar,
I'm trying to figure out how to set `trace_device` to `cpu` but I've not had any luck so far. I took a wild guess and specified ` "trace_device": 'cpu'` in my `modelparallel` specification (i.e. it added it to my distribution specification). However, my logs still indicate that tracing was done on GPU:
```
[1,0]<stdout>:[2022-01-03 19:58:03.507: I smdistributed/modelparallel/torch/worker.py:280] Tracing on GPU. If the model parameters do not fit in a single GPU, you can set trace_device to `cpu`.
```
I've since tried to identify where `trace_device` should be specified. This led me to SageMaker [docs](https://sagemaker.readthedocs.io/en/v2.21.0/api/training/smd_model_parallel_pytorch.html) for `DistributedModel` and it appears that `trace_device` should be specified when a `DistributedModel` is instantiated.
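Based on those docs, I'd expect direct use of the library (outside of the `Trainer`) to look roughly like this sketch; I haven't actually been able to run this, so treat the calls below as assumptions taken from the docs:

```python
# Sketch based on the SageMaker model-parallel docs linked above; `base_model`
# and `base_optimizer` stand in for whatever the training script builds.
import smdistributed.modelparallel.torch as smp

smp.init()
model = smp.DistributedModel(base_model, trace_device="cpu")  # trace on host memory
optimizer = smp.DistributedOptimizer(base_optimizer)
```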
However, the Transformer's `Trainer` class doesn't seem to pass `trace_device` or kwargs [when it instantiates](https://github.com/huggingface/transformers/blob/f2ab21833f3c1fcc8dea76988bdacd368fca779e/src/transformers/trainer.py#L958) a `DistributedModel` instance.
I very well might be misunderstanding something, but I'm nonetheless still not sure how to specify `trace_device` when using the HuggingFace estimator.
Thanks!
<|||||>Hi Joe -
The HuggingFace SageMaker container might not yet have support for this parameter (trace_device). We are working on upgrading the container. Meanwhile, please use the SageMaker PyTorch container option mentioned in my previous post. That should definitely work.
Thanks<|||||>@dhawalkp in which version did `trace_device` get introduced? <|||||>@philschmid - Its not present in the current version of HF DLC yet. It's in the roadmap.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>@josephevans - Checking if there is a separate PR for Latest SageMaker Model Parallel library version support for SageMaker hugging face DLC.<|||||>Hi @joehoover - I have published the notebook for training GPT-J using SageMaker Model Parallel here - https://github.com/aws/amazon-sagemaker-examples/tree/main/training/distributed_training/pytorch/model_parallel/gpt-j. Thanks for collaborating on this! Let me know if this issue is resolved.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 14,850 | closed | Convert docstrings of modeling files | # What does this PR do?
This is going to be a huge PR (sorry) but all modeling files have docstrings that are connected to the model outputs and the code samples in file_utils, so the conversion from rst to Markdown has to happen for all of those at the same time.
The doc-styler is commented out for now, as it can't deal with example written in Markdown. I will work on re-enabling it as soon as we have finished converting all the documentation.
Separately, this PR deals with an issue of the Return block in some dosctrings being indented at 4 spaces instead of the indentation inside the docstring, which resulted in the Example block as being part of the return block. | 12-20-2021 17:45:35 | 12-20-2021 17:45:35 | Saw with @sgugger and the newline before `"""` isn't important and isn't liked by black.
As seen with Sylvain and Patrick, merging this PR now to prevent any conflicts - let's take a look at `master` once deployed to ensure everything works correctly.<|||||>From what I'm seeing, everything looks correct, including code samples! Thanks for your work, @sgugger! |
transformers | 14,849 | closed | [doc] typo | fix small typo
Fixes: https://github.com/huggingface/huggingface_hub/issues/550 | 12-20-2021 16:26:25 | 12-20-2021 16:26:25 | |
transformers | 14,848 | closed | [ASR example] Improve example + add more examples | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Adds more examples to ASR README.md
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 12-20-2021 12:51:43 | 12-20-2021 12:51:43 | |
transformers | 14,847 | closed | Add SD and SV heads for WavLM | # What does this PR do?
This adds `WavLMForAudioFrameClassification` and `WavLMForXVector` as a follow-up to https://github.com/huggingface/transformers/pull/14723 | 12-20-2021 12:30:19 | 12-20-2021 12:30:19 | |
transformers | 14,846 | closed | Train Bart-Large multi-labels | Hello,
I have two similar questions. **First**, I'm trying to fine-tune facebook/bart-large (https://huggingface.co/facebook/bart-large), but I want to train with 2 classes:
Example:
```
Text-input: "Hello, this text is about school and education."
Class 1: [X, Y, Z]
Class 2: [A, B, C]
Model answer: Y, C
```
I'm using the run_glue file on my personal training files. My question is: How should I set the labels of the training file to train?
My labels with 1 class (8 labels) are actually 0, 1, 2...7 so my csv-entry is something like:
```
Input - Label
Text1 - 2
Text2 - 6
Text3 - 4
```
(This is an example; my CSV is in the GLUE-like format and works with 1 class.)
I don't know how to modify them to train on 2 classes.
**Next** question: I also want to fine-tune the network with 1 class but in multi-label mode.
Example:
```
Text-input: "Hello, this text is about school and education."
Class 1: 0, 1, 2, 3, 4, 5, 6, 7
Model answer: 2, 4, 6
```
I've tried something like this, but it's not working
```
Input - Label
Text1 - [2,3]
Text2 - [6]
Text3 - [1,2,5,6]
```
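For clarity, by multi-label mode I mean multi-hot targets with a BCE-style loss; the snippet below is only my sketch of that idea (not working training code):

```python
import torch

num_labels = 8
label_sets = [[2, 3], [6], [1, 2, 5, 6]]  # the labels from my CSV example

# Multi-hot encoding: one 0/1 vector of length num_labels per example.
targets = torch.zeros(len(label_sets), num_labels)
for i, labels in enumerate(label_sets):
    targets[i, labels] = 1.0

loss_fn = torch.nn.BCEWithLogitsLoss()              # instead of cross-entropy
logits = torch.randn(len(label_sets), num_labels)   # stand-in for model output
loss = loss_fn(logits, targets)
```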
Thank you, sorry if it is explained a little badly. | 12-20-2021 10:43:11 | 12-20-2021 10:43:11 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Hello, thanks for opening an issue and sorry for getting back late to this! We try to keep the github issues for bugs/feature requests.
Could you ask your question on the [forum](https://discuss.huggingface.co) for a better chance of getting an answer?
Thanks! |
transformers | 14,845 | closed | [WavLM] Fix slow tests | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes slow tests for WavLM. Multiple fine-tuning experiments have been done for WavLM, so we know that the model works fine:
https://huggingface.co/models?other=wavlm_libri_finetune
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 12-20-2021 10:34:53 | 12-20-2021 10:34:53 | |
transformers | 14,844 | closed | inconsistent BertTokenizer and BertTokenizerFast | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.12.5
- Platform: centos
- Python version: 3.7.6
- PyTorch version (GPU?): CPU
- Tensorflow version (GPU?): CPU
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
## Problem
BertTokenizer and BertTokenizerFast behave differently for unknown tokens
``` python
bert_tokenizer = BertTokenizer.from_pretrained('bert-base-chinese')
bert_tokenizer_fast = BertTokenizerFast.from_pretrained('bert-base-chinese')
content_encodings = bert_tokenizer(text_list, truncation=True, padding='max_length', max_length=64, return_tensors='pt')
content_encodings_fast = bert_tokenizer_fast(text_list, truncation=True, padding='max_length', max_length=64, return_tensors='pt')
```

| 12-20-2021 09:03:55 | 12-20-2021 09:03:55 | - `transformers` version: 4.14.1
- Platform: Darwin 20.3.0
- Python version: 3.8.6
- PyTorch version (GPU?): 1.10.1, no GPU
Using the below code, I was not able to reproduce the differences you show. The two tokenizers resulted in the same output. Maybe upgrade `transformers` and clear your `transformers` cache (default is `~/.cache/huggingface/transformers`)?
Code:
<details>
```
import torch
from transformers import BertTokenizer, BertTokenizerFast
bert_tokenizer = BertTokenizer.from_pretrained('bert-base-chinese')
bert_tokenizer_fast = BertTokenizerFast.from_pretrained('bert-base-chinese')
print(f'bert_tokenizer == bert_tokenizer_fast: {bert_tokenizer == bert_tokenizer_fast}\n')
strings = ['早安 都没人聊天吗🥱', '给大家表演下靓仔打哈欠🥱', '📷每周总有那么两天是开心的〜好起来了🤏']
for string in strings:
print('-'*100, '\n')
print(string)
content_encodings = bert_tokenizer(string, truncation=True, padding='max_length', max_length=64, return_tensors='pt')
content_encodings_fast = bert_tokenizer_fast(string, truncation=True, padding='max_length', max_length=64, return_tensors='pt')
decoded_slow = bert_tokenizer.decode(content_encodings['input_ids'][0].tolist())
decoded_fast = bert_tokenizer_fast.decode(content_encodings_fast['input_ids'][0].tolist())
print('Slow:\n', decoded_slow, sep='')
print('Fast:\n', decoded_fast, sep='')
print(f'\ndecoded_slow == decoded_fast: {decoded_slow == decoded_fast}\n')
```
</details>
Output:
<details>
```
bert_tokenizer == bert_tokenizer_fast: False
----------------------------------------------------------------------------------------------------
早安 都没人聊天吗🥱
Slow:
[CLS] 早 安 都 没 人 聊 天 吗 [UNK] [SEP] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD]
Fast:
[CLS] 早 安 都 没 人 聊 天 吗 [UNK] [SEP] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD]
decoded_slow == decoded_fast: True
----------------------------------------------------------------------------------------------------
给大家表演下靓仔打哈欠🥱
Slow:
[CLS] 给 大 家 表 演 下 靓 仔 打 哈 欠 [UNK] [SEP] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD]
Fast:
[CLS] 给 大 家 表 演 下 靓 仔 打 哈 欠 [UNK] [SEP] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD]
decoded_slow == decoded_fast: True
----------------------------------------------------------------------------------------------------
📷每周总有那么两天是开心的〜好起来了🤏
Slow:
[CLS] [UNK] 每 周 总 有 那 么 两 天 是 开 心 的 〜 好 起 来 了 [UNK] [SEP] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD]
Fast:
[CLS] [UNK] 每 周 总 有 那 么 两 天 是 开 心 的 〜 好 起 来 了 [UNK] [SEP] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD]
decoded_slow == decoded_fast: True
```
</details><|||||>> * `transformers` version: 4.14.1
> * Platform: Darwin 20.3.0
> * Python version: 3.8.6
> * PyTorch version (GPU?): 1.10.1, no GPU
>
> Using the below code, I was not able to reproduce the differences you show. The two tokenizers resulted in the same output. Maybe upgrade `transformers` and clear your `transformers` cache (default is `~/.cache/huggingface/transformers`)?
>
> Code:
>
> Output:
I ran the above code, and the problem still exists.
Therefore, I switched to a new GPU machine that was freshly set up with `torch.__version__ = 1.10.1+cu102, transformers.__version__ = 4.14.1, python = 3.7.3`, and manually downloaded the bert-base-chinese model from the Transformers [repo](https://huggingface.co/bert-base-chinese) to local storage. Once everything was ready, I ran the above code again, and the slow and fast tokenizers still show a difference. Here is the screenshot

Here is the Output:
```
torch.__version__ = 1.10.1+cu102, transformers.__version__ = 4.14.1
bert_tokenizer == bert_tokenizer_fast: False
----------------------------------------------------------------------------------------------------
早安 都没人聊天吗🥱
Slow:
[CLS] 早 安 都 没 人 聊 天 吗 [SEP] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD]
Fast:
[CLS] 早 安 都 没 人 聊 天 吗 [UNK] [SEP] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD]
decoded_slow == decoded_fast: False
----------------------------------------------------------------------------------------------------
给大家表演下靓仔打哈欠🥱
Slow:
[CLS] 给 大 家 表 演 下 靓 仔 打 哈 欠 [SEP] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD]
Fast:
[CLS] 给 大 家 表 演 下 靓 仔 打 哈 欠 [UNK] [SEP] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD]
decoded_slow == decoded_fast: False
----------------------------------------------------------------------------------------------------
📷每周总有那么两天是开心的〜好起来了🤏
Slow:
[CLS] [UNK] 每 周 总 有 那 么 两 天 是 开 心 的 〜 好 起 来 了 [SEP] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD]
Fast:
[CLS] [UNK] 每 周 总 有 那 么 两 天 是 开 心 的 〜 好 起 来 了 [UNK] [SEP] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD]
decoded_slow == decoded_fast: False
```<|||||>Thanks for raising the issue! Let me ping @SaulLu, who might have better context on what's happening<|||||>I can confirm that I was able to reproduce this issue on a [fresh Colab notebook](https://colab.research.google.com/drive/1KqN09oSEyzyQc1brEmaNpApUzIH76qy-?usp=sharing). Also, the problem is not specific to `bert-base-chinese`, as `bert-base-uncased` also exhibits the same phenomenon.
Code:
```python
import torch
from transformers import BertTokenizer, BertTokenizerFast
model_name = "bert-base-uncased"
bert_tokenizer = BertTokenizer.from_pretrained(model_name)
bert_tokenizer_fast = BertTokenizerFast.from_pretrained(model_name)
print(f'bert_tokenizer == bert_tokenizer_fast: {bert_tokenizer == bert_tokenizer_fast}\n')
strings = ['🥱', '📷🤏', '🦾']
for string in strings:
print('-'*100, '\n')
print(string)
content_encodings = bert_tokenizer(string, truncation=True, padding='max_length', max_length=64, return_tensors='pt')
content_encodings_fast = bert_tokenizer_fast(string, truncation=True, padding='max_length', max_length=64, return_tensors='pt')
decoded_slow = bert_tokenizer.decode(content_encodings['input_ids'][0].tolist())
decoded_fast = bert_tokenizer_fast.decode(content_encodings_fast['input_ids'][0].tolist())
print('Slow:\n', decoded_slow, sep='')
print('Fast:\n', decoded_fast, sep='')
print(f'\ndecoded_slow == decoded_fast: {decoded_slow == decoded_fast}\n')
```
Output:
```
bert_tokenizer == bert_tokenizer_fast: False
----------------------------------------------------------------------------------------------------
🥱
Slow:
[CLS] [SEP] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD]
Fast:
[CLS] [UNK] [SEP] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD]
decoded_slow == decoded_fast: False
----------------------------------------------------------------------------------------------------
📷🤏
Slow:
[CLS] [UNK] [SEP] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD]
Fast:
[CLS] [UNK] [SEP] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD]
decoded_slow == decoded_fast: True
----------------------------------------------------------------------------------------------------
🦾
Slow:
[CLS] [SEP] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD]
Fast:
[CLS] [UNK] [SEP] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD]
decoded_slow == decoded_fast: False
```<|||||>Thank you very much for the clarity of the examples!
I'm currently on vacation, so I can only look deeper into this issue when I come back next week :slightly_smiling_face: <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 14,843 | closed | T5ForConditionalGeneration lm_head not initialized from pretrained checkpoint | It seems that pretrained T5 checkpoints such as `"t5-small"`, `"t5-base"`, etc., do not contain the `lm_head` parameter.
https://github.com/huggingface/transformers/blob/84ea427f460ffc8d2ddc08a341ccda076c24fc1f/src/transformers/models/t5/modeling_t5.py#L1434-L1438
@patrickvonplaten explained in #3553 that these three parameters are shared, but, while I agree that the encoder and decoder embeddings are shared, I don't think `lm_head` is also shared with them. This is because (1) I couldn't find any such sharing logic in `modeling_t5.py`, and (2) these weights are separate in the original TF checkpoint. | 12-20-2021 07:09:20 | 12-20-2021 07:09:20 | Hey @ZhaofengWu,
For all encoder-decoder models, such as T5, Bart, ..., the `lm_head` of the decoder is shared with the decoder's input word embedding if `tie_word_embeddings` is set to `True` in the model's config. You can check the config for `t5-small` [here](https://huggingface.co/t5-small/blob/main/config.json), for example.
If the config doesn't set this parameter, then one should look at the default value in `configuration_utils.py` here: https://github.com/huggingface/transformers/blob/c1125dc2ba9f3c383bf860ac9fcd67268385ad8d/src/transformers/configuration_utils.py#L256 where it is set to ``True``.<|||||>In TF the weights are also shared.<|||||>Ah, ok, thanks for pointing me to where the tying takes place. Also, I didn't realize that T5 v1.1 and T5 behave differently here; I had this question because I saw separate parameters in the TF checkpoint for v1.1.
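A quick runtime check of the tying described above (a sketch; `lm_head` and `shared` are the attributes referenced in the linked T5 code):

```python
from transformers import T5ForConditionalGeneration

model = T5ForConditionalGeneration.from_pretrained("t5-small")
print(model.config.tie_word_embeddings)  # True for the original T5 checkpoints
# When tied, lm_head's weight is the very same tensor as the shared embedding matrix.
print(model.lm_head.weight.data_ptr() == model.shared.weight.data_ptr())  # True when tied
```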
transformers | 14,842 | closed | Fix dead link to benchmarks.ipynb | Notebook has been updated here https://github.com/huggingface/notebooks/tree/master/examples/benchmark.ipynb
# What does this PR do?
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 12-20-2021 06:10:06 | 12-20-2021 06:10:06 | |
transformers | 14,841 | closed | Unable to save trained model in path | I am downloading the model <https://huggingface.co/microsoft/Multilingual-MiniLM-L12-H384/tree/main> **microsoft/Multilingual-MiniLM-L12-H384** and then using it.
Transformers version: 4.11.3
**I have written the code below:**
```
import transformers as tr

# `device`, `train_data`, `val_data`, and `compute_metrics` are assumed to be defined earlier.
model = tr.BertForSequenceClassification.from_pretrained("/home/pc/minilm_model", num_labels=2)
model.to(device)
print("hello")
training_args = tr.TrainingArguments(
output_dir='/home/pc/proj/results2', # output directory
num_train_epochs=10, # total number of training epochs
per_device_train_batch_size=16, # batch size per device during training
per_device_eval_batch_size=32, # batch size for evaluation
learning_rate=2e-5,
warmup_steps=1000, # number of warmup steps for learning rate scheduler
weight_decay=0.01, # strength of weight decay
logging_dir='./logs', # directory for storing logs
logging_steps=1000,
evaluation_strategy="epoch",
save_strategy="no"
)
print("hello")
trainer = tr.Trainer(
model=model, # the instantiated 🤗 Transformers model to be trained
args=training_args, # training arguments, defined above
train_dataset=train_data, # training dataset
eval_dataset=val_data, # evaluation dataset
compute_metrics=compute_metrics
)
```
**The folder is empty after I train the model.**
**Also, is it okay to pass `num_labels=2` for binary classification?**
**The model's last layer is a simple linear layer that outputs logits. How do I get probability scores out of it?**
`model = tr.BertForSequenceClassification.from_pretrained("/home/pchhapolika/minilm_model",num_labels=2)`
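A minimal sketch of one way to turn those logits into probabilities with a softmax (the tokenizer call here is an assumption about how the inputs are built; `device` and `model` are as above):

```python
import torch

tokenizer = tr.AutoTokenizer.from_pretrained("/home/pc/minilm_model")  # assumes a tokenizer was saved with the model
inputs = tokenizer("some text to classify", return_tensors="pt").to(device)
with torch.no_grad():
    logits = model(**inputs).logits      # shape: (batch_size, 2)
probs = torch.softmax(logits, dim=-1)    # per-class probabilities, each row sums to 1
```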
 | 12-20-2021 05:32:04 | 12-20-2021 05:32:04 | You are using `save_strategy="no"`, so, as requested, the model is not saved. You can add a line `trainer.save_model(xxx)` to save it manually after training.<|||||>@sgugger I changed it to `save_strategy="epoch"`, and it now saves a model every epoch, in checkpoints numbered like `checkpoint-915`, `checkpoint-12250`, and so on.
**How can we save only the best model?**
**Which checkpoint corresponds to which epoch?**
<|||||>You cannot save only the best model; the minimum you can do is use `save_total_limit=1` to keep at most one checkpoint, plus the best checkpoint if you use `load_best_model_at_end=True`. Using that option will give you the best model inside the `Trainer` at the end of training, so calling `trainer.save_model(xxx)` will then allow you to save it where you want.
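For illustration, a minimal `TrainingArguments` sketch combining those options (the paths, metric, and limits here are placeholders rather than recommendations):

```python
training_args = tr.TrainingArguments(
    output_dir="/home/pc/proj/results2",
    num_train_epochs=10,
    evaluation_strategy="epoch",
    save_strategy="epoch",              # must match evaluation_strategy for load_best_model_at_end
    save_total_limit=1,                 # keep at most one checkpoint besides the best one
    load_best_model_at_end=True,
    metric_for_best_model="eval_loss",  # assumption: select the best checkpoint by eval loss
)
# After trainer.train(), the best model is loaded back, so it can be saved anywhere:
# trainer.save_model("/home/pc/proj/best_model")  # hypothetical target directory
```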
As for your other questions, you can see that the numbers are all multiples of 915, so epoch n has a checkpoint named checkpoint-{n * 915}, and you have 915 training steps in each epoch.<|||||>> load_best_model_at_end=True
Does the parameter `load_best_model_at_end=True` go inside `tr.TrainingArguments()`?<|||||>Yes, see the [documentation](https://huggingface.co/docs/transformers/main_classes/trainer#transformers.TrainingArguments.load_best_model_at_end).<|||||>> Yes, see the [documentation](https://huggingface.co/docs/transformers/main_classes/trainer#transformers.TrainingArguments.load_best_model_at_end).
Thank you. |
transformers | 14,840 | closed | trainer.create_model_card should be run by process 0 only in distributed training | ### Who can help
@gchhablani, @sgugger
## Information
Trainer.create_model_card() does not check which process is running. In distributed training, all processes try to write to $output_dir/README.md, which sometimes causes crashes.
The problem arises when using run_clm.py in distributed training.
## To reproduce
Steps to reproduce the behavior:
1. Install transformers from local source by pip install -e .
2. Make two small changes to create_model_card() in trainer.py, in order to highlight the issue:
(1) added a logging statement right before README.md is written, and
(2) appended the process index to the output filename:

    def create_model_card(self, ...):
        training_summary = TrainingSummary.from_trainer(self, language=language, ...)
        model_card = training_summary.to_model_card()
        logger.warning(f"Process {self.args.process_index} writes to README.md!")  # change (1)
        with open(os.path.join(self.args.output_dir, "README.md" + str(self.args.process_index)), "w") as f:  # change (2)
            f.write(model_card)
3. Run run_clm.py on any training task with 2 GPUs, `push_to_hub=False`, and the modified create_model_card().
4. At the end of the script, `trainer.create_model_card(**kwargs)` is called. You should see from the logging that both processes tried to create a model card file, and the output directory should contain two README.md files, one from process 0 and the other from process 1. Without the filename change, the two processes would have tried to write to the same README.md, causing a race in distributed training. I have seen one such crash when training on GPU clusters.
My logging:
```
[INFO|modelcard.py:456] 2021-12-19 19:59:04,324 >> Dropping the following result as it does not have all the necessary fields:
{'task': {'name': 'Causal Language Modeling', 'type': 'text-generation'}}
[WARNING|trainer.py:2643] 2021-12-19 19:59:04,325 >> Process 0 writes to README.md!
[WARNING|trainer.py:2643] 2021-12-19 19:59:04,334 >> Process 1 writes to README.md!
[2021-12-19 19:59:07,398] [INFO] [launch.py:160:main] Process 540006 exits successfully.
[2021-12-19 19:59:08,400] [INFO] [launch.py:160:main] Process 540007 exits successfully.
```
Two README.md files in my output directory:
```
(gptj) REDMOND.meiyang@GCRSANDBOX354:/tmp/model_output$ ll -t /tmp/model_output
total 23753748
drwxr-xr-x 7 xxxxxxx 4096 Dec 19 19:59 ./
-rw-r--r-- 1 xxxxxxx 1108 Dec 19 19:59 README.md1
-rw-r--r-- 1 xxxxxxx 1108 Dec 19 19:59 README.md0
```
## Expected behavior
trainer.create_model_card() should allow only process 0 to write so that only one README.md is written
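A minimal sketch of the kind of guard this would need (hypothetical; the real fix may use a different check), reusing the `process_index` attribute from the snippet above:

```python
def create_model_card(self, **kwargs):
    # Only the main process should write the model card in distributed training.
    if self.args.process_index != 0:
        return
    ...  # existing model-card creation and README.md writing logic
```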
| 12-20-2021 04:06:10 | 12-20-2021 04:06:10 | Thanks for flagging! Adding some logic for that in the PR mentioned above. |